It doesn’t matter that much that Apple and Google encrypt your phone

Apple’s and Google’s announcements that they will encrypt information on your phone are welcome, but they won’t help much. Most data lives in the cloud these days, and your protections in the cloud are governed by the laws of numerous countries, almost all of which carve out quite large exceptions.

At the Internet Engineering Task Force we have taken a very strong stand that pervasive surveillance is a form of attack.  This is not a matter of distrusting any one organization, but rather a statement that if one organization can snoop on your information, others will be able to do so as well, and they may not be as nice as the NSA.  The worst you can say about the NSA is that a few analysts got carried away and spied on their partners.  With real criminals it’s another matter.  As we have seen with Target, other large department stores, and now JP Morgan, theirs is a business, and you are their commodity, in the form of private information and credit card numbers.

So now here comes Apple, saying that it will protect you from the government.  Do you have a right to keep the government out of your phone?  According to Apple, you do, as it has chosen to make it difficult for the government to break into your phone.  Like all technology, this “advance” has its pluses and minuses.  To paraphrase a leader in the law enforcement community, everyone wants their privacy until it’s their child at risk.  However, in the United States, at least, we have a standard that the director of the FBI seems to have forgotten: it’s called probable cause.  It’s based on a pesky old amendment to the Constitution, which states:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

So what happens if one does have probable cause?  This is where things get interesting.  If law enforcement has probable cause to believe that there is an imminent threat to life or property but can’t break into a phone, then something bad may happen.  Someone could get hurt, for instance.  Is that Apple’s fault?  And who has the right to interpret and enforce the Fourth Amendment?  If Apple has the right to do so, then do I have the right to decide which laws I will obey?  On the other hand, Apple might respond that it has no obligation to provide law enforcement anything, and that all it is doing is exercising its right of free speech by delivering a product that others use to communicate.  Cryptographer and professor Daniel Bernstein successfully argued this case in the 9th Circuit in the 1990s.  And he was right to do so, because, going back to the beginning of this polemic, even if you believe your government to be benevolent, if it can access your information, so can a bad guy, and there are far more bad guys out there.

Apple hasn’t made this change simply because it doesn’t like the government.  Rather, the company has recognized that for consumers to put private information on their phones, they must trust the device not to be mishandled by others.  At the same time, Apple has said through its public statements that information that goes into its cloud is still subject to lawful seizure.  And this brings us back to the point that President Obama made at the beginning of the year: government risk isn’t the only form of risk.  The risk remains that private aggregators of information – like Apple and Google, or worse, Facebook – will continue to use your information for whatever purposes they see fit.  If you don’t think this is the case, ask how much you pay for their services.

And since most of the data about you or that you own is either in the cloud or heading to the cloud, you might want to worry less about the phone or tablet, and more about where your data actually resides.  If you’re really concerned about governments, then you might also want to ask this question: which governments can seize your data?  The answer to that question is not straightforward, but there are three major factors:

  1. Where the data resides;
  2. Where you reside;
  3. Where the company that controls the data resides.

For instance, if you reside in the European Union, then nominally you should receive some protection from the Data Protection Directive.  Any company that serves European residents has to respect the rights specified in it.  On the other hand, there are of course exceptions for law enforcement.  If a server resides in some random country, however, like the Duchy of Grand Fenwick, perhaps there is a secret law that states that operators must provide the government all sorts of data and must not tell anyone they are doing so.  That’s really not so far from what the U.S. government did with National Security Letters.

Cisco has rolled out a new service, called the Intercloud, that neatly addresses this matter for large enterprises, providing a framework to keep some data local and some data in the cloud, with the enterprise retaining some control over which is which.  Whether that benefit will extend to consumers is unclear.

In the end I conclude that people who are truly worried about their data need to consider what online services they use, including Facebook, this blog you are reading right now, Google, Amazon, or anyone else.  They also have to consider how, if at all, they are using the cloud.  I personally think they have to worry less about physical devices, and that, largely speaking, Apple’s announcement is but a modest improvement in overall security.  The same could be said for IETF efforts.

Web (in)Security and What Can Be Done

We all like to think that web security is perfect, but we know better.  You know about spam, phishing, and all manner of malware.  You probably run a virus scanner on your computer.  But what you don’t expect, and shouldn’t expect, is that the core of our security system would have a flaw.  It does, and has had one from the beginning.  What’s more, it’s a known flaw.

How does your browser decide to trust a site, or to show that lovely lock icon (and perhaps a green URL bar) when your communication is both encrypted and verified to terminate at a specific endpoint?  The simple answer is that your browser provider – Microsoft, Mozilla, Apple, or Google – has made a decision on your behalf that, at least as initially configured, your browser will trust a certain set of authorities – certificate authorities (CAs) – who in turn validate others.

One such certificate authority got hacked.  Badly.  And because they were trusted by your browser, so might you have been.  Here’s how it works.

  • When you access a URL that begins with “https”, the site sends a certificate signed by one of the trusted CAs, saying, in effect, “yes, I agree that this is google.com” (for example).  If someone gets in between you and Google, they won’t have the private key associated with that certificate, and so they won’t be able to validate to your browser.
  • If someone breaks into a CA and obtains a certificate for “google.com” (again, for example), and then gets between you and the real Google, they will be able to masquerade as Google.  It doesn’t matter which CA they break into, as long as your browser trusts it.  Google needn’t have any relationship with that CA.
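The weakness in the second bullet comes from the trust store being flat: every CA in it can vouch for every site.  Here is a toy model of that logic in Python – the CA names and the check itself are deliberately simplified illustrations, not real browser code or real cryptography:

```python
# Toy model of a browser's flat CA trust store (names are illustrative).
TRUSTED_CAS = {"HonestCA", "DigiNotar", "OtherCA"}  # shipped with the browser

def browser_accepts(cert_subject, cert_issuer, requested_host):
    # The browser checks only that (a) the issuer is some CA in its trust
    # store and (b) the certificate names the host being visited.  It has
    # no idea which CA the site actually does business with.
    return cert_issuer in TRUSTED_CAS and cert_subject == requested_host

# A legitimate certificate is accepted:
assert browser_accepts("google.com", "HonestCA", "google.com")
# So is one minted by a compromised CA the site never used, because the
# browser trusts every CA in the store equally:
assert browser_accepts("google.com", "DigiNotar", "google.com")
# Only an issuer outside the store is rejected:
assert not browser_accepts("google.com", "UntrustedCA", "google.com")
```

The model makes the point of the bullet concrete: compromising any one trusted CA is enough to impersonate any site.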

This is what happened with DigiNotar.  Not only did they get hacked, but they didn’t notice.  They didn’t have sufficient controls in place to even spot the attack.  That they should have had.

But now there’s something else we can do.  In the Internet Engineering Task Force (IETF), a few folks led by a gentleman by the name of Paul Hoffman have developed an approach whereby sites like Google can effectively register which certificates are valid for them in a separate authority that we already largely trust: the Domain Name System (DNS).  You use DNS to convert site names like www.google.com into IP addresses.

The group working on this is called “dane” (DNS-Based Authentication of Named Entities).  Had the DANE mechanism been in place in browsers, the attack on DigiNotar and Google would have failed, even if Google had been a customer of DigiNotar (which it wasn’t).
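For the curious, a DANE association is published as a TLSA record in DNS (specified in RFC 6698).  The name and hash below are purely illustrative, not a real deployment:

```
; "3 1 1" = match the site's own end-entity certificate,
; by the SHA-256 digest of its public key
_443._tcp.www.example.com. IN TLSA 3 1 1 (
    8cb0fc6c527506a053f4f14c8464bebbd6dede2738d11468dd953d7d6a3021f1 )
```

The record is signed with DNSSEC, so a browser that checks it knows which certificate the domain owner actually intended to use, independent of which CA signed it.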

When we speak of security we always discuss defense in depth.  That is, never rely on exactly one mechanism to protect you, because at some point it will surely break.  In this case, the attacker needed to (a) compromise the CA and (b) get in between the service and the end user in order to succeed.  Had DANE been in place, then in addition to (a) and (b), the attacker would also have had to compromise Google’s DNS for the attack to succeed.  That’s likely even harder than compromising a CA.
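To make the layering concrete, here is a toy Python sketch – with entirely made-up CA names, certificate bytes, and DNS contents – of a client that requires both the classic CA check and a DANE-style DNS pin before trusting a connection:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Layer 1: the browser's CA trust store (illustrative names).
TRUSTED_CAS = {"HonestCA", "DigiNotar"}

# Layer 2: a DANE-style pin the site operator publishes in (DNSSEC-signed)
# DNS: the hash of the certificate the site really uses.
DNS_TLSA = {"google.com": sha256_hex(b"real-google-cert")}

def accepts(host: str, cert_bytes: bytes, issuer: str) -> bool:
    ca_ok = issuer in TRUSTED_CAS                       # any trusted CA will do
    pin_ok = DNS_TLSA.get(host) == sha256_hex(cert_bytes)  # must match the DNS pin
    return ca_ok and pin_ok                             # defense in depth: both

# The genuine certificate passes both layers:
assert accepts("google.com", b"real-google-cert", "HonestCA")
# A forged certificate from the hacked CA passes the CA check
# but fails the DNS pin, so the connection is rejected:
assert not accepts("google.com", b"forged-cert", "DigiNotar")
```

Under this model, compromising a CA alone no longer suffices; the attacker must also control the victim site’s DNS, which is exactly the extra hurdle described above.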

DANE has another potential benefit: in the long run, it may get browsers completely out of the business of telling you whom to trust, or at least severely limit that trust.

This attack also demonstrates that as threats evolve, our response to those threats evolves.  Here we understood the threat, but just didn’t get the work done fast enough before a CA was compromised.  I still call this a win, as I think we can expect to see DANE deployed even faster than we expected before the attack.

Android Phones: the Next Security Threat?

Take it as an axiom that older software is less secure.  It’s not always true, but if the code wasn’t mature at the time of its release – meaning it hadn’t already been fielded for years upon years – it’s almost certain to be true.  In an article in PC Magazine, Sara Yin finds that only 0.4% of Android users have up-to-date software, compared to the iPhone, where 90% of users have their phones up to date.

This represents a serious threat to cybersecurity, and it should have been a lesson already learned.  My friend, the researcher Stefan Frei, has examined in great detail the update rates of browsers, a primary vessel for attacks.  The irony here is that the winning model he identified was that of Google’s Chrome.

What, then, was the failure with Android?  According to the PC Magazine article, it lies in who is responsible for updating the software.  Apple takes sole responsibility for the iPhone’s software.  There are a few parameters that the service provider can set, but other than that they’re hands off.  Google, however, provides the software to mobile providers, and it is those mobile providers who must then update the phone.  Guess which model is more secure?  Having service providers in the loop makes the Internet more insecure.  Google needs to reconsider its distribution model.

Can the Industry Stop Break-ins on Facebook?

After my last post, a reasonable question is whether we in the industry have been goofing off on the job.  After all, how could it be that someone got their account broken into?  Everyone knows that passwords are a weak form of authentication.  Most enterprises won’t allow them for employee access, and we would string a bank CSO up by his or her toenails if a bank used only passwords to protect access to your information.  Banks use, at a bare minimum, RSA one-time-password tokens or perhaps smart cards.  So why are the rules different for Facebook?

Facebook would say, I’m sure, that it does not hold the keys to your financial data.  But that may not be true.  Have you entered credit card details into Facebook?  Then maybe it does hold the keys to your financial data.  And even if you haven’t entered any financial data into Facebook, are you using the same password for Facebook that you use for your financial institution?  Many people are, and that is the problem.
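Why is reuse the problem?  Because the attacker doesn’t need to touch your bank at all; they only need to crack the weakest site where you used that password.  A sketch, with an invented password and a deliberately weak unsalted hash standing in for a low-security site’s database:

```python
import hashlib

# A low-security site is breached and its unsalted MD5 password hashes leak
# (the password "hunter2" and the site are purely illustrative).
leaked_hash = hashlib.md5(b"hunter2").hexdigest()

# The attacker cracks the hash offline with a simple dictionary attack:
wordlist = [b"password", b"letmein", b"123456", b"hunter2"]
cracked = next(w for w in wordlist if hashlib.md5(w).hexdigest() == leaked_hash)

# If the victim reused that password at their bank, the attacker now
# holds the bank login too, without ever attacking the bank:
bank_password = b"hunter2"  # same password, different (better-defended) site
assert cracked == bank_password
```

The bank’s own defenses never come into play; the reused password turns the weakest site you use into the front door for all the others.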

Passwords have become, for want of a better term, an attractive nuisance.  It’s not that the concept itself is terrible, but passwords are increasingly difficult to secure as the number of accounts people hold continues to skyrocket.  Yes, the problem is getting worse, not better.  My favorite example is the latest update to the Wall Street Journal iPhone app, whose upgrade description says, “Application Enhancements to Add Free Registration & the Ability for Subscribers and Users to Login”.  What a lovely enhancement.  Right up there with enhancing the keyboard I am typing on to give me electric shocks.

Facebook is at least making a feeble attempt to get around this problem by offering OpenID access in some limited way (I tried using it from this site, and Facebook is broken, even though I can get into all sorts of other sites, including LiveJournal).  Still, it probably works for you if you are a Google, Yahoo!, or MySpace user, though for better or worse those sites themselves do not accept OpenID.  (The better part is that no one can simply break into one account and gain access to all of these other sites.  The worse part is that if you have some other OpenID, you can’t use it with these sites.)

OpenID has lots of problems, the biggest of which is that there is no standard privileged interface to the user.  This is something that Google, Yahoo!, and MySpace might actually like, because it means that they provide the interface they want to provide.  Unfortunately, programs, or more precisely the authors of programs, might find that a little irritating, since OpenID is so closely tied to the web that it is difficult to use for other applications (like email).

SAML and Higgins to the rescue?  OAUTH?  Blech.