Ain’t No Perfect. That’s why we need network protection.

If Apple can blow it, so too can the rest of us. That’s why a layered defensive approach is necessary.

When we talk about secure platforms, one name has always risen to the top: Apple.  Apple’s business model for iOS has repeatedly been demonstrated to provide superior security results over its competitors.  In fact, Apple’s security model is so good that governments feel threatened enough by it that we have had repeated calls for some form of back door into their phones and tablets.  CEO Tim Cook has repeatedly taken the stage to argue for such strong protection, and I personally have friends who take this stuff so seriously that they lose sleep over some of the design choices that are made.

And yet just this past week, we learned of a vulnerability that was as easy to exploit as typing “root” twice in order to gain privileged access.

Wait what?

Wait. What?

Ain’t no perfect.

If the best and the brightest of the industry can occasionally have a flub like this, what about the rest of us?  I recently installed a single sign-on package from Ping Identity, a company whose job it is to provide secure access.  This simple application, which generates cryptographically derived sequences of numbers to be used as one-time passwords, is over 70 megabytes and includes a complex Java runtime environment (JRE).  How many bugs remain hidden in those hundreds of thousands of lines of code?
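For perspective, the core logic of such a token generator fits in a couple dozen lines.  Here is a minimal sketch of a generic RFC 6238 (TOTP) generator in standard-library Python; this is an illustration of the technique, not Ping Identity’s actual implementation, and the secret shown is a made-up example:

```python
# Generic RFC 6238 TOTP generator -- an illustration only, NOT Ping
# Identity's code.  Everything here is standard-library Python.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period           # 30-second time step
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # e.g. "492039"; changes every 30 seconds
```

Everything else in those 70 megabytes is surface area for bugs.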

Now enter the Internet of Things, where devices that have not traditionally been connected to the network are built by manufacturers who have not spent decades becoming expert at security.  What sort of problems lurk in each and every one of those devices?

It is simply not possible to assure perfect security, and because computers are designed by imperfect humans, all these devices are imperfect.  Even devices that we believe are secure today will have vulnerabilities exposed in the future.  This is one of the reasons why the network needs to play a role.

The network stands between you and attackers, even when devices have vulnerabilities.  The network is best positioned to protect your devices when it knows what sort of access a device needs to operate properly.  Your washing machine is a good example: it needs to talk to only a handful of places.  But even for your laptop, where you might want to access whatever you want, whenever you want, through whatever system you wish, informing the network makes it possible to stop all communications that you don’t want.  To be sure, endpoint manufacturers should not rely solely on network protection.  Devices should be built with as much protection as is practicable and affordable.  The network provides an additional layer of protection.
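To make the idea concrete, here is a minimal sketch in Python.  The profile format and host names are invented for illustration, loosely in the spirit of the IETF’s Manufacturer Usage Description (MUD) work: the manufacturer declares the handful of destinations the washing machine legitimately needs, and the network turns that into deny-by-default rules.

```python
# Hypothetical device profile: what the manufacturer says the device
# needs.  The format and host names are invented for illustration.
washing_machine = {
    "device": "washing-machine",
    "allowed": [
        {"proto": "tcp", "host": "updates.example-vendor.com", "port": 443},
        {"proto": "tcp", "host": "service.example-vendor.com", "port": 443},
    ],
}

def to_firewall_rules(profile: dict) -> list:
    """Translate a device profile into deny-by-default firewall rules."""
    rules = [
        "allow {proto} from {dev} to {host} port {port}".format(
            dev=profile["device"], **entry)
        for entry in profile["allowed"]
    ]
    rules.append("deny all from {dev}".format(dev=profile["device"]))
    return rules

for rule in to_firewall_rules(washing_machine):
    print(rule)
```

Anything not on the list, including an attacker’s command-and-control server, simply never connects.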

Endpoint manufacturers thus far have not done a good job of making use of the network for protection.  That requires a serious rethink, and Apple is the poster child as to why: they are the best and the brightest, and they got it wrong this time.

Court Order to Apple to Unlock San Bernardino iPhone May Unlock Hackers

A judge’s order that Apple cooperate with federal authorities in the San Bernardino attack investigation may have serious unintended consequences.  There are no easy answers.  Once more, a broad dialog is required.

Previously I opined about how a dialog should occur between policy makers and the technical community over encryption.  The debate has moved on.  Now, the New York Times reports that federal magistrate judge Sheri Pym has ordered Apple to facilitate access to the iPhone of Syed Rizwan Farook, one of the San Bernardino attackers.  The Electronic Frontier Foundation is joining Apple in the fight against the order.

The San Bernardino fight raises both technical and policy questions.

Can Apple retrieve data off the phone?

Apparently not.  According to the order, Apple is required to install an operating system that would allow FBI technicians to make as many passcode attempts as they wish, without the device delaying them or deleting any information.  iPhones have the capability of deleting all personal information after a certain number of authentication failures.

You may ask: why doesn’t the judge just order Apple to create an operating system that doesn’t require a passcode?  According to Apple, the passcode used to access the device is itself a key-encrypting key (KEK): it decrypts the key that in turn decrypts the stored information.  Bypassing the passcode check therefore doesn’t get you any of the data.  The FBI needs the passcode itself.
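A toy sketch in Python may help illustrate the KEK idea.  It assumes a PBKDF2-style derivation and a trivial XOR “wrap”; a real iPhone entangles the passcode with a key fused into the hardware and uses proper key wrapping, which this deliberately does not model:

```python
# Toy KEK scheme: the passcode derives a key-encrypting key, which
# unwraps the data key.  Real designs use hardware entanglement and
# AES key wrap; this sketch only shows why bypassing the passcode
# *check* yields nothing.
import hashlib
import os

def derive_kek(passcode: bytes, salt: bytes) -> bytes:
    # Stretch the passcode into a 32-byte key-encrypting key.
    return hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000)

def xor_wrap(kek: bytes, key: bytes) -> bytes:
    # Toy "wrap": XOR with the KEK.  Real designs use AES key wrap.
    return bytes(a ^ b for a, b in zip(key, kek))

salt = os.urandom(16)
data_key = os.urandom(32)       # the key that actually encrypts user data

wrapped = xor_wrap(derive_kek(b"123456", salt), data_key)

# The right passcode recovers the data key...
assert xor_wrap(derive_kek(b"123456", salt), wrapped) == data_key
# ...but a wrong (or skipped) passcode leaves only an opaque blob:
assert xor_wrap(derive_kek(b"000000", salt), wrapped) != data_key
```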

What Apple can do is install a new operating system without the permission of the owner.  There are good reasons for them to have this ability.  For one, it is possible that a previous installation failed or that the copy of the operating system stored on a phone has been corrupted in some way.  If technicians couldn’t install a new version, the phone itself would become useless.  This actually happened to me personally.

The FBI can’t build such a version of the operating system on their own.  As is best practice, iPhones validate that all operating systems are properly digitally signed by Apple.  Only Apple has the keys necessary to sign images.

With a new version of the software on the iPhone 5c, FBI technicians would be able to effect a brute-force attack, trying all passcodes until they found the right one.  This won’t be effective on later model iPhones, because their hardware slows down queries, as detailed in this blog.
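Some rough arithmetic shows why those hardware delays matter.  The attempt rates below are made-up assumptions purely for illustration:

```python
# Worst-case time to try every numeric passcode, at an assumed rate.
def worst_case_hours(digits: int, attempts_per_second: float) -> float:
    return 10 ** digits / attempts_per_second / 3600.0

for digits in (4, 6):
    fast = worst_case_hours(digits, 12.0)   # assumed: no enforced delay
    slow = worst_case_hours(digits, 0.2)    # assumed: hardware throttling
    print(f"{digits} digits: {fast:,.1f} h unthrottled, "
          f"{slow:,.1f} h throttled")
```

Even at these toy rates, a six-digit passcode goes from about a day of work to a couple of months once the hardware throttles attempts; that is precisely the protection the ordered software would have to disable.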

Would such a capability amount to malware?

Kevin S. Bankston, director of New America’s Open Technology Institute, has claimed that the court is asking Apple to create malware for the FBI to use on Mr. Farook’s device.  There’s no single clean definition of malware, but a good test of whether the O/S the FBI is asking for is in fact malware is this: if this special copy of the O/S leaked from the FBI, could “bad guys” (for some value of “bad guys”) also use the software against the “good guys” (for some value of “good guys”)?  Apple has the ability to write into the O/S a check of the device’s serial number.  It would not be possible for bad guys to change that number without invalidating the signature the phone checks before loading.  Thus, by this definition, the software would not amount to malware.  But I wouldn’t call it goodware, either.
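Here is a sketch of what such a binding might look like, using an HMAC as a stand-in for Apple’s real public-key code signing; the key, serial numbers, and image contents are all invented:

```python
# Binding a signed image to one device: the serial number is covered
# by the signature, so changing it invalidates the image.  HMAC here
# is a stand-in for Apple's actual public-key signing scheme.
import hashlib
import hmac

SIGNING_KEY = b"apple-private-signing-key"   # hypothetical; only Apple has it

def sign_image(image: bytes, serial: bytes) -> bytes:
    # Sign the image together with the target device's serial number.
    return hmac.new(SIGNING_KEY, image + serial, hashlib.sha256).digest()

def device_boots(image: bytes, serial: bytes, sig: bytes) -> bool:
    # The boot check recomputes the signature over image + its own serial.
    return hmac.compare_digest(sign_image(image, serial), sig)

image = b"special-unlock-build"
sig = sign_image(image, b"SERIAL-OF-FAROOKS-PHONE")

print(device_boots(image, b"SERIAL-OF-FAROOKS-PHONE", sig))  # True
print(device_boots(image, b"ANY-OTHER-SERIAL", sig))         # False
```

Because no one else holds the signing key, the leaked image could not be re-signed for a different device.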

Is a back door capability desirable?

Unfortunately, here there are no easy answers, only trade-offs.  On the one hand, one must agree that the FBI’s investigation is impeded by the lack of access to Mr. Farook’s iPhone, and as other articles show, this case is neither the first of its kind, nor will it be the last.  As a result, agents may not be able to trace leads to other possible co-conspirators.  A Berkman Center study claims that law enforcement has sufficient access to metadata to determine those links, and there’s some reason to believe that.  When someone sends an email, the email servers between the sender and recipient keep a log that a message was sent from one person to another.  A record of phone calls is kept by the phone company.  But does Apple keep a record of FaceTime calls?  Why would they, if it meant a constant administrative burden, not to mention additional liability and embarrassment when (not if) they suffer a breach?  More to the point, having access to the content on the phone gives investigators clues as to what metadata to look for, based on what applications were installed and used on the phone.

If Apple had the capability to access Mr. Farook’s iPhone, the question would then turn to how that capability would be overseen.  The rules about how companies handle customer data vary from one jurisdiction to another.  In Europe, the Data Privacy Directive is quite explicit, for instance.  The rules are looser in the United States.  Many worry that if U.S. authorities have access to data, so will other countries, such as China or Russia.  Those worries are not unfounded: a technical capability knows nothing of politics.  Businesses fear that if they accede to U.S. demands, they must also accede to the demands of other countries if they wish to sell products and services there.  This means that there are billions of dollars at stake.  Worse, other countries may demand more intrusive mechanisms.  As bad as that is, and it’s very bad, there is worse.

The Scary Part

If governments start ordering Apple to insert or create malware, what other technology will also come under these rules?  It is plain as day that any rules that apply to Apple iPhones would also apply to Android-based cell phones.  But what about other devices, such as televisions?  How about refrigerators?  Cars?  Home security systems?  Baby monitors?  Children’s toys?  And this is where it gets really scary.  Apple has one of the most competent security organizations in the world.  They probably understand device protection better than most government clandestine agencies.  The same cannot be said for other device manufacturers.  If governments require those other manufacturers to provide back door access as well, it would be tantamount to handing the keys to all our homes to criminals.

To limit this sort of damage, there needs to be a broad consensus as to what sorts of devices governments should be able to access, under what circumstances that access should happen, and how that access will be overseen to avert abuse.  This is not an easy conversation.  That’s the conversation Apple CEO Tim Cook is seeking.  I agree.

It doesn’t matter that much that Apple and Google encrypt your phone

Apple’s and Google’s announcements that they will encrypt information on your phone are nice, but they won’t help much.  Most data is in the cloud these days, and your protections in the cloud are governed by the laws of numerous countries, almost all of which have quite large exceptions.

At the Internet Engineering Task Force we have taken a very strong stand that pervasive surveillance is a form of attack.  This is not a matter of lack of trust in any one organization, but rather a statement that if one organization can snoop on your information, others will be able to do so as well, and they may not be as nice as the NSA.  The worst you can say about the NSA is that a few analysts got carried away and spied on their partners.  With real criminals it’s another matter.  As we have seen with Target, other large department stores, and now JP Morgan, theirs is a business, and you are their commodity, in the form of private information and credit card numbers.

So now here comes Apple, saying that it will protect you from the government.  Do you need that protection?  According to Apple, you do, and so the company has chosen to make it difficult for the government to break into your phone.  Like all technology, this “advance” has its pluses and minuses.  To paraphrase a leader in the law enforcement community, everyone wants their privacy until it’s their child at risk.  However, in the United States, at least, we have a standard that the director of the FBI seems to have forgotten: it’s called probable cause.  It’s based on a pesky old amendment to the Constitution, which states:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

So what happens if one does have probable cause?  This is where things get interesting.  If one has probable cause to believe that there is an imminent threat to life or property and one can’t break into a phone, then something bad may happen.  Someone could get hurt, for instance.  Is that Apple’s fault?  And who has the right to interpret and enforce the Fourth Amendment?  If Apple has the right to do so, then do I have the right to interpret whichever laws I please?  On the other hand, Apple might respond that it has no responsibility to provide law enforcement anything, and that all it is doing is exercising the right of free speech to deliver a product that others use to communicate.  Cryptographer and professor Daniel Bernstein successfully argued that case in the 9th Circuit in the 1990s.  And he was right to do so, because, going back to the beginning of this polemic, even if you believe your government to be benevolent, if it can access your information, so can a bad guy, and there are far more bad guys out there.

Apple hasn’t made this change simply because it doesn’t like the government.  Rather, the company has recognized that for consumers to put private information into their phones, they must be able to trust that the device won’t be mishandled by others.  At the same time, Apple has said through its public statements that information that goes into its cloud is still subject to lawful seizure.  And this brings us back to the point that President Obama made at the beginning of the year: government risk isn’t the only form of risk.  The risk remains that private aggregators of information, like Apple and Google, or worse, Facebook, will continue to use your information for whatever purposes they see fit.  If you don’t think this is the case, ask yourself how much you pay for their services.

And since most of the data about you or that you own is either in the cloud or heading to the cloud, you might want to worry less about the phone or tablet, and more about where your data actually resides.  If you’re really concerned about governments, then you might also want to ask this question: which governments can seize your data?  The answer to that question is not straightforward, but there are three major factors:

  1. Where the data resides;
  2. Where you reside;
  3. Where the company that controls the data resides.

For instance, if you reside in the European Union, then nominally you should receive some protection from the Data Privacy Directive.  Any company that serves European residents has to respect the rights specified in that directive.  On the other hand, there are of course exceptions for law enforcement.  If a server resides in some random country, however, like the Duchy of Grand Fenwick, perhaps there is a secret law that states that operators must provide the government all sorts of data and must not tell anyone they are doing so.  That’s really not so far from what the U.S. government did with National Security Letters.

There’s a new service that Cisco has rolled out, called the Intercloud, that neatly addresses this matter for large enterprises, providing a framework to keep some data local and some data in the cloud, with the enterprise retaining some control over which is which.  Whether that benefit will extend to consumers is unclear.

In the end I conclude that people who are truly worried about their data need to consider what online services they use, including Facebook, this blog you are reading right now, Google, Amazon, or anyone else.  They also have to consider how, if at all, they are using the cloud.  I personally think they have to worry less about physical devices, and that, largely speaking, Apple’s announcement is but a modest improvement in overall security.  The same could be said for IETF efforts.

Android Phones: the next security threat?

Take it as an axiom that older software is less secure.  It’s not always true, but unless the code was mature at the time of its release, meaning it had already been fielded for years upon years, it’s certain to be true.  In an article in PC Magazine, Sara Yin finds that only 0.4% of Android users have up-to-date software, compared to the iPhone, where 90% of users have their phones up to date.

This represents a serious threat to cybersecurity, and it should have been a lesson that was already learned.  Friend and researcher Stefan Frei has examined in great detail the update rates for browsers, a primary vector for attacks.  The irony here is that the winning model he identified was that of Google’s Chrome.

What then was the failure with Android?  According to the PC Magazine article, the difference lies in who is responsible for updating the software.  Apple takes sole responsibility for the iPhone’s software.  There are a few parameters that the service provider can set, but other than that the providers are hands off.  Google, however, provides the software to mobile providers, and it is those mobile providers who must then update the phone.  Guess which model is more secure?  Having SPs in the loop makes the Internet less secure.  Google needs to reconsider its distribution model.

House Republicans Read the Constitution

That’s right.  On day 2 of their rule in the House, the AP reports that House Republicans will read the U.S. Constitution.  Better late than never, I suppose.  Of course I would like a reading comprehension test to follow.  Let’s hope that they don’t read the Constitution off an iPod/iPhone app.  When President Obama did his recess appointments last week, I wanted to review Article II (Powers of the President), and it was at that point I thought I should carry a pocket version.  I’ll leave out the names of the guilty, but one free version had truncated each of the articles, and another free version omitted Article II entirely.  That’s probably the version Congress would enjoy.  Fortunately the National Constitution Center in Philadelphia has done a very nice job on theirs.  Funnily enough, however, the Constitution is not accessible from their home page.  Here’s a link to Cornell that I like.