Let’s not blame Yahoo! for a difficult policy problem

Many in the tech community are upset over reports from The New York Times and others that Yahoo! responded to an order issued by the Foreign Intelligence Surveillance Court (FISC) to search across their entire account base for a specific “signature” of people believed to be terrorists.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to have the ability to scan incoming messages for spam and malware, for instance.  What’s more, we are all the better for this sort of capability.  Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of it.  Much of it can be rejected without Yahoo! having to look at the content at all, simply by examining the source IP address of the device attempting to send mail to Yahoo!, but in all likelihood they do look at some content as well, as many systems do.  In fact, one of the most popular open source systems of the early days, SpamAssassin, did just this.  The challenge from a technical perspective is to implement such a mechanism without the mechanism itself presenting a large attack surface.
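
To make the mechanics concrete, here is a minimal sketch, not Yahoo!’s or SpamAssassin’s actual code, of the two-stage filtering described above: reject mail from known-bad source IP addresses before looking at content, then score the content against known spam and malware signatures.  The blocklist, the signatures, and the threshold are hypothetical placeholders.

```python
# Hypothetical two-stage mail filter: an IP blocklist check followed by a
# signature-based content scan. None of these values come from a real system.

BLOCKLISTED_IPS = {"203.0.113.7", "198.51.100.23"}   # example addresses (RFC 5737 ranges)

# Each signature is a substring known to appear in a family of spam or malware,
# with a weight; a message whose total score meets the threshold is rejected.
SIGNATURES = {
    "claim your prize now": 3.0,
    "attached invoice .exe": 5.0,
}
SPAM_THRESHOLD = 4.0


def accept_message(source_ip: str, body: str) -> bool:
    """Return True if the message should be delivered to the user's inbox."""
    # Stage 1: cheap rejection based on the connecting IP address alone,
    # without ever looking at the message content.
    if source_ip in BLOCKLISTED_IPS:
        return False

    # Stage 2: content scan against known signatures.
    lowered = body.lower()
    score = sum(weight for sig, weight in SIGNATURES.items() if sig in lowered)
    return score < SPAM_THRESHOLD


if __name__ == "__main__":
    print(accept_message("192.0.2.1", "Hi, see you at lunch."))        # True
    print(accept_message("203.0.113.7", "Hi, see you at lunch."))      # False: blocked IP
    print(accept_message("192.0.2.1", "Claim your prize now!"))        # True: below threshold
    print(accept_message("192.0.2.1",
                         "Claim your prize now! Attached invoice .exe"))  # False: scores 8.0
```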

If the government asking for certain messages sounds creepy, we have to ask what a signature is.  A signature normally refers to characteristics of a communication that either identify its source or indicate that it has some quality.  For instance, viruses all have signatures.  In this case, what is claimed is that terrorists communicated in a certain way such that they could be identified.  According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*.  That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is.  Yahoo! has denied having a capability to scan across all messages in their system, but beyond that not enough is public to know what they would have done.  It may well not have been reasonable to search only specific accounts, because one can easily create an account, and the terrorists may have many.  The government publicly revealing either the probable cause or the signature would be tantamount to alerting terrorists that they are in fact under investigation, and that they can be tracked.

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form.  The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases the public has a view into whether that abuse is actually occurring.  Many have complained about a lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries would want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo!’s own transparency report demonstrates.

In short, we are left with a set of difficult compromises that pit the gathering of intelligence on terrorists and other criminals against the risk of government abuse.  That’s not Yahoo!’s fault.  This is a hard problem that requires thoughtful consideration of these trade-offs, and the timing is right to think about this: the Foreign Intelligence Surveillance Act (FISA) will once again be up for reauthorization in Congress next year.  And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility to protect its citizens and residents, and that Yahoo! is trying to be a good citizen by looking at each individual request on its merits and in accordance with relevant laws.


* No I don’t know the difference between “unique” and “highly unique” either.


Comey and Adult Conversations About Encryption

What does an adult conversation over encryption look like? To start we need to understand what Mr. Comey is seeking. Then we can talk about the risks.

AP and others are reporting that FBI director James Comey has asked for “an adult conversation about encryption.” As I’ve previously opined, we need just such a dialog between policy makers, the technical community, and the law enforcement community, so that the technical community has a clear understanding of what it is that investigators really want, and policy makers and law enforcement have a clear understanding of the limits of technology.  At the moment, however, it cannot be about give and take.  Just as no one can legislate that π = 3, no one can legislate that lawful intercept can be done in a perfectly secure way.  Mr. Comey’s comments do not quite seem to grasp that notion.  At the same time, some in the technical community do not want to allow policy makers to even evaluate the risks for themselves.  We have recently seen stories of the government stockpiling malware kits.  This should not be too surprising, given that at the moment there are few alternatives for accomplishing their goals (whatever they are).

So where to start?  It would be helpful to have from Mr. Comey and friends a concise statement as to what access they believe they need, and what problem they think they are solving with that access.  Throughout All of This, such a statement has been conspicuous by its absence.  In its place we have seen sweeping assertions about grand bargains involving the Fourth Amendment.  We need to be specific about what the actual demand from the lawful intercept community is before we can have those sorts of debates.  Does Mr. Comey want to be able to crack traffic on the wire?  Does he want access to end user devices?  Does he want access to data that has been encrypted in the cloud?  It would be helpful for him to clarify.

Once we have such a statement, the technical community can provide a view as to the risks of the various mechanisms that might accomplish those policy goals.  We’ve assuredly been around the block on this a few times.  The law enforcement community will never obtain a perfect solution, and they may not need perfection.  So what’s good enough for them, and what is safe enough for the Internet?  How can we implement such a mechanism in a global context?  And how would the mechanism be abused by adversaries?

The devil is assuredly in the details.

How Much Do You Value Privacy?

People in my company travel a lot, and they like to have their itineraries easily accessible.  My wife wants to know when and where I will be, and that’s not at all unreasonable.  So, how best to process and share that information?  There are now several services that attempt to help you manage it.  One of those services, TripIt.Com, will take an email message as input, organize your itinerary, generate appropriate calendar events, and share that information with those you authorize.

The service is based in the U.S., and might actually share information with those you do not authorize, to market something to you, or worse.  If the information is stolen, as was the case with the hotel travel information we discussed recently, it can be resold to burglars who know when you’re away.  That can be particularly nasty if in fact only you are away, and the rest of your family is not.

But before we panic and refuse to let any of this information out, one should ask just how secure that information is.  As it happens, travel itineraries are some of the least secure pieces of information you can possibly have.  All a thief really needs is an old ticket stub that has one’s frequent flyer number, and we’re off to the races.  In one case, it was shown that with this information a thief could even book a ticket for someone else.

So how, then, do we evaluate the risk of using a service like TripIt?  First of all, TripIt does not use any form of encryption or certificate trust chain to verify its identity.  That means that all of your itinerary details go over the network in the clear.  But as it turns out, you’ve probably already transmitted all of those details to them in the clear by sending the itinerary in email.  Having had a quick look at their mail servers, I can report that they do not in fact offer STARTTLS, which would let sending servers encrypt the connection and verify whom they are talking to; not that you as a user can easily determine this in advance.
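
For the curious, here is a minimal sketch of how one might perform that quick check oneself, using only Python’s standard library.  The hostname below is a hypothetical placeholder, not TripIt’s actual MX host, and note that many networks block outbound port 25.

```python
# Connect to a receiving mail server and report whether it advertises the
# STARTTLS extension in its EHLO response. This only checks advertisement,
# not certificate validity.
import smtplib


def advertises_starttls(mx_host: str, port: int = 25, timeout: float = 10.0) -> bool:
    """Return True if the SMTP server at mx_host offers STARTTLS."""
    with smtplib.SMTP(mx_host, port, timeout=timeout) as smtp:
        smtp.ehlo()                       # ask the server to list its extensions
        return smtp.has_extn("starttls")  # True if STARTTLS was among them


if __name__ == "__main__":
    # Replace with the MX host you actually want to test (placeholder shown).
    print(advertises_starttls("mx.example.com"))
```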

Some people might have stopped now, but others have more tolerance for risk.

Perhaps a bigger problem with TripIt is that neither its password change page nor its login page makes use of SSL.  That means that when you enter your password, the text of that password goes over the network in the clear, for all to see.  It also means that you cannot be sure that the server on the other end is actually TripIt’s.  To me this is a remarkable oversight.
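
This, at least, is something a user can check before signing up.  Here is a rough sketch, again with a placeholder URL rather than TripIt’s real login page, that fetches a page and reports whether its login form would submit over HTTPS.  It is a crude screen-scrape, not a full security audit.

```python
# Fetch a page and check whether every <form action=...> on it resolves to an
# https:// URL. If there is no form action, fall back to the page's own scheme.
import re
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


def login_form_uses_https(login_url: str) -> bool:
    """Return True if all form submissions from login_url would go over HTTPS."""
    with urlopen(login_url, timeout=10) as resp:
        final_url = resp.geturl()                      # after any redirects
        html = resp.read().decode("utf-8", "replace")
    actions = re.findall(r'<form[^>]+action=["\']([^"\']*)["\']', html, re.IGNORECASE)
    targets = [urljoin(final_url, a) for a in actions] or [final_url]
    return all(urlparse(t).scheme == "https" for t in targets)


if __name__ == "__main__":
    print(login_form_uses_https("http://www.example.com/login"))  # placeholder URL
```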

For all of these concerns, you still get the ability to generate an iCal calendar subscription as well as the ability to share all of this information with friends and family.  Is it worth it?  One answer is that it depends on whether you actually want to enter the information yourself, whether you care about security concerns, and whether you like using calendaring clients.  It also depends on what other services are available.

Another service that is available is Dopplr.  It also attempts to be a social networking site, not unlike LinkedIn.  Dopplr allows you to share your itineraries with other people, tells you about their upcoming trips (if they’re sharing with you), and lets you create an iCal subscription.

Dopplr also has some security problems, in that they do not use SSL to protect your password, nor do they use SSL for their main pages.  They do, however, support OpenID, an attempt to do away with site passwords entirely.  I’ll say more about OpenID in the future, but for now I’ll state simply that just because something is new does not make it better.  It may be better or worse.

And so there you have it.  Two services, both with very similar offerings, and both with almost the same privacy risks.  One of them, by the way, could distinguish itself by improving its privacy offering.  That would certainly win more of my business.