Pew should evolve its cybersecurity survey

Pew should evolve the questions they are asking and the advice they are giving based on how the threat environment is changing. But they should keep asking.

Last year, Pew Research surveyed just over 1,000 people to get a feel for how informed they are about cybersecurity.  That’s a great idea, because it tells us as a society how well consumers are able to defend themselves against common attacks.  Let’s consider some ways this survey could evolve, and how consumers can mitigate certain common risks.  Keep in mind that Pew conducted the survey in June of last year, in a fast-changing world.

Several of the questions related to phishing, Wi-Fi access points, and VPNs.  VPNs have been in the news recently because of the Trump administration’s and Congress’s backtracking on privacy protections.  While privacy invasion by service providers is a serious problem, accessing one’s bank at an open access point is probably considerably less so.  There are two reasons for this.  First, banks almost all make use of TLS to protect communications.  Attempts to fake bank sites by intercepting communications will, at the very least, produce a warning that browser manufacturers have made increasingly difficult to bypass.  Second, many financial institutions offer apps on mobile devices that take some care to validate that the user is actually talking to their service.  In this way, these apps represent a significant reduction in phishing risk.  Yes, the implication is that using a laptop with a web browser is a slightly riskier way to access your bank than the app it likely provides, and yes, there’s a question hiding there for Pew in its survey.
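For the curious, here is a rough sketch of the sort of check a banking app can layer on top of ordinary TLS validation: pin the certificate you expect and refuse to talk to anything else.  This is illustrative only; the host name and fingerprint below are hypothetical placeholders, and real apps often pin a key or an intermediate CA rather than a leaf certificate.

```python
# Illustrative sketch of certificate pinning; host and fingerprint are hypothetical.
import hashlib
import socket
import ssl

PINNED_HOST = "bank.example.com"   # hypothetical bank host
PINNED_SHA256 = "0f1e2d..."        # hypothetical SHA-256 fingerprint of the expected certificate

def connect_with_pin(host: str = PINNED_HOST, port: int = 443) -> ssl.SSLSocket:
    context = ssl.create_default_context()  # normal CA and hostname checks still apply
    sock = context.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
    der_cert = sock.getpeercert(binary_form=True)
    fingerprint = hashlib.sha256(der_cert).hexdigest()
    if fingerprint != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("server certificate does not match the pinned fingerprint")
    return sock
```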

Another question on the survey refers to password quality.  While this is something of a problem, there are two bigger problems hiding that consumers should understand:

  • Reuse of passwords.  Consumers often reuse passwords simply because it’s hard to remember many of them.  Password managers help, but many of them have had vulnerabilities themselves.  Why wouldn’t they be attacked?  It’s like the apocryphal Willie Sutton quote about robbing banks because that’s where the money is.  Still, with numerous break-ins, such as those that occurred at Yahoo! last year*, and the others that have surely gone unreported or unnoticed, reuse of passwords is a very dangerous practice.  One simple mitigation, checking whether a password has already turned up in a breach, is sketched after this list.
  • Aggregation of trust in smartphones.  As recent articles about U.S. Customs and Border Protection demanding access to smartphones demonstrate, access to many services such as Facebook, Twitter, and email can be obtained simply by gaining access to the phone.  Worse, because SMS and email are often used to reset passwords, access to the phone itself typically means easy access to most consumer services.
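As a concrete illustration of the first point, the sketch below asks the public Pwned Passwords range API (a service the post does not mention, so treat this as an aside) whether a password has already appeared in known breach data.  Only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
# Sketch: check a password against known breach data via the Pwned Passwords
# k-anonymity range API.  Only a 5-character hash prefix is sent over the wire.
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    print(times_password_breached("correct horse battery staple"))
```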

One final area that requires coverage: as the two followers of my blog are keenly aware, IoT presents a whole new class of risk that Pew has yet to address in its survey.

The risks I mention were not well understood even five years ago.  They are now, and have been for at least the last several years.  Pew should keep surveying, and keep informing everyone, but they should also evolve the questions they are asking and the advice they are giving.


* Those who show disdain toward Yahoo! may find they themselves live in an enormous glass house.

MUD sliding along

Your chance to try and chime in on Manufacturer Usage Descriptions, a way to protect IoT devices.

You may recall that I am working on a mechanism known as Manufacturer Usage Descriptions (MUD).  This is the system by which manufacturers can inform the network about how best to protect their products.  The draft for this work is now about to enter “working group last call” at the IETF.  This means that now would be a very good time for people to chime in with their views on the subject.

In the meantime, MUD Maker has also been coming along. This is a tool that generates manufacturer usage descriptions.  You can find the tool here.
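To give a flavor of the idea, here is a deliberately simplified sketch of what a manufacturer might declare about one of its devices.  The field names are invented for illustration only; the actual draft defines a YANG-based JSON format, which is what MUD Maker produces.

```python
# A deliberately simplified illustration of a MUD-style declaration.
# Field names are invented; the real draft specifies a YANG-based JSON format.
import json

lightbulb_policy = {
    "manufacturer": "Example Lightbulb Co.",   # hypothetical manufacturer
    "model": "LB-100",
    "allowed-communications": [
        # The bulb only ever needs to reach its controller and its update server.
        {"direction": "from-device", "to": "controller.example.com", "protocol": "tcp", "port": 443},
        {"direction": "from-device", "to": "updates.example.com", "protocol": "tcp", "port": 443},
    ],
    "everything-else": "deny",
}

print(json.dumps(lightbulb_policy, indent=2))
```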

MUD isn’t meant to be the whole enchilada of IoT security.  Other tools are needed to authenticate devices onto the network, and to securely update them.  And manufacturers have to take seriously not only their customers’ needs, but what risk they may impose on others, as Mirai reminded us.  Had MUD been around at the time, it’s possible that Mirai would not have happened.

Time to end the war on the network

When Edward Snowden disclosed the NSA’s activities, many people came to realize that network systems can be misused, even though that had always been the case; people simply woke up to what was possible.  What happened next was a concerted effort to protect data from what has become known as “pervasive surveillance”.  This included development of a new version of HTTP that is always encrypted and an easy way to get certificates.

However, when end nodes hide everything from the network, not only can the network not be used by the bad guys, it can no longer be used by the good guys either, whether to authorize appropriate communications or to identify attacks.  An example is spam.  Your mail server sits in front of you and can reject messages when they contain malware or are just garbage.  It does that by examining both the source of the message and the message itself.  Similarly, anyone who has read my writing about Things knows that the network needs just a little bit of information from the device in order to stop unwanted communications.
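As a rough sketch of what “examining both the source of the message and the message itself” can look like, the fragment below rejects mail from a hypothetical blocklisted network before even reading it, and otherwise applies a crude content check.  Real filters score many signals; this only shows where the network-visible information fits in.

```python
# Rough sketch: reject based on the source first, then on the content.
# The blocklist and pattern are hypothetical placeholders, not real data.
import re
from email.message import EmailMessage

BLOCKED_PREFIXES = ("203.0.113.", "198.51.100.")  # documentation-range addresses as stand-ins
SUSPICIOUS_SUBJECT = re.compile(r"(wire transfer|account suspended)", re.IGNORECASE)

def should_reject(source_ip: str, message: EmailMessage) -> bool:
    # Source check: no need to look at content at all for known-bad senders.
    if source_ip.startswith(BLOCKED_PREFIXES):
        return True
    # Content check: a real filter would weigh many signals, not one regex.
    return bool(SUSPICIOUS_SUBJECT.search(message.get("Subject", "")))
```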

I have written an Internet Draft that begins to establish a framework for when and how information should be shared, the idea being that information should be shared carefully and with a purpose, understanding that there are risks involved in doing so.  The attacks on Twitter and on krebsonsecurity.com are preventable, but preventing them requires us to recognize that end nodes are not infallible, and they never will be.  Neither, by the way, are network devices.  So long as all of these systems are designed and built by humans, that will be the case.  Each can help the other, in good measure, to protect the system as a whole.


Photo of Edward Snowden by Laura Poitras / Praxis Films, CC BY 3.0

Home wireless security challenges for Things

It’s hard – but not impossible – for Things to connect to a home network in some sort of automated fashion.

What’s the right way to connect a Thing to your home network?  Way back in the good old days, say last year, connecting a device to your home network was easy enough, because the device had a display and a touch screen or a keyboard.  Many Things have neither a display nor a keyboard, and some of the devices we are connecting may not be that accessible to the home owner in the first place.  Think attic fans, or even some light bulbs.  A means is needed first to tell these devices which network is the correct one to join, and then what the credentials for that network are.  To do any of this, there needs to be a way for the home router to communicate with the device securely and confidentially.  That means each end requires some secret.  Public key cryptography is perfect for this, and it is how things would work in the enterprise.

WPA2 Enterprise makes use of individual keys and a flexible means to authenticate individuals and devices.  It looks a little like this:

EAP over RADIUS

EAP stands for Extensible Authentication Protocol, and it is just that.  There are many different authentication mechanisms available with EAP.  One method, EAP-TLS, calls for both sides of the communication to transmit a certificate in an authentication transaction, each certificate containing an identity certified by someone.  Initially, a device may be certified by its manufacturer, but later it would use a certificate certified by the local network.
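EAP itself is too large for a short example, but the certificate exchange at the heart of EAP-TLS is the same mutual authentication that ordinary TLS can do.  The sketch below, with hypothetical file names, shows each side loading its own certificate and insisting on seeing the other’s.

```python
# Not EAP, but the same idea EAP-TLS rests on: each side presents a certificate
# and verifies the other's.  File names are hypothetical placeholders.
import ssl

# Network (authenticator) side: present the network's certificate and require
# one from the device, checked against trusted device CAs.
network_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
network_ctx.load_cert_chain(certfile="network.pem", keyfile="network.key")
network_ctx.load_verify_locations("trusted_device_cas.pem")
network_ctx.verify_mode = ssl.CERT_REQUIRED

# Device side: present the device certificate (initially one installed by the
# manufacturer) and verify the network against its own trust anchors.
device_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
device_ctx.load_cert_chain(certfile="device.pem", keyfile="device.key")
device_ctx.load_verify_locations("trusted_network_cas.pem")
```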

A QR code

One challenge is getting the network to learn the device’s certificate.  One simple method is to have an application tied to a camera scan a QR code that points to a URL containing a signed copy of the device’s identity or certificate.  For instance, the QR code to the right encodes this URL:

https://www.ofcourseimright.com/qr/2834298343404739274639374630463934

which in turn gets you a certificate.  The next challenge is whether the device should trust the network.  In the enterprise, a new approach is being developed known as Bootstrapping Remote Secure Key Infrastructures (BRSKI), sometimes pronounced “brewski”.  In this case the manufacturer tells the device that the network is the correct one to join, essentially by providing the device with the network’s operational trust anchor.  This allows the device to validate the network’s certificate.
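Putting the QR step together: once the code is scanned, the onboarding application just has to fetch the URL it encodes and parse what comes back.  The sketch below uses the example URL from above and assumes, purely for illustration, that the response is a PEM-encoded certificate.

```python
# Sketch of the QR-code onboarding step: fetch the URL the code encodes and
# parse the certificate found there.  Assumes a PEM-encoded certificate.
import urllib.request
from cryptography import x509

DEVICE_CERT_URL = "https://www.ofcourseimright.com/qr/2834298343404739274639374630463934"

def fetch_device_certificate(url: str = DEVICE_CERT_URL) -> x509.Certificate:
    with urllib.request.urlopen(url) as response:
        pem_data = response.read()
    return x509.load_pem_x509_certificate(pem_data)

cert = fetch_device_certificate()
print(cert.subject, cert.not_valid_after)
```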

That’s something of a tall order even in the enterprise, but one worth aiming for.  If the home can leverage a service offered either by a service provider or by a newfangled home-router company, if THEY can authenticate the home, and if the manufacturer can authenticate them, then we have ourselves a ball game.  More work is needed to get all the elements in place.

Let’s not blame Yahoo! for a difficult policy problem

Many in the tech community are upset over reports from The New York Times and others that Yahoo! responded to an order issued by the Foreign Intelligence Surveillance Court (FISC) to search across their entire account base for specific “signatures” of people believed to be terrorists.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to have the ability to scan incoming messages for spam and malware, for instance.  What’s more, we are all the better for this sort of capability.  Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of that.  Much of it can be rejected without Yahoo! having to look at the content, just by examining the source IP address of the device attempting to send Yahoo! mail, but in all likelihood they do look at some content, as many systems do.  In fact, one of the most popular open-source systems of the early days, SpamAssassin, did just this.  The challenge from a technical perspective is to implement such a mechanism without the mechanism itself becoming a large target.

If the government asking for certain messages sounds creepy, we have to ask what a signature is.  A signature normally refers to characteristics of a communication that identify either its source or some quality it has.  For instance, viruses all have signatures.  In this case, the claim is that terrorists communicated in a certain way such that they could be identified.  According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*.  That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is.  Yahoo! has denied having a capability to scan across all messages in their system, but beyond that not enough is public to know what they would have done.  It may well not have been reasonable to search specific accounts, because one can easily create an account, and the terrorists may have many.  For the government to publicly reveal either the probable cause or the signature would be tantamount to alerting terrorists that they are in fact under investigation, and that they can be tracked.

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form.  The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases the public has a view into whether such abuse is actually occurring.  Many have complained about the lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries would want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo’s own transparency report demonstrates.

In short, we are left with a set of difficult compromises that pit the gathering of intelligence on terrorists and other criminals against the risk of government abuse.  That’s not Yahoo!’s fault.  This is a hard problem that requires thoughtful consideration of these trade-offs, and the timing is right to think about it: once again, the Foreign Intelligence Surveillance Act (FISA) will be up for reauthorization in Congress next year.  And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility of protecting its citizens and residents, and that Yahoo! is trying to be a good citizen by looking at each individual request on its merits and in accordance with relevant laws.


* No I don’t know the difference between “unique” and “highly unique” either.