New Paris Cyber-Accord: Nice words. What comes next?

The accord and Macron’s words are a bit “aspirational”.

Recently France has taken the initiative to produce what it calls the Paris Call for Trust and Security in Cyberspace.  This Call has garnered the signatures of some 57 countries and several hundred companies and organizations (including that of my own employer).*  What President Macron and others have recognized is that there is a risk of both state and non-state actors interfering in the lives of everyday people, possibly causing them great harm.

Every day provides a new example of why protection of our institutions is necessary.  This video was made some time ago.  We’d like to think that security of our infrastructure has improved, but Marriott proved us wrong last week, with over half a billion customer records having been stolen.

The Paris Call seems to address itself to these sorts of civilian attacks, which to me is appropriate. In particular, it focuses on the following areas (I’m condensing just a bit):

  • Protection of critical infrastructure,
  • Protection of electoral processes (Gee, I wonder who that is aimed at),
  • IPR protection,
  • Tools development to prevent the spread of malware,
  • No hack-backs, where people attempt to take the offense as either a defense or a means of deterrence,
  • Acceptance of international norms of behavior.

The Call does not create or call for the creation of any new mechanism to pursue these points, but rather the use of existing mechanisms.  Instead, what we appear to be witnessing is the creation of a voting bloc inside existing multilateral and multi-stakeholder processes, as well as a non-binding commitment among the signatories themselves to pursue these principles.  It’s all motherhood and apple pie until we understand what the actual instantiation of these principles means.  Does it mean, for instance, an end of free software in order to protect content providers?  Will it require content publishers to actively protect all rights of copyright holders, even if those holders are unknown?

Also, should these principles apply equally to civilians and the military?  Let’s take for example the Stuxnet attack, where some state actor attacked Iran’s nuclear enrichment facility.  Should that attack have been prevented by these principles?  To what end?  Helping Iran gain an offensive nuclear capability?  If the choice was a cyberattack against a military installation versus a physical attack, where people would surely die, I’ll take the cyberattack any time.

There is another big topic that isn’t covered.  Right now governments are all struggling with how to handle cross-border law enforcement.  That is, if someone in Jurisdiction A hacks into or uses a computer in Jurisdiction B to attack a person in a third Jurisdiction C, who can reasonably ask Jurisdiction B for the data?  This is a massive topic that the Council of Europe has been attempting to address for years.  These are knotty issues, because of the limitations on the powers of each country relating to search and seizure.

In short, while this is nice text, it doesn’t seem to me to accomplish much on its own. 

It does seem to be a slap at Russia and China, two notably absent countries.  Three other notably absent countries are the U.S., Israel, and Iran.  Coincidence?  I think not.


*The views of my employer surely vary from my own today.

Addressing the Department Gap in IoT Security

People in departments outside of IT aren’t paid to understand IT security. In the world of IoT, we need to make it easy for those people to do the right thing.

So, Mr. IT professional, do you suffer from colleagues at work connecting all sorts of crap to your network that you’ve never heard of?  You’re not alone.  As more and more devices hit the network, maintaining control can prove challenging.  Here are your choices for dealing with miscreant devices:

  1. Prohibit them and enforce the prohibition by firing anyone who attaches an unauthorized device.
  2. Allow them and suffer.
  3. Prohibit them but not enforce the prohibition.
  4. Provide an onboarding and approval process.

A bunch of companies I work with generally aim for 1 and end up with 3.  A bunch of administrators recognize the situation and fit into 2.  Everyone I talk to wants to find a way to scale 4, but nobody has, as of yet.  What does 4 involve?  Today, it means an IT person researching a given device, determining what networking requirements it has, creating firewall rules, and some associated policies, and establishing an approval mechanism for a device to connect.
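That manual process in option 4 can at least be made consistent. Here’s a minimal sketch, in Python, of tracking a device through such an approval workflow; the state names and fields are hypothetical, mirroring the manual steps just described:

```python
from enum import Enum, auto

class DeviceState(Enum):
    REQUESTED = auto()   # someone asked to connect a device
    RESEARCHED = auto()  # IT has determined its networking requirements
    POLICIED = auto()    # firewall rules and policies have been drafted
    APPROVED = auto()    # the device may connect
    DENIED = auto()

class OnboardingRequest:
    # Legal transitions mirror the manual steps described above.
    _next = {
        DeviceState.REQUESTED: {DeviceState.RESEARCHED, DeviceState.DENIED},
        DeviceState.RESEARCHED: {DeviceState.POLICIED, DeviceState.DENIED},
        DeviceState.POLICIED: {DeviceState.APPROVED, DeviceState.DENIED},
    }

    def __init__(self, mac, description):
        self.mac = mac
        self.description = description
        self.state = DeviceState.REQUESTED

    def advance(self, new_state):
        if new_state not in self._next.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state.name} to {new_state.name}")
        self.state = new_state

req = OnboardingRequest("00:11:22:33:44:55", "breakroom vending machine")
req.advance(DeviceState.RESEARCHED)
req.advance(DeviceState.POLICIED)
req.advance(DeviceState.APPROVED)
```

The point of the state machine is that nothing reaches APPROVED without passing through the research and policy steps first.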

This problem is exacerbated by the fact that many different enterprise departments have wide and varied needs, and the network is critical to many of them.  Furthermore, very few of those departments report through the chief information officer, and the concerns of chief information security officers often don’t get the attention they deserve.

I would claim that the problem is that incentives are not well aligned; that assumes people in other departments are even aware of the IT person’s concerns in the first place, and often they are not.  The person responsible for providing vending machines just wants to get the vending machines hooked up, while the person in charge of facilities just wants the lights to come on and the temperature to be correct.

What we know from hard experience is that the best way to address this sort of misalignment is to make it easy for everyone to do the right thing. What, then, is the right thing?

Prerequisites

It has been important pretty much forever for enterprises to be able to maintain an inventory of devices that connect to their networks.  This can be tied into the DHCP infrastructure or to the device authentication infrastructure.  Many such systems exist, the simplest of which is Active Directory.  Some are passive and snoop the network.  The key point is simply this: you can’t authorize a system if you can’t remember it.  In order to remember it, the device itself needs to have some sort of unique identifier.  In the simplest case, this is a MAC address.
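As a sketch of the idea, here’s a toy inventory keyed by MAC address (the schema is hypothetical; a real deployment would feed off DHCP logs, 802.1X accounting, or a product like Active Directory):

```python
import re

MAC_RE = re.compile(r"^([0-9a-f]{2}:){5}[0-9a-f]{2}$")

class Inventory:
    """Toy inventory: the unique identifier is the MAC address."""

    def __init__(self):
        self._devices = {}

    def remember(self, mac, owner, description):
        mac = mac.lower()                       # normalize before storing
        if not MAC_RE.match(mac):
            raise ValueError(f"not a MAC address: {mac}")
        self._devices[mac] = {"owner": owner, "description": description}

    def is_known(self, mac):
        return mac.lower() in self._devices

inv = Inventory()
inv.remember("00:1B:44:11:3A:B7", "facilities", "lobby thermostat")
print(inv.is_known("00:1b:44:11:3a:b7"))  # True
```

Note that MAC addresses can be spoofed; they are merely the simplest identifier to start from, not a strong one.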

Ask device manufacturers to help

Manufacturers need to make your life easier by providing you a description of what the device’s communication requirements are.  The best way to do this is with Manufacturer Usage Descriptions (MUD).  When MUD is used, your network management system can retrieve a policy recommendation from the manufacturer, and then you can approve, modify, or refuse it.  By doing this, you don’t have to go searching all over random web sites.
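For a flavor of what that looks like, here’s a heavily simplified sketch. Real MUD files (RFC 8520) are JSON documents retrieved from a manufacturer-supplied MUD URL and signed; this toy example only shows an administrator pulling out the pieces to review, with the device and ACL names invented:

```python
import json

# A trimmed-down stand-in for a MUD file fetched from the manufacturer.
mud_doc = json.loads("""
{
  "ietf-mud:mud": {
    "mud-version": 1,
    "systeminfo": "ExampleCo connected light bulb",
    "from-device-policy": {
      "access-lists": {"access-list": [{"name": "from-bulb"}]}
    }
  }
}
""")

mud = mud_doc["ietf-mud:mud"]
print(mud["systeminfo"])  # what the manufacturer says the device is

# The administrator reviews each referenced access list, then approves,
# modifies, or refuses the policy before it reaches enforcement points.
for acl in mud["from-device-policy"]["access-lists"]["access-list"]:
    print("review ACL:", acl["name"])
```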

Have a simple and accessible user interface for people to use

Once this is in place, you have a system that encourages the right thing to happen, without other departments having to do anything other than identify the devices they want to connect.  That could be as simple as snapping a picture of a QR code or entering a serial number.  The easier we can make it for people who know nothing about networking, the better all our lives will be.

Should Uber require a permit for testing?

The Wall Street Journal and others are reporting on the ongoing battle between Uber and state and local governments.  This time it’s their self-driving car.  Uber announced last week that they would not bother to seek a permit to test their car, claiming that the law did not require one.  The conflict took on a new dimension last week when one of Uber’s test vehicles ran a red light.

Is Uber right in not wanting to seek a permit?  Both production and operation of vehicles in nearly all markets are highly regulated.  That’s because auto accidents are a leading cause of death in the United States and elsewhere.  The good news is that that number is falling.  In part that’s due to regulation, and in part it’s due to civil liability laws.  I’m confident that Uber doesn’t want to hurt people, and that their interest is undoubtedly to put out a safe service so that their reputation doesn’t suffer and their business thrives.  But the rush to market is sometimes too alluring.  With the pace of technology being what it is, Uber and others could flood the streets with unsafe vehicles, possibly well beyond their ability to pay out damages.  That’s when regulations are required.

There are a few hidden points in all of this:

  • As governments consider what to do about regulating the Internet of Things, they should recognize that much of the Internet of Things is already regulated.  California did the right thing by incrementally extending the California Vehicle Code to cover self-driving vehicles, rather than coming up with sweeping new regulations.  Regulations already exist for many other industries, including trains, planes, automobiles, healthcare, and electrical plants.
  • We do not yet have a full understanding of the risks involved with self-driving cars, and there are probably many parts of the vehicle code that require revision.  By taking the incremental approach, we’ve learned, for instance, that self-driving cars can follow the law and yet cause problems for some bicyclists.
  • IoT regulation is today based on traditionally regulated markets.  This doesn’t take into account the full nature of the Internet, and what externalities people are exposed to as new products rapidly hit the markets.  This means, to me, that we will likely need some form of regulation over time.  There is not yet a regulation that would have prevented the Mirai attack.  Rather than fight all regulation as Uber does, it may be better to articulate the right principles to apply.  One of those is that there has to be a best practice.  In the case of automobiles, the usual test for the roads is whether a feature will make things more or less safe than the status quo.  California’s approach is to let developers experiment under limited conditions in order to determine an answer.

None of this gets to my favorite part, which is whether Uber’s service can be hacked to cause chaos on the roads.  Should that be tested in advance?  And if so how?  What are the best practices Uber should be following in this context?  Some exist.

More on this over time.

Learning from the Dyn attack: What are the right questions to ask?

The attack on DNS provider Dyn’s infrastructure that took down a number of web sites is now old news.  While not all the facts are public, the press reports that once again, IoT devices played a significant role.  Whether that is true or not, it is a foregone conclusion that until we address the security of these devices, such attacks will recur.  We all get at least two swings at this problem: we can address the attacks from Things as they happen, and we can work to keep Things secure in the first place.

What systems do we need to look at?

  • End nodes (cameras, DVRs, refrigerators, etc.);
  • Home and edge firewall systems;
  • Provider network security systems;
  • Provider peering edge routers; and
  • Infrastructure service providers (like Dyn)

In addition, researchers, educators, consumers and governments all have a role to play.

Roles of IoT

What do the providers of each of those systems need to do? 

What follows is a start at the answer to that question.

Endpoints

It’s easy to pin all the blame on the endpoint developers, but doing so won’t buy so much as a cup of coffee. Still, thing developers need to do a few things:

  • Use secure design and implementation practices, such as not hardcoding passwords or leaving extra services enabled;
  • Have a means to securely update their systems when a vulnerability is discovered;
  • Provide network enforcement systems with Manufacturer Usage Descriptions so that networks can enforce policies around how a device was designed to operate.
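On the first point, one alternative to a hardcoded password is a unique per-device credential generated at provisioning time. A sketch with hypothetical names follows; note that a production system would use a dedicated password-hashing KDF rather than bare SHA-256:

```python
import hashlib
import secrets

def provision_device(serial):
    """Generate a unique credential for one device at provisioning time."""
    password = secrets.token_urlsafe(16)   # unique per device, never shared
    salt = secrets.token_hex(8)
    record = {
        "serial": serial,
        "salt": salt,
        # store only a salted hash, never the password itself
        "pw_hash": hashlib.sha256((salt + password).encode()).hexdigest(),
    }
    # the cleartext goes on the device label, not into the database
    return record, password

def verify(record, password):
    digest = hashlib.sha256((record["salt"] + password).encode()).hexdigest()
    return digest == record["pw_hash"]

record, pw = provision_device("SN-000123")
print(verify(record, pw))  # True
```

The design point is simply that no two devices ship with the same credential, so a leak of one device’s password compromises only that device.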

Home and edge firewall systems

There are some attacks that only the network can stop, and there are some attacks that the network can impede.  Authenticating and authorizing devices is critical.  Also, edge systems should be quite leery of devices that simply self-assert what sort of protection they require, because a hacked device can make such self-assertions just as easily as a healthy device.  Hacked devices have recently been taking advantage of a mechanism found in many home routers and popular with gaming applications, Universal Plug and Play (UPnP), which permits precisely the sort of self-assertions that should be avoided.

Provider network security systems

Providers need to be aware of what is going on in their networks.  Defense in depth demands that they observe their own networks in search of malicious behavior, and provide appropriate mitigations.  Although there are some good tools out there from companies like Cisco, such as NetFlow and OpenDNS, this is still a pretty tall order.  Just examining traffic can be capital-intensive, and then understanding what is actually going on often requires experts, and that can get expensive.
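As a tiny illustration of the kind of observation involved, here’s a sketch that flags hosts whose outbound flow counts stand far above the norm; the record format is made up, standing in for exported NetFlow data:

```python
from collections import Counter
from statistics import mean, pstdev

# Stand-in for exported flow records: (source_ip, destination_ip) pairs.
flows = []
for i in range(1, 10):                          # nine well-behaved hosts
    flows += [(f"10.0.0.{i}", "192.0.2.1")] * 3
flows += [("10.0.0.99", "203.0.113.9")] * 400   # one very chatty device

counts = Counter(src for src, _ in flows)
mu, sigma = mean(counts.values()), pstdev(counts.values())

# Flag any source more than two standard deviations above the mean.
suspects = [ip for ip, n in counts.items() if n > mu + 2 * sigma]
print(suspects)  # ['10.0.0.99']
```

Real traffic is far messier than a toy threshold like this can handle, which is exactly why the experts get expensive.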

Provider peering edge routers

The routing system of the Internet can be hijacked.  It’s important that service providers take steps to prevent that from happening.  A number of standards have been developed, but service providers have been slow to implement them for one reason or another.  It also helps to understand the source of attacks: implementing filtering mechanisms makes it possible for service providers to establish accountability for the sources of attack traffic.
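The classic filtering mechanism here is BCP 38-style ingress filtering: a packet arriving on a customer-facing port should carry a source address from that customer’s assigned prefixes. A sketch of the check, with a hypothetical prefix table:

```python
from ipaddress import ip_address, ip_network

# Which source prefixes are assigned to each customer-facing port.
customer_prefixes = {
    "port1": [ip_network("198.51.100.0/24")],
    "port2": [ip_network("203.0.113.0/25")],
}

def accept(port, src):
    """Accept a packet only if its source fits the port's assignment."""
    return any(ip_address(src) in net
               for net in customer_prefixes.get(port, []))

print(accept("port1", "198.51.100.7"))  # True: source matches assignment
print(accept("port1", "203.0.113.7"))   # False: spoofed source, drop it
```

When every edge does this, spoofed attack traffic can be traced back to a responsible customer port rather than to a forged address.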

Infrastructure providers

Infrastructure upon which other Internet systems rely needs to be robust in the face of attack.  Dyn knows this.  The attack succeeded anyway.  Today, I have little advice other than to understand each attack and do what one can to mitigate it the next time.

Consumers

History has shown that people in their homes cannot be made to do much to protect themselves in a timely manner.  Is it reasonable, for instance, to insist that a consumer spend money to replace an old system that is known to have vulnerabilities?  The answer may be that it depends on just how old that system really is.  And this leads to our last category…

Governments

Governments are already involved in cybersecurity.  The question really is how involved they get with IoT security.  If the people who need to do things aren’t doing them, either we have the wrong incentive model and need to find the right one, or governments will likely get heavily involved.  It’s important that this not happen until the technical community has some understanding of the answers to these questions, and that may take some time.

And so we have our work cut out for us.  It’s brow furrowing time.  As I wrote above, this was just a start, and it’s my start at that.  What other questions need answering, and what are the answers?

Your turn.



Photo credits:
Capitol by Deror Avi – Own work, CC BY-SA 3.0
Router by Weihao.chiu from zh, CC BY-SA 3.0
DVR by Kabel Deutschland, CC BY 3.0
Router by Cisco systems – CC BY-SA 1.0

Looming wireless problems with IoT security

Security experts have two common laments:

  • Security is an afterthought, and
  • Security is hard to get right.

Nowhere has this been more true than in wireless security, where it took the better part of two decades to get us to where we are today.  “Wireless” can mean many different things.  It could mean 3G cellular service or Wi-Fi or Bluetooth or something else.  In the context of Wi-Fi, we have standards such as WPA Personal and WPA Enterprise, developed from work at the IEEE.  Similarly, 3GPP has developed secure access standards for your phone through the use of a SIM card.  With either WPA Enterprise or 3G, you can bet that if your device starts to misbehave, it can be uniquely identified.

Unfortunately that’s not so much the case with other wireless standards, and in particular with IEEE 802.15.4, where security has for the time being been largely left to higher layers.  And that’s just fine if what we’re talking about is your Bluetooth keyboard.  But it’s not fine at all if we’re talking about a large number of devices, where one of them is misbehaving.


Consider a lighting network.  It might consist of many different light bulbs, maybe hundreds.  Now imagine a bad guy breaking into one of those devices and attacking the others.  Spot the bad guy.  In a wired world, assuming you have access to the switch, you can spot the device simply by looking at which port a connection came in on.  But this is wireless, and mesh wireless at that.  In the case where each device has its own unique key, you can trace per session, per device.  But if all devices use a shared key, you need to find other means.  A well-hacked device isn’t going to give you many clues; it’s going to try to mimic a device that isn’t hacked, perhaps one that isn’t turned on or one that doesn’t even exist.

These attacks can be varied in nature.  If the mesh is connected to other networks, like enterprise networks, then attacks can be aimed at resources on those networks.  This might range from a so-called “snowshoe” attack, where no one device generates a lot of traffic but the aggregate of hacked devices overwhelms a target, to something more destructive, like attempts to reconfigure critical infrastructure.

Some attacks aren’t even intended as such, as Raul Rojas discovered in 2009, when a single light bulb took down his IoT-enabled house.

What to do?

The most obvious thing to do is not to get into this situation in the first place.  From a traceability standpoint, network managers need to be able to identify the source of attacks.  Having unique wireless sessions between leaf and non-leaf nodes that are bound to source addresses is ideal.  Alternatively, all communications in a mesh could tunnel to non-leaf nodes that have strong diagnostic capabilities, like IPFIX and port spanning.  At that point administrators can at least log traffic to determine the source of attacks.  That’s a tall order for a light bulb, but it’s why companies like Cisco exist: to protect your infrastructure.
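As a small illustration of why per-device sessions matter, here’s a sketch that attributes traffic back to a device once each session is bound to an identity; the session IDs, device names, and log format are all invented for the example:

```python
# Each mesh session key is bound to a device identity at join time.
session_owner = {
    "sess-01": "bulb-kitchen",
    "sess-02": "bulb-hallway",
    "sess-03": "bulb-porch",
}

# Invented traffic log: (session_id, bytes) per observed exchange.
traffic = [("sess-01", 120), ("sess-02", 118), ("sess-03", 50_000),
           ("sess-01", 130), ("sess-03", 48_000)]

totals = {}
for sess, nbytes in traffic:
    totals[sess] = totals.get(sess, 0) + nbytes

# With per-session attribution, the noisy node has a name.
noisiest = max(totals, key=totals.get)
print(session_owner[noisiest])  # bulb-porch
```

With a single shared key, the `session_owner` table cannot exist, and the misbehaving bulb hides in the crowd.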

If none of these alternatives exists, poor network administrators (who might just be homeowners like Mr. Rojas) are forced into a position where they might need to consider the entire mesh a single misbehaving device, and disconnect it from the network.  And even that might not do the job: a smart piece of malware might notice and quiet itself until it can determine that the mesh has been reconnected.

Some careful thought is required as these capabilities develop.