Let’s not blame Yahoo! for a difficult policy problem

Many in the tech community are upset over reports from The New York Times and others that Yahoo! responded to an order issued by the Foreign Intelligence Surveillance Court (FISC) to search across their entire account base for a specific “signature” of people believed to be terrorists.

It is not clear what capabilities Yahoo! already has, but it would not be unreasonable to expect them to have the ability to scan incoming messages for spam and malware, for instance.  What’s more, we are all the better for this sort of capability.  Consider that around 85% of all email is spam, a small amount of which contains malware, and Yahoo! users don’t see most of that.  Much of that can be rejected without Yahoo! having to look at the content at all, simply by examining the source IP address of the device attempting to deliver mail to Yahoo!, but in all likelihood they do look at some content, as many systems do.  In fact, SpamAssassin, one of the most popular open source systems from the early days, did just this.  The challenge from a technical perspective is to implement such a mechanism without the mechanism itself having a large target surface.
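
To make that concrete, here is a rough sketch of the sort of source-IP check I’m describing, in the style of a DNSBL lookup.  The blocklist zone and the addresses are placeholders rather than anything Yahoo! actually uses, and a real filter such as SpamAssassin combines many tests like this one into an overall score:

import socket

# Hypothetical DNS blocklist zone; a real deployment would query one or more
# published DNSBLs and combine the result with content-based scoring.
DNSBL_ZONE = "dnsbl.example.org"

def listed_in_dnsbl(ip: str, zone: str = DNSBL_ZONE) -> bool:
    """Return True if the sending IPv4 address appears in the blocklist zone.

    DNSBLs are queried by reversing the octets of the address and appending
    the zone name; a successful lookup means the address is listed.
    """
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)
        return True          # listed: reject or heavily penalize the message
    except socket.gaierror:
        return False         # not listed: fall through to content checks

if __name__ == "__main__":
    sender_ip = "192.0.2.55"   # documentation address, used here only as an example
    if listed_in_dnsbl(sender_ip):
        print(f"{sender_ip} is listed; reject before reading the message at all")
    else:
        print(f"{sender_ip} is not listed; continue with content-based scanning")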

If the government asking for certain messages sounds creepy, we have to ask what a signature is.  A signature normally refers to characteristics of a communication that either identify its source or indicate that it has some quality.  For instance, viruses all have signatures.  In this case, what is claimed is that terrorists communicated in a certain way such that they could be identified.  According to The Times, the government demonstrated probable cause that this was true, and that the signature was “highly unique”*.  That is, the signature likely matches very few actual messages that the government would see, although we don’t know how small that number really is.  Yahoo! has denied having a capability to scan across all messages in their system, but beyond that not enough is public to know what they would have done.  It may well not have been reasonable to search specific accounts, because one can easily create an account, and the terrorists may have many.  The government publicly revealing either the probable cause or the signature would be tantamount to alerting the terrorists that they are in fact under investigation, and that they can be tracked.
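
To give a sense of what scanning for a signature might look like in practice, here is a toy sketch.  The header and phrase it looks for are entirely invented, since the real signature is not public, and nothing here is meant to resemble it:

import re
from email.parser import Parser

# A toy illustration only: the real signature is not public, and nothing
# below is meant to resemble it.  The point is that a signature describes
# characteristics of a message, not the account that sent it.
HEADER_NAME = "X-Mailer"
HEADER_PATTERN = re.compile(r"^ExampleClient/7\.\d+$")   # hypothetical
BODY_PATTERN = re.compile(r"\bexample-code-phrase\b")    # hypothetical

def matches_signature(raw_message: str) -> bool:
    """Return True only if the message exhibits every (invented) characteristic."""
    msg = Parser().parsestr(raw_message)
    body = msg.get_payload() if not msg.is_multipart() else ""
    header_ok = bool(HEADER_PATTERN.match(msg.get(HEADER_NAME, "")))
    return header_ok and bool(BODY_PATTERN.search(body))

if __name__ == "__main__":
    sample = "X-Mailer: ExampleClient/7.2\n\nmeeting at noon, example-code-phrase\n"
    print(matches_signature(sample))   # True; very few real messages would match both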

The risk to civil liberties is that there are no terrorists at all, and this is just a fishing expedition, or worse, persecution of some form.  The FISC and its appellate courts are intended to provide some level of protection against abuse, but in all other cases the public has a view into whether that abuse is actually occurring.  Many have complained about a lack of transparent oversight of the FISC, but the question is how to have that oversight without alerting The Bad Guys.

The situation gets more complex if one considers that other countries would want the same right to demand information from their mail service providers that the U.S. enjoys, as Yahoo’s own transparency report demonstrates.

In short, we are left with a set of difficult compromises that pit the gathering of intelligence on terrorists and other criminals against the risk of government abuse.  That’s not Yahoo!’s fault.  This is a hard problem that requires thoughtful consideration of these trade-offs, and the timing is right to think about it: the Foreign Intelligence Surveillance Act (FISA) will once again be up for reauthorization in Congress next year.  And in this case, let’s at least consider the possibility that the government is trying to fulfill its responsibility of protecting its citizens and residents, and that Yahoo! is trying to be a good citizen by looking at each individual request on its merits and in accordance with relevant laws.


* No I don’t know the difference between “unique” and “highly unique” either.

How MUD could help against the Krebs Attack

In the attack against krebsonsecurity.com, one of the systems said to have been used was the “H.264 Network DVR”.  This device accepts HTTP connections, and communicates outbound using FTP and email.  There may also be an undocumented protocol for a proprietary interface.

As I’ve previously discussed, use of Manufacturer Usage Descriptions (MUD) can limit the attack surface of a device, and it can also prevent devices from being used to source an attack.  MUD allows manufacturers to define classes of systems a device will communicate with, which the administrator then fills in at deployment time.  From the manufacturer’s side, all that is needed is to provide the file.  For the DVR in question, I used MudMaker to create a description that a network device could use to create appropriate network protections:

{
  "ietf-mud:meta-info": {
    "lastUpdate": "2016-10-02T08:28:19+02:00",
    "systeminfo": "DVR H.264",
    "cacheValidity": 1440
  },
  "ietf-acl:access-lists": {
    "ietf-acl:access-list": [
      {
        "acl-name": "mud-65333-v4in",
        "acl-type": "ipv4-acl",
        "ietf-mud:packet-direction": "to-device",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "entout0-in",
              "matches": {
                "ietf-mud:controller": "http://dvr264.example.com/controller"
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            },
            {
              "rule-name": "entin0-in",
              "matches": {
                "ietf-mud:controller": "http://dvr264.example.com/controller",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 80,
                  "upper-port": 80
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            }
          ]
        }
      },
      {
        "acl-name": "mud-65333-v4out",
        "acl-type": "ipv4-acl",
        "ietf-mud:packet-direction": "from-device",
        "access-list-entries": {
          "ace": [
            {
              "rule-name": "entout0-in",
              "matches": {
                "ietf-mud:controller": "http://dvr264.example.com/controller"
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            },
            {
              "rule-name": "entin0-in",
              "matches": {
                "ietf-mud:controller": "http://dvr264.example.com/controller",
                "protocol": 6,
                "source-port-range": {
                  "lower-port": 80,
                  "upper-port": 80
                }
              },
              "actions": {
                "permit": [
                  null
                ]
              }
            }
          ]
        }
      }
    ]
  }
}

What is left for the controller to do that is specific to this device is to define which devices are in the class http://dvr264.example.com/controller.  That might include the FTP-based logging system that this model uses, for instance, as well as those systems that are authorized to connect to the HTTP port.
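
As a rough illustration of that deployment-time step, the sketch below maps the class to locally known addresses and expands the MUD permissions into simple firewall-style rules.  The addresses, class membership, and rule syntax are all invented for illustration; a real MUD controller would emit whatever policy language the local network equipment understands:

# Hypothetical deployment-side expansion of the MUD file above into concrete
# firewall-style rules.  The class membership, addresses, and rule syntax are
# invented; a real MUD controller would translate the ACL entries into the
# policy language of the local network equipment.

DEVICE_IP = "192.168.1.50"   # assumed address of the DVR itself

# The locally configured answer to "which devices are in the class
# http://dvr264.example.com/controller" -- say, the FTP logging host and the
# workstations allowed to reach the DVR's web interface.
CLASS_MEMBERS = {
    "http://dvr264.example.com/controller": ["192.168.1.10", "192.168.1.11"],
}

def expand_rules(class_members: dict) -> list:
    rules = []
    for member in class_members["http://dvr264.example.com/controller"]:
        # to-device: class members may reach the DVR, including its HTTP port
        rules.append(f"permit ip  from {member} to {DEVICE_IP}")
        rules.append(f"permit tcp from {member} to {DEVICE_IP} dst-port 80")
        # from-device: the DVR may answer from port 80 and talk to class members
        rules.append(f"permit ip  from {DEVICE_IP} to {member}")
        rules.append(f"permit tcp from {DEVICE_IP} src-port 80 to {member}")
    # name service and time would be permitted elsewhere; everything else is
    # dropped, which is the point of the whitelist approach
    rules.append(f"deny   ip  from any to {DEVICE_IP}")
    rules.append(f"deny   ip  from {DEVICE_IP} to any")
    return rules

if __name__ == "__main__":
    for rule in expand_rules(CLASS_MEMBERS):
        print(rule)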

The important part of that description is what you don’t see.  You don’t see any of the attack vectors used, because with this whitelist approach you specify only what is permitted, and everything else aside from name service and time queries is denied.  This device uses a good few services, and for brevity’s sake I haven’t specified each one in the example.

This may well have stopped the attacker from gaining access to the device in the first place, and it would have stopped the device from being used to attack the blogger, as well as from mounting many other attacks.

Turning the Home Router from a Threat to a Helping Hand

The Federal Communications Commission is set to vote on a proposed rule that would require cable companies to offer consumers more choice about whether to rent a cable box or home router or to use their own.  More choice is good, and one could make a strong argument that the lack of consumer choice has retarded the development of home routers.  However, this decision may come with a few pitfalls from a security perspective.

Home routers were recently a component of the attack against krebsonsecurity.com.  There are several reasons why this is the case.  Some routers ship with a blank password for the user name “admin”, which allows anyone to access them.  Others have well-known vulnerabilities in their software that have gone unpatched for years.  First, if the service provider provides the router, then it is responsible for the device’s maintenance; the consumer, on the other hand, has a particularly bad track record of protecting the device.

Second, because most consumers do not employ security professionals to protect devices in their homes, the service provider (SP) is in a good position to offer that protection.  Doing so does require that the service provider have some access to the home router: with some control over that device and access to its logging information, the router is in a position to identify potential attacks within the home itself.  But the router needs guidance to perform that task, and it typically cannot retain all of the necessary knowledge on its own.  Cloud services are useful for this purpose, whether managed by the SP or by some other entity.
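
As a rough illustration of that division of labor, the sketch below has the router keep only a small view of connections while deferring the knowledge of which destinations are dangerous to a cloud feed.  The feed URL and log format are hypothetical, not any provider’s actual interface:

# A minimal sketch of that division of labor: the router keeps only a small
# local view of connections, and defers the knowledge of which destinations
# are dangerous to a cloud service.  The feed URL and log format are
# hypothetical.
import json
import urllib.request

THREAT_FEED_URL = "https://threat-feed.example.net/bad-hosts.json"   # hypothetical

def fetch_bad_hosts(url: str = THREAT_FEED_URL) -> set:
    """Periodically pull the current list of known-bad destinations."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return set(json.load(resp))

def review_connections(conn_log: list, bad_hosts: set) -> list:
    """Flag devices in the home that contacted known-bad destinations."""
    alerts = []
    for entry in conn_log:   # e.g. {"src": "192.168.1.23", "dst": "203.0.113.9"}
        if entry["dst"] in bad_hosts:
            alerts.append(f'{entry["src"]} contacted known-bad host {entry["dst"]}')
    return alerts

if __name__ == "__main__":
    # In practice bad_hosts would come from fetch_bad_hosts(); a fixed set is
    # used here so the sketch runs without a real feed.
    sample_log = [
        {"src": "192.168.1.23", "dst": "203.0.113.9"},
        {"src": "192.168.1.40", "dst": "198.51.100.7"},
    ]
    for alert in review_connections(sample_log, {"203.0.113.9"}):
        print(alert)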

Regardless of what the FCC orders, SPs are in the position of setting the standards necessary to connect a router to the Internet.  CableLabs has set several standards, one known as DOCSIS.  While the current specification has a limited security section, one could easily envision additional capabilities that would protect devices within the home.  As new entrants such as Google and Ubiquiti develop additional capabilities, they may have more to say about security in the home.  If home users are to have a choice, one choice they should have is to allow service providers to protect them.


Picture courtesy Sergiy dk on Wikimedia CC BY-SA 3.0

Does Facebook Getting Money from a Spammer help?

As many will have seen, Facebook won a court judgment today for $711 million from well-known spammer Sanford Wallace.  It’s always nice when a spammer gets told “stop that”, but as bad as some people might think Wallace is, he is a walk in the park compared to the real villains out there.  They are faceless, nameless thugs who want to steal your money, your identity, and whatever else they think they can take from you and your family.  They have no scruples and cannot be easily traced.  The occasional bust makes the news across the world, which is one way of knowing that these miscreants are hard to find.  The other way is that your mailbox is still collecting spam, some of it dangerous.