Mat Honan’s latest piece on being hacked. The important lesson here is in the explanation of how the interlocking relationships between your accounts can make them much more vulnerable.
Today Bruce Schneier writes up the actual harms caused by airport security policies imposed after 9/11. It’s easy to decry these policies for their stupidity and costs, but describing the ways they cause actual harm takes more work. This piece is a must-read; here’s a snippet:
This loss of trust—in both airport security and counterterrorism policies in general—is the first harm. Trust is fundamental to society. There is an enormous amount written about this; high-trust societies are simply happier and more prosperous than low-trust societies. Trust is essential for both free markets and democracy. This is why open-government laws are so important; trust requires government transparency. The secret policies implemented by airport security harm society because of their very secrecy.
Paul Vixie (the guy who wrote the BIND DNS server) talks about his efforts to clean up the effects of the DNS Changer malware, which changes the DNS settings on infected computers (and sometimes on the routers they use). DNS Changer was, in essence, a black hat advertising network. If you paid them, they would redirect the malware victims’ DNS lookups to sites that promoted your products.
After the DNS Changer network was taken down, Vixie’s job was to come in and stand up replacement DNS servers to take the place of the bogus ones, so that victims of the malware didn’t suddenly lose the ability to perform DNS lookups. In the meantime, the working group is trying to remove the malware from hundreds of thousands of devices before the new DNS servers are taken down by court order on June 9.
Interesting look at a tough problem.
Panic Software has a long post explaining code signing and Apple’s new Gatekeeper feature in OS X Mountain Lion. Gatekeeper provides a way for developers to digitally sign their applications, verifying their origin, and for those signatures to be revoked so that the applications cannot run any longer if they are shown to be compromised by malware. Users can decide for themselves whether they want to let their Mac run any application or only applications which have been signed. (Or only applications from the App Store, although I think you’d have to be crazy to do that.) What I find particularly interesting about this is that Apple had decided last year to implement much more draconian rules that would essentially force developers into the App Store by making that the only way that developers could distribute signed applications. Wil Shipley beseeched Apple to take another course and allow developers to sign apps themselves. Here’s the recommendation he made last November:
My suggestion is for Apple to provide certificates directly to developers and allow the developers to sign their own code. And, by doing this, Apple can then reasonably say, “Ok, now we’re going to, by default, not allow the user to run any code whose certificate wasn’t issued by us and signed by a real third-party developer (except the stuff the user checks in the control panel).”
Apple then has the power, if any app is found to be malware, to shut it down remotely, immediately. This is a power Apple doesn’t have now over malware, and that won’t come from more sandboxing or more code audits. I have shown the only way to achieve it is to require developers to sign their code with a certificate from Apple.
At the time, I read the post, linked to it, and thought that it made too much sense for Apple to do it. I was pleasantly surprised to see Apple take that advice.
Update: Nelson Minar reminds us that features like Gatekeeper require users to put a lot of trust in the gatekeeper. I think one reason people are happy about Gatekeeper is that it’s such a retreat from Apple’s previous untenable position.
Daniel Jalkut’s post on Gatekeeper is also worth reading. Gatekeeper is important because it’s a step back from Apple’s previous decision to essentially force developers to distribute their apps via the App Store. That was problematic because App Store apps are required to operate within a very limited Sandbox. Jalkut argues that the next step Apple should take is to greatly increase the rights granted to apps in the Sandbox. Even though Apple has climbed back from the stance that would have forced developers into the App Store (and Sandbox), it is still making some new features of the OS available only to apps distributed through the App Store, so it’s important that the Sandbox be flexible enough to satisfy as many independent developers as possible.
One weakness of many Web services that require authentication, including the ones I’ve built in the past, is that the username and password of the user making the request are simply included as request parameters. Alternatively, some use basic authentication, which transmits the username and password in an HTTP header encoded using Base64. Basic authentication obscures the password, but doesn’t encrypt it.
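The difference between obscuring and encrypting is easy to demonstrate; decoding a Basic authentication header takes one function call and no key (the credentials here are made up):

```python
import base64

# Basic authentication merely Base64-encodes "username:password";
# anyone who can see the header can reverse it instantly.
header_value = base64.b64encode(b"alice:s3cret").decode("ascii")
print("Authorization: Basic " + header_value)

# No key is needed to recover the password.
decoded = base64.b64decode(header_value).decode("ascii")
print(decoded)  # alice:s3cret
```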
This week I learned that there’s a better way — using a Hash-based Message Authentication Code (or HMAC) to sign service requests with a private key. An HMAC is the product of a hash function applied to the body of a message along with a secret key. So rather than sending the username and password with a Web service request, you send some identifier for the private key and an HMAC. When the server receives the request, it looks up the user’s private key and uses it to create an HMAC for the incoming request. If the HMAC submitted with the request matches the one calculated by the server, then the request is authenticated.
There are two big advantages. The first is that the HMAC allows you to verify the password (or private key) without requiring the user to embed it in the request, and the second is that the HMAC also verifies the basic integrity of the request. If an attacker manipulated the request in any way in transit, the signatures would not match and the request would not be authenticated. This is a huge win, especially if the Web service requests are not being made over a secure HTTP connection.
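As a rough sketch of such a scheme (the key identifier, key store, and request body here are all hypothetical; the hash is HMAC-SHA1 via Python’s standard library):

```python
import hashlib
import hmac

# Hypothetical key store mapping a public key identifier to a private key.
KEYS = {"key-id-1": b"Secret Key"}

def sign(key_id, body):
    """Client side: compute an HMAC-SHA1 signature over the raw request body."""
    return hmac.new(KEYS[key_id], body, hashlib.sha1).hexdigest()

def verify(key_id, body, signature):
    """Server side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(KEYS[key_id], body, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, signature)

body = b"user=alice&action=update"
sig = sign("key-id-1", body)
assert verify("key-id-1", body, sig)  # untouched request verifies
assert not verify("key-id-1", b"user=mallory&action=update", sig)  # tampering detected
```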
There’s one catch that complicates things.
For the signatures to match, not only must the private keys used at both ends of the transaction match, but the message body must also match exactly. URL encoding is somewhat flexible. For example, you may choose to encode spaces in a query string as %20, while I may prefer to use the + character. Furthermore, in most cases browsers and Web applications don’t care about the order of HTTP parameters, so query strings like q=hello%20world&page=1 and page=1&q=hello+world are functionally the same, but the cryptographic signatures of the two will not be.
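A short sketch with hypothetical query strings shows both the mismatch and one possible fix, canonicalizing the body before signing it:

```python
import hashlib
import hmac
from urllib.parse import parse_qsl, urlencode, quote

KEY = b"Secret Key"  # hypothetical shared secret

def digest(body):
    return hmac.new(KEY, body.encode("ascii"), hashlib.sha1).hexdigest()

# Two encodings of the same logical request...
a = "q=hello%20world&page=1"
b = "page=1&q=hello+world"
assert digest(a) != digest(b)  # ...produce different signatures

def canonicalize(body):
    """Decode, sort by parameter name, re-encode with one percent-encoding style."""
    pairs = sorted(parse_qsl(body))
    return urlencode(pairs, quote_via=quote)

# After canonicalization, both sides compute the same signature.
assert canonicalize(a) == canonicalize(b)
assert digest(canonicalize(a)) == digest(canonicalize(b))
```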
Another open question is where to store the signature in the request. By the time the request is submitted to the server, the signature derived from the contents of the request will be mixed in with the data that is used to generate the signature. Let’s say I decide to include the HMAC as a request parameter. I start with a request body like user=alice&action=update, and after signing I wind up with something like user=alice&action=update&hmac=5912fe... The signature is now part of the very body it was computed from.
In order to calculate the HMAC on the server, I have to remove the incoming HMAC parameter from the request body and calculate the HMAC using the remaining parameters. This is where the previous issue comes into play. If the HMAC were not in the request, I could simply calculate the signature based on the raw incoming request. Once I start manipulating the incoming request, the chances of reconstructing it imperfectly rise, possibly introducing cases where the signatures don’t match even though the request is valid.
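Concretely, the server-side dance looks something like this (the hmac parameter name and the key are invented for illustration):

```python
import hashlib
import hmac
from urllib.parse import parse_qsl, urlencode

KEY = b"Secret Key"  # hypothetical shared secret

def verify_with_embedded_signature(raw_body):
    """Pull the hmac parameter out, then recompute over what remains."""
    pairs = parse_qsl(raw_body)
    sig = dict(pairs).get("hmac", "")
    remaining = [(k, v) for k, v in pairs if k != "hmac"]
    # Re-encoding may not reproduce the client's original byte sequence,
    # which is exactly the fragility described above.
    reconstructed = urlencode(remaining)
    expected = hmac.new(KEY, reconstructed.encode("ascii"), hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, sig)

body = "user=alice&action=update"
sig = hmac.new(KEY, body.encode("ascii"), hashlib.sha1).hexdigest()
assert verify_with_embedded_signature(body + "&hmac=" + sig)
```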
This is an issue that everyone implementing HMAC-based authentication for a Web service has to deal with, so I started looking into how other projects handled it. OAuth uses HMAC, with the added wrinkle that the signature must be applied to POST parameters in the request body, query string parameters, and the OAuth HTTP headers included with the request. For OAuth, the signature can be included with the request as an HTTP header or as a request parameter.
This is a case where added flexibility in one respect puts an added burden on the implementor in others. To make sure that the signatures match, OAuth has very specific rules for encoding and ordering the request data. It’s up to the implementor to gather all of the parameters from the query string, request body, and headers, get rid of the oauth_signature parameter, and then organize them based on rules in the OAuth spec.
Amazon S3’s REST API also uses HMAC signatures for authentication. Amazon embeds the user’s public key and HMAC signature in an HTTP header, eliminating the need to extract it from the request body. In Amazon’s case, the signed message is assembled from the HTTP verb, metadata about the resource being manipulated, and the “Amz” headers in the request. All of this data must be canonicalized and added to the message data to be signed. Any bug in your translation of those canonicalization rules into code means that none of your requests will be authenticated by Amazon.
Amazon uses the Authorization header to store the public key and HMAC. This is also the approach that Microsoft recommends, and I think it’s superior to the parameter-based approach taken by OAuth. Note that the Authorization header is part of the HTTP specification, so if you’re going to use it, you should do so in a way that complies with the standard.
For my service, which is simpler than Amazon S3 or OAuth, I’ll be using the Authorization header and computing the HMAC based on the raw incoming request.
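A minimal sketch of that approach, assuming an invented "HMAC public-key:signature" header format (not any vendor’s actual convention) and a hypothetical key store:

```python
import hashlib
import hmac

KEYS = {"my-public-key": b"my-private-key"}  # hypothetical key store

def make_auth_header(public_key, raw_body):
    """Client: sign the raw body and pack the key id and signature into one header."""
    sig = hmac.new(KEYS[public_key], raw_body, hashlib.sha1).hexdigest()
    return "HMAC %s:%s" % (public_key, sig)

def verify_request(auth_header, raw_body):
    """Server: split the header, look up the key, recompute over the raw body."""
    scheme, _, credentials = auth_header.partition(" ")
    public_key, _, sig = credentials.partition(":")
    if scheme != "HMAC" or public_key not in KEYS:
        return False
    expected = hmac.new(KEYS[public_key], raw_body, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, sig)

body = b"user=alice&action=update"
header = make_auth_header("my-public-key", body)
assert verify_request(header, body)
```

Because the signature lives in a header, the raw request body never has to be reassembled on the server.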
I realize that HMAC may not be new to many people, but it is to me. Now that I understand it, I can’t imagine using any of the older approaches to build an authenticated Web service.
Regardless of which side of the Web service transaction you’re implementing, calculating the actual HMAC is easy. Normally the SHA-1 or MD5 hashing algorithms are used, and it’s up to the implementor of the service to decide which of those they will support. Here’s how you create HMAC-SHA1 signatures using a few popular languages.
PHP has a built-in HMAC function:
hash_hmac('sha1', "Message", "Secret Key");
In Java, it’s not much more difficult:
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import org.apache.commons.codec.binary.Hex;

Mac mac = Mac.getInstance("HmacSHA1");
SecretKeySpec secret = new SecretKeySpec("Secret Key".getBytes(), "HmacSHA1");
mac.init(secret);
byte[] digest = mac.doFinal("Message".getBytes());
String hmac = Hex.encodeHexString(digest);
In that case, the Hex class is the hexadecimal encoder provided by Apache’s Commons Codec project.
In Ruby, you can use the HMAC method provided with the OpenSSL library:
require 'openssl'
require 'base64'

DIGEST = OpenSSL::Digest::Digest.new('sha1')
Base64.encode64(OpenSSL::HMAC.digest(DIGEST, "Secret Key", "Message"))
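In Python, the standard library’s hmac module does the same:

```python
import base64
import hashlib
import hmac

digest = hmac.new(b"Secret Key", b"Message", hashlib.sha1).digest()
b64_sig = base64.b64encode(digest).decode("ascii")  # Base64, like the Ruby example
hex_sig = hmac.new(b"Secret Key", b"Message", hashlib.sha1).hexdigest()  # hex, like PHP
print(b64_sig)
print(hex_sig)
```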
Lots of thoughtful posts are cropping up about the new restrictions Apple plans to implement for OS X applications that will be distributed through the App Store. The occasion is, I suppose, the news that Apple is pushing back the deadline for all applications distributed through the App Store to be Sandbox-compliant from the middle of this month to March 2012.
For a basic rundown of the new rules and what they mean, check out this post from Pauli Olavi Ojala.
For an argument that Apple could take a more realistic, less restrictive approach to securing applications, see Wil Shipley’s post. In it, he explains why entitlements and code auditing may be useful in theory, but certificates are a more straightforward solution:
But, in the real world, security exploits get discovered by users or researchers outside of Apple, and what’s important is having a fast response to security holes as they are discovered. Certificates give Apple this.
His proposed solution makes a lot of sense; I’d love to see Apple adopt it.
Ars Technica’s Infinite Loop blog has a useful post on the sandbox features in OS X Lion as well.
Kellan Elliott-McCrea has a great post about the high cost of false positives when it comes to building software that detects fraud, spam, abuse, or whatever. The cost of false positives is explained by the base rate fallacy. The BBC explains the base rate fallacy very well. Here’s a snippet:
If 3,000 people are tested, and the test is 90% accurate, it is also 10% wrong. So it will probably identify 301 terrorists – about 300 by mistake and 1 correctly. You won’t know from the test which is the real terrorist. So the chance that our man in the mac is the real thing is 1 in 301.
Anybody who wants to talk about screening systems without an understanding of the base rate fallacy needs to do more homework.
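The arithmetic behind that snippet is easy to check:

```python
# Base rate arithmetic for the BBC's example: 3,000 people screened,
# one actual terrorist, and a test that is 90% accurate.
population = 3000
actual_positives = 1
accuracy = 0.9

false_positives = (population - actual_positives) * (1 - accuracy)  # roughly 300 people
true_positives = actual_positives * accuracy                        # roughly 0.9, call it 1

flagged = false_positives + true_positives
p_guilty_given_flagged = true_positives / flagged
print(round(flagged))                     # roughly 301 people flagged
print(round(p_guilty_given_flagged, 4))   # well under 1%, about 1 in 300
```

Even a 90%-accurate test leaves you with hundreds of innocent suspects per real positive when the base rate is this low.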
At work, we’re changing our systems to encrypt a lot of the information in our databases for security reasons. The project has been time-consuming and painful, and in the end, our database is far less usable from a developer’s standpoint than it was before. The days when I could quickly diagnose issues on the production system with a few well-placed SELECT statements will soon be a thing of the past.
As far as the implementation goes, I’ll tell Hibernate users who want to implement an encryption system that there’s only one way to go — UserTypes. Don’t bother with anything else.
What this project really has me thinking about, though, is the high cost of security. It ties into something from the Bill James interview that I linked to the other day. Here was his response to the question of whether we overestimate or underestimate the importance of crime:
We underestimate it, because it’s our intent to underestimate it. We only deal with it indirectly. We all do so many things to avoid being the victims of crime that we no longer see those things, so we don’t see the cost of it. Just finding a safe place for us to have this conversation, for example — we needed a quiet place, but before that, we needed to find a safe place. A hotel lobby is what it is because of the level of security. I’ve checked out of this hotel, but I’m still sitting here in the third-floor lobby, because it’s safe. When you buy something, it’s wrapped in seven layers of packaging in order to make it harder to steal.
I think that people are generally excessively afraid of crime but underestimate the day to day costs that crime imposes. In software engineering, we spend a lot of time and effort on security. If everyone were honest, we wouldn’t need passwords, encryption, or any of the other stuff that occupies a lot of time on every project. We’d still need to take precautions against damage caused by user error, but most of the hours we spend on security could be spent on other things.
The other cost of security, beyond implementation time, is the ongoing cost related to the inconvenience of security. Whether it’s the time we take to unlock our screen or set up SSH tunnels or deal with the fact that we have to decrypt data in the database in order to see it, it all counts. Security is almost always a form of technical debt.
In many cases security precautions are necessary (or even mandated by law), but it’s important to be vigilant and not add more of it than is necessary, because it’s almost always painful in the moment and forever thereafter.
The New York Times has a report on an FBI raid that knocked some of my favorite sites offline yesterday. The FBI visited a colo facility and seized at least one full rack of servers leased by DigitalOne, taking down sites like Instapaper and Pinboard. Apparently they were after a specific host but had no idea how to seize only the hardware associated with it, and in the age of virtualization, going after one VM can still take down many other sites sharing the same physical servers.
John Borland at Wired’s Threat Level blog reports on a talk by Bruce Dang, the Microsoft engineer whose job it was to break down the Stuxnet worm. It’s an interesting look at exactly which vulnerabilities Stuxnet exploits and how Microsoft’s security team approached the problem.
A video of the talk will eventually be posted at the Chaos Computer Congress Web site. I’m going to try to remember to go back and watch it.
Update: Video of the talk is available here.