rc3.org

Strong opinions, weakly held

Tag: software development (page 4 of 16)

Using HMAC to authenticate Web service requests

One weakness of many Web services that require authentication, including the ones I’ve built in the past, is that the username and password of the user making the request are simply included as request parameters. Alternatively, some use basic authentication, which transmits the username and password in an HTTP header encoded using Base64. Basic authentication obscures the password, but doesn’t encrypt it.

This week I learned that there’s a better way — using a Hash-based Message Authentication Code (or HMAC) to sign service requests with a private key. An HMAC is the product of a hash function applied to the body of a message along with a secret key. So rather than sending the username and password with a Web service request, you send some identifier for the private key and an HMAC. When the server receives the request, it looks up the user’s private key and uses it to create an HMAC for the incoming request. If the HMAC submitted with the request matches the one calculated by the server, then the request is authenticated.
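Sketched in Python, the server-side check looks something like this. The key store and the identifiers in it are hypothetical, just stand-ins for however you look up a user's private key:

```python
import hashlib
import hmac

# Hypothetical key store: maps a public key identifier to the
# user's private key.
KEYS = {"user-123": b"users-private-key"}

def verify(key_id, body, submitted_hmac):
    # Look up the user's private key and recompute the HMAC
    # over the incoming request body.
    expected = hmac.new(KEYS[key_id], body, hashlib.sha1).hexdigest()
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(expected, submitted_hmac)

# The client computes the same HMAC with its copy of the key.
sig = hmac.new(b"users-private-key", b"foo=one&bar=two",
               hashlib.sha1).hexdigest()
print(verify("user-123", b"foo=one&bar=two", sig))       # True
print(verify("user-123", b"foo=one&bar=tampered", sig))  # False
```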

There are two big advantages. The first is that the HMAC allows you to verify the password (or private key) without requiring the user to embed it in the request, and the second is that the HMAC also verifies the basic integrity of the request. If an attacker manipulated the request in any way in transit, the signatures would not match and the request would not be authenticated. This is a huge win, especially if the Web service requests are not being made over a secure HTTP connection.

There’s one catch that complicates things.

For the signatures to match, not only must the private keys used at both ends of the transaction match, but the message body must also match exactly. URL encoding is somewhat flexible. For example, you may choose to encode spaces in a query string as %20. I may prefer to use the + character. Furthermore, in most cases browsers and Web applications don’t care about the order of HTTP parameters.

foo=one&bar=two&baz=three

and

baz=three&bar=two&foo=one

are functionally the same, but the cryptographic signatures of the two will not match.
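One way to sidestep this is to canonicalize the parameters before signing on either end: parse them, sort them, and re-encode them one consistent way. A minimal Python sketch (the function names are mine, not from any spec):

```python
import hashlib
import hmac
from urllib.parse import parse_qsl, urlencode

def canonicalize(query):
    # Decode the pairs (so %20 and + are treated alike), sort
    # them, and re-encode them one consistent way.
    return urlencode(sorted(parse_qsl(query, keep_blank_values=True)))

def sign(secret, query):
    canonical = canonicalize(query).encode()
    return hmac.new(secret, canonical, hashlib.sha1).hexdigest()

# Different parameter orderings now yield the same signature.
print(sign(b"key", "foo=one&bar=two&baz=three") ==
      sign(b"key", "baz=three&bar=two&foo=one"))  # True
```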

Another open question is where to store the signature in the request. By the time the request is submitted to the server, the signature derived from the contents of the request will be mixed in with the data that is used to generate the signature. Let’s say I decide to include the HMAC as a request parameter. I start with this request body:

foo=one&bar=two&baz=three

I wind up with this one:

foo=one&bar=two&baz=three&hmac=de7c9b8 ...

In order to calculate the HMAC on the server, I have to remove the incoming HMAC parameter from the request body and calculate the HMAC using the remaining parameters. This is where the previous issue comes into play. If the HMAC were not in the request, I could simply calculate the signature based on the raw incoming request. Once I start manipulating the incoming request, the chances of reconstructing it imperfectly rise, possibly introducing cases where the signatures don’t match even though the request is valid.

This is an issue that everyone implementing HMAC-based authentication for a Web service has to deal with, so I started looking into how other projects handled it. OAuth uses HMAC, with the added wrinkle that the signature must be applied to POST parameters in the request body, query string parameters, and the OAuth HTTP headers included with the request. For OAuth, the signature can be included with the request as an HTTP header or as a request parameter.

This is a case where added flexibility in one respect puts an added burden on the implementor in others. To make sure that the signatures match, OAuth has very specific rules for encoding and ordering the request data. It’s up to the implementor to gather all of the parameters from the query string, request body, and headers, get rid of the oauth_signature parameter, and then organize them based on rules in the OAuth spec.

Amazon S3’s REST API also uses HMAC signatures for authentication. Amazon embeds the user’s public key and HMAC signature in an HTTP header, eliminating the need to extract it from the request body. In Amazon’s case, the signed message is assembled from the HTTP verb, metadata about the resource being manipulated, and the “Amz” headers in the request. All of this data must be canonicalized and added to the message data to be signed. Any bug in your translation of those canonicalization rules into code means that none of your requests will be authenticated by Amazon.

Amazon uses the Authorization header to store the public key and HMAC. This is also the approach that Microsoft recommends. I think it’s superior to the parameter-based approach taken by OAuth. It should be noted that the Authorization header is part of the HTTP specification, and if you’re going to use it, you should do so in a way that complies with the standard.

For my service, which is simpler than Amazon S3 or OAuth, I’ll be using the Authorization header and computing the HMAC based on the raw incoming request.
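Concretely, the client side of that scheme might look like the following Python sketch. The "HMAC key-id:signature" header format here is my own illustration, not a standard scheme:

```python
import hashlib
import hmac

def sign_request(key_id, private_key, raw_body):
    # Sign the raw request body exactly as it will be sent, and
    # carry the key identifier and signature in the Authorization
    # header rather than in the body itself.
    signature = hmac.new(private_key, raw_body, hashlib.sha1).hexdigest()
    return {"Authorization": "HMAC %s:%s" % (key_id, signature)}

headers = sign_request("user-123", b"users-private-key",
                       b"foo=one&bar=two&baz=three")
print(headers["Authorization"])
```

Because the signature travels in a header, the server can verify it against the unmodified request body, avoiding the reconstruction problem entirely.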

I realize that HMAC may not be new to many people, but it is to me. Now that I understand it, I can’t imagine using any of the older approaches to build an authenticated Web service.

Regardless of which side of the Web service transaction you’re implementing, calculating the actual HMAC is easy. Normally the SHA-1 or MD5 hashing algorithms are used, and it’s up to the implementor of the service to decide which of those they will support. Here’s how you create HMAC-SHA1 signatures using a few popular languages.

PHP has a built-in HMAC function:

hash_hmac('sha1', "Message", "Secret Key");

In Java, it’s not much more difficult:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.codec.binary.Hex;

Mac mac = Mac.getInstance("HmacSHA1");
SecretKeySpec secret =
    new SecretKeySpec("Secret Key".getBytes(), "HmacSHA1");

mac.init(secret);
byte[] digest = mac.doFinal("Message".getBytes());

String hmac = Hex.encodeHexString(digest);

In that case, the Hex class is the hexadecimal encoder provided by Apache’s Commons Codec project.

In Ruby, you can use the HMAC method provided with the OpenSSL library:

require 'openssl'
require 'base64'

digest = OpenSSL::Digest.new('sha1')

Base64.encode64(OpenSSL::HMAC.digest(digest,
  "Secret Key", "Message"))

There are also libraries like crypto-js that provide HMAC support for JavaScript.
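And in Python, no third-party library is needed; the standard library’s hmac module does the same job:

```python
import hashlib
import hmac

# HMAC-SHA1 of "Message" under "Secret Key", as a hex string.
signature = hmac.new(b"Secret Key", b"Message", hashlib.sha1).hexdigest()
print(signature)
```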

I regret that I didn’t use Google Code Search more

Miguel de Icaza writes about the bad news that Google is shutting down Code Search. In it, he lists a number of things Code Search was useful for that never really occurred to me. I hate missing out. I particularly regret not taking advantage of it when I was wrestling with connection and socket timeouts with Commons HttpClient a while back.

Google Reader pays the strategy tax

In 2001, Dave Winer wrote about the strategy tax:

Ben explained that sometimes products developed inside a company such as Microsoft have to accept constraints that go against competitiveness, or might displease users, in order to further the cause of another product. I recognized the concept but had never heard the term.

Google is retiring all of the social features in Google Reader in order to push users to Google Plus instead. Discontinuing these features is bad — I find that people use them very effectively to share interesting stuff and would argue that the shared links from my Google Reader friends are generally more interesting than any single blog that I read. Even worse, Google is going to delete everyone’s past shared links in a few days. So people who have been using link sharing as a sort of blog will soon see their entire archive deleted.

Why? In order to prop up Google Plus. I wouldn’t argue that the strategy tax is never worth paying, but I would say that it’s higher than most companies realize. The good news for Google is that there’s not much competition in the news reader space any more, so it’s not like Google Reader users are going to rush off to use some other product as a result.

Update: I can’t find confirmation that archived shared links will be deleted and I don’t want to be a rumor monger. Hopefully I was just wrong about that.

Update: Courtney Stanton explains the value of Google Reader’s social features. (via Andy Baio) After reading it I realize that I wasn’t really taking advantage of Google Reader.

Managing the complexity of your software development process

GitHub has one of the most interesting approaches to product development and software development of anyone in the industry. Zach Holman posted an overview of their process and talked about how they have maintained it as the team has grown. Toward the end, he writes:

This stuff doesn’t come easily, but it unfortunately leaves easily. Figuring out ways to streamline, to improve your process, to grow your company as you grow your employees is a constant struggle. It’s something that should be continually re-evaluated.

Just as it’s a lot easier to add features to software than it is to remove them, so too is it a lot easier to make software development processes more complex than it is to simplify them.

Writing code other people can understand

Brent Simmons writes about how wrong he was to assume that nobody would ever see code he wrote, and how coding with the assumption that other people will eventually work on his code makes him a better developer:

But now I write code with the absolute certain knowledge that it will end up in somebody else’s hands. I could be wrong, yes, but I’ve learned that it helps me write better and more-maintainable code if I just assume from the start that somebody else, most likely a friend, will end up working on that code base.

It’s hard to overstate the importance of this approach. There are very specific coding habits that solo developers tend to pick up, and the longer a developer works solo, the more entrenched those habits become.

Starting with the command line first

Bagcheck, a company I’d never heard of, was acquired by Twitter today. In Luke Wroblewski’s announcement of the deal, he lists some of the things they learned in building the startup. One idea that really captured my imagination was building features starting with the command line interface:

We always add features to the Bagcheck API (and thereby command line interface) first. This provides us with a way to start using new things before we invest time in how they’ll look and work in a graphical user interface. In fact, the first bags ever made on Bagcheck were lovingly typed into the CLI character by character. Including my complete mountain biking set-up.

Obviously, we decided creating bags needed an easier solution than pure text entry! But starting with just the essential actions in the CLI gave us a great understanding of how bags could be created and managed in more capable situations. In fact, starting with the API and command line interface always forces us to distill things to their core essence and to understand them simply. What can go in and what will come out? That makes designing more enhanced versions of the same features much easier. We know what they need to do and why.

This reminds me of Josh Bloch’s talk on how to design a good API. In it, he argues that you should start coding APIs by writing a client for the API rather than implementing the back end of the API.

Designing from the API up is similar to designing from the schema up; either way, you’re starting with a data model, although in the API case you’re starting with a data model and verbs rather than just the data structure.

How I got automated texts to originate from one number

Being that we live in a DevOps world and all, the application I work on sends me texts when it looks like things are going wrong in production, even though I am a developer and not a systems administrator. For months, I have been mildly annoyed that these texts originate from a variety of phone numbers, preventing my phone from categorizing them nicely and forcing me to delete them individually to keep my texts organized.

The texts have been sent through the email-to-text gateway provided by AT&T. My scripts email a message to [email protected] and it arrives as a text message on my phone. Unfortunately, there’s no way to control the originating phone number when you use that approach.
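For reference, sending a message through an email-to-text gateway is just ordinary SMTP. In this Python sketch the addresses and SMTP host are placeholders, not the carrier’s real gateway address:

```python
import smtplib
from email.mime.text import MIMEText

# Placeholder addresses; the real message goes to the address
# the carrier assigns to the phone number.
GATEWAY = "phone-number@gateway.example"
SENDER = "alerts@example.com"

def build_alert(message):
    # Build a plain-text message for the email-to-text gateway.
    msg = MIMEText(message)
    msg["From"] = SENDER
    msg["To"] = GATEWAY
    msg["Subject"] = "Production alert"
    return msg

def send_alert(message):
    # Hand the message off to a local SMTP server for delivery.
    msg = build_alert(message)
    with smtplib.SMTP("localhost") as server:
        server.sendmail(SENDER, [GATEWAY], msg.as_string())
```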

It occurred to me that I may be able to use Google Voice to get around this problem. You can send texts from a Google Voice account, and I figured they had some kind of API that would let you do it. I went and set up a Google Voice account associated with my work email and then quickly discovered that there is no Web service that you can use to access Google Voice.

However, some enterprising person has created a Python library that accesses Google Voice through the regular Web interface. I did some testing and found that I could send the texts using the library. Unfortunately, I’m not allowed to install random Python libraries on our production servers. I had been hoping I could just use the curl utility to send the texts so that I could avoid having to ask our systems administrator to install anything, but the Python library was the only option.

Fortunately, I have my own virtual server, so I wrote a simple Python CGI script that sends the texts and installed it on that server. Here’s the source:

#!/usr/bin/env python

import cgitb, cgi

from googlevoice import Voice

cgitb.enable()
form = cgi.FieldStorage()

phoneNumber = form.getfirst("phone", "")
text = form.getfirst("text", "")

print "Content-Type: text/html"
print

if len(phoneNumber) == 10 and len(text) > 0:
    voice = Voice()
    voice.login("USERNAME", "PASSWORD")
    voice.send_sms(phoneNumber, text)

    print "<html><body>SMS Sent</body></html>"
else:
    print "<html><body>Bad request</body></html>"

So now my monitoring script sends the alert using curl to access the CGI script on my personal server, which in turn uses pygooglevoice to access Google Voice via screen scraping, which then sends me a text from a specific phone number. I don’t rate this approach very highly in terms of robustness, but at least my texts are now properly organized.

The opportunity cost of a broad skillset

Tim Bray laments missing the feeling of productivity you get when you work with a tool set that you’re intimately familiar with. Constantly experimenting with the next great thing is part of his job, and he writes specifically about the downside:

But it means that I’ve been sort of a perpetual newbie. There’s another big piece of the software experience, one I haven’t shared in for years: where you’re concentrating on some problem and you know the codebase and tools, so your time goes more into doing and less into learning. In fact, that kind of work, in particular maintaining a large running production system, is where most of my professional colleagues spend most of their years of service.

That captures what I always liked least when I worked as a consultant. Moving from one project to the next, often on a different technology stack, meant never really feeling comfortable with what you’re working on.

On the other hand, working on maintaining a large, established system is often frustrating in its own way. You’re saddled with a code base that’s tough to change, with libraries that are often out of date, and with processes that aren’t cutting edge.

I have a friend who has worked at the same small company for nearly 10 years, and they seem to rarely update anything that currently works. I describe his skill set as being trapped in amber. He is a complete master of the way Java applications were built ten years ago, but the industry has changed a lot in that time, and he’s missed out on benefitting from those changes because the company he works for is hidebound.

Keeping an existing application up to date is difficult. I’d love to see some statistics on how many applications that were built in Rails 1 have been upgraded to Rails 2 or Rails 3.

In the end, this is one of the toughest parts of a software development career to manage. Walking the line between having the sorts of deep skills that enable you to be spectacularly productive and having a wide skill set that enables you to contribute in more situations is tough. Personally, I’ve always envied developers who are hyper-specialized. I know a guy who’s an expert on C compilers and always has a job with whatever company is trying to push gcc forward. If nothing else, he doesn’t have to sit around wondering whether he’s making a big mistake by not picking up Scala.

Who says programmers aren’t superheroes?

I just wanted to point you to a post I really enjoyed, under the innocuous name Safari Keychain Woes. In it, developer Daniel Jalkut explains how he used OS X development tools to figure out why Safari 5.1 wasn’t filling in passwords properly from his Keychain. Interesting from beginning to end and a great illustration of what you can do when you understand how things really work.

Screening systems and the base rate fallacy

Kellan Elliott-McCrea has a great post about the high cost of false positives when it comes to building software that detects fraud, spam, abuse, or whatever. The cost of false positives is explained by the base rate fallacy. The BBC explains the base rate fallacy very well. Here’s a snippet:

If 3,000 people are tested, and the test is 90% accurate, it is also 10% wrong. So it will probably identify 301 terrorists – about 300 by mistake and 1 correctly. You won’t know from the test which is the real terrorist. So the chance that our man in the mac is the real thing is 1 in 301.
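The arithmetic behind that snippet is easy to check:

```python
population = 3000
accuracy = 0.90
real_terrorists = 1

# A 90% accurate test still flags 10% of the innocent people.
false_positives = (population - real_terrorists) * (1 - accuracy)
flagged = false_positives + real_terrorists

# The chance that a flagged person is a real terrorist.
p_real = real_terrorists / flagged
print(round(flagged))  # 301
```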

Anybody who wants to talk about screening systems without an understanding of the base rate fallacy needs to do more homework.
