Everything you needed to know about backscatter

Bruce Schneier has rounded up all the links on the backscatter X-ray scanners and related issues. Bullet points:

  • The health risks of the scanners are overblown.
  • The claims that the scanning/groping will make flying safer are even more overblown.
  • The deployment of these scanners has more to do with lobbying than with a rational evaluation of the best way to make flying safer.

In this piece (not yet linked by Schneier), TSA screeners surveyed say that conducting the more invasive patdowns makes their job worse. My inclination in the face of this new scanning is to request the patdown for exactly that reason. Walking through the machine imposes a cost on the person being scanned, and no cost on the person doing the scanning. The patdown sucks for the person conducting the patdown and the person being patted down. Seems more fair to me.

As far as predictions go, my guess is that the money has been spent and we are not likely to see the government back off on the scanning. As irritated as people are now, they’ll eventually come to accept it, and it will become one more permanent contributor to the horrible experience that air travel has become.

When should you change your passwords?

One of my closely held beliefs is that expiring passwords reduce rather than increase security because the more often you have to change your passwords, the less likely you are to remember them. That is offset by the fact that people tend to use one password everywhere, so if you force people to change them, that pattern can be broken to some extent.

This week, Bruce Schneier has an essay on the subject. Here’s his bottom line, but read the whole thing:

So in general: you don’t need to regularly change the password to your computer or online financial accounts (including the accounts at retail sites); definitely not for low-security accounts. You should change your corporate login password occasionally, and you need to take a good hard look at your friends, relatives, and paparazzi before deciding how often to change your Facebook password. But if you break up with someone you’ve shared a computer with, change them all.

Blizzard continues to innovate on the security front

It would probably surprise people to learn that Blizzard, a game company, provides better security options for players of its games (World of Warcraft and now Starcraft) than nearly all banks and financial services companies do for their customers. The problem Blizzard faces is that people steal World of Warcraft accounts all the time, either to use the characters to farm gold, or to just strip all of the cash and things that can be sold from the account and pocket the cash.

A number of methods are used to steal passwords, including phishing, catching the passwords using key loggers, and just brute forcing them. Blizzard’s first big attempt to solve the problem was to give users the option of protecting their account using two-factor authentication — their password and an authenticator that is tied to the account. The authenticator is a key fob (or a phone app) that generates a number every few seconds that must be entered in order to log in. Once an authenticator is tied to your account, getting your password stolen is no longer a problem.
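To give a sense of how this style of authenticator works, here’s a minimal sketch of time-based code generation in the style of RFC 6238 TOTP. Blizzard’s actual authenticator uses its own scheme, so this is an illustration of the general technique, not their implementation:

```python
import hashlib
import hmac
import struct
import time

def totp_code(secret, period=30, digits=6, now=None):
    """Derive a short-lived numeric code from a shared secret (RFC 6238 style)."""
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# The server and the authenticator share the secret and a clock, so both
# compute the same code for the same time window, with no channel between
# them. This secret/time pair is the published RFC 6238 test vector.
print(totp_code(b"12345678901234567890", digits=8, now=59))  # prints 94287082
```

Because the code changes every window, a stolen password alone is useless: the thief would also need the current output of the device holding the secret.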

Despite the fact that the authenticator app is free and the physical authenticator only costs $6, many players do not use them, and accounts still get stolen all the time. Indeed, account thieves almost always attach their own authenticator to accounts as soon as they’ve compromised them, making it that much more difficult for players to get them back. (I shudder to think about how much money Blizzard spends dealing with account theft.)

To enable players who haven’t gotten an authenticator to secure their accounts, Blizzard has introduced a dial-in authenticator. With it, you can assign a phone number to your account. If there’s something unusual about an authentication attempt, you will be required to dial in to a toll free number from that phone and enter a PIN in order to log in successfully.
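The interesting part of the dial-in scheme is the trigger: ordinary logins proceed unchallenged, and only unusual ones require the out-of-band phone confirmation. Blizzard hasn’t published its rules, so the heuristics and field names below are entirely invented, but the shape of the check is something like:

```python
# Hypothetical risk check in the spirit of the dial-in authenticator:
# all field names here are invented for illustration.
def login_requires_phone_confirmation(attempt, profile):
    """Flag logins from an unfamiliar country or an unknown device."""
    unusual = (attempt["ip_country"] != profile["usual_country"]
               or attempt["device_id"] not in profile["known_devices"])
    return unusual

profile = {"usual_country": "US", "known_devices": {"laptop-1"}}
print(login_requires_phone_confirmation(
    {"ip_country": "RO", "device_id": "laptop-2"}, profile))  # True: challenge
print(login_requires_phone_confirmation(
    {"ip_country": "US", "device_id": "laptop-1"}, profile))  # False: proceed
```

Risk-based challenges like this trade a little friction on anomalous logins for zero friction on the vast majority of legitimate ones.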

There’s bound to be an interesting article written about the economics of account security that explains why Blizzard finds it more worthwhile to implement robust authentication solutions when so many businesses that are susceptible to financial fraud do not. Are people that much more likely to steal your World of Warcraft characters than they are to steal your Amazon.com account and use the credit cards you’ve saved there? Or is it that people are more willing to go to extra trouble to secure their game accounts?

Update: There are lots of smart comments about this at Hacker News as well.

The growing misperception of HTML5

Today the New York Times Opinionator blog ran a piece by Robert Wright that made the following assertion about HTML5:

In principle, HTML 5 will allow sites you visit to know your physical location and will make it easier for them to keep track of your browsing and shopping history.

That assertion is based on this news article from the Times, which says:

In the next few years, a powerful new suite of capabilities will become available to Web developers that could give marketers and advertisers access to many more details about computer users’ online activities. Nearly everyone who uses the Internet will face the privacy risks that come with those capabilities, which are an integral part of the Web language that will soon power the Internet: HTML 5.

All of this talk is about one piece of HTML5, client storage. For the details, check out Mark Pilgrim’s chapter on local storage in Dive Into HTML5.

There are two points to make. The first is that Web sites won’t have access to any information that they don’t already have. In that sense, the talk about “access to many more details” is misleading. It’s not that Web sites will have access to new information, but rather that they’ll have a new place to store information they already collect, which may be more convenient for them.

For example, if I don’t share my current location with FourSquare, they won’t suddenly be able to retrieve it if I use a browser that supports local storage. However, if I do give them access to my current location, they could store it in local storage on my own computer rather than using their own resources to store it on their server. In that sense, the information may suddenly be worth storing and easier to access, but it’s information they could already obtain and store on their own servers if they chose to do so. This aspect of local storage subjects users to no real risk beyond the risk already posed by cookies or other vectors for storing information about users.

What’s really gotten people wound up is evercookie (mentioned in the New York Times story), a proof of concept that demonstrates how the variety of ways Web sites can store information on the client can be exploited so that it’s nearly impossible to delete tracking cookies. Browser cookies are one way to store information on the client, as is local storage. Flash Local Shared Objects (also known as Flash cookies) can also store information on behalf of Web sites on your computer. evercookie uses a number of other methods for storing information as well. The nefarious thing about it is that when the information is deleted in one of these locations, evercookie replicates it again from another location where it is still stored. So if I delete my browser cookie, evercookie will copy that information from Flash and put it back in place. If I delete the Flash cookie, it will look in one of the other locations where it stashes information and copy it back again.
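The respawning trick described above reduces to a few lines. The storage backends here are plain dictionary entries standing in for browser cookies, local storage, Flash LSOs, and so on; the point is only the replication logic:

```python
# Conceptual sketch of cookie "respawning": if the tracking ID survives in
# any one store, it gets copied back into every store it was deleted from.
stores = {"browser_cookie": "user-1234",
          "local_storage": "user-1234",
          "flash_lso": "user-1234"}

def delete(store):
    stores[store] = None  # the user clears one storage location

def respawn():
    survivor = next((v for v in stores.values() if v is not None), None)
    if survivor is not None:
        for name in stores:
            stores[name] = survivor  # restore the deleted copies

delete("browser_cookie")
respawn()
print(stores["browser_cookie"])  # prints user-1234 -- the ID is back
```

Defeating it requires clearing every store in the same instant, which is exactly what browsers didn’t make easy.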

Using tricks like this to make it difficult for users to prevent Web sites from tracking them is unethical. Web sites that take this approach should be classified as spyware. But the existence of these techniques has nothing to do with HTML5.

What concerns me is that we’re on a path toward HTML5 being perceived negatively by regular users because the only thing they’ve heard about it is that it is likely to compromise their privacy. This perception could become a major stumbling block on the road to wider usage of browsers with HTML5 support. As developers, it’s important to educate users and perhaps more importantly, the media, so that people don’t conjure up risks where they don’t exist and damage the HTML5 brand in the process.

Your password should not be “password”

Today I got to look at the user table in an application with passwords stored as plain text. Out of around 7,100 users, over 170 had the password “password.” Around 10 other users had heard that passwords should contain letters and numbers, and thoughtfully chose “password1.” Needless to say, this application should probably store hashes of the passwords rather than storing them in plain text, and also apply some basic test of strength that requires more than just lower case letters. What the experience left me with, though, was a burning desire to thwart users who specifically want to use the word “password” or any variation thereof as a password. I even want to create a special error message just for them, just to let them know that their combination of laziness and cleverness is not appreciated.

Here’s a regular expression that matches many, many variations on the word password:

/^p[a4@][s5][s5]w[o0]rd(\d*|\W*)$/i

You too can stamp out the blight of your users using “password” as their password.

Update: Accounted for people who substitute the letter “s” with the number “5.”

Update: Now “p@ssword” is not allowed, either.
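For anyone who wants to try it, here’s the same expression in Python. The test strings are just illustrative samples:

```python
import re

# Rejects "password" and leet-speak variants: a/4/@, s/5, o/0, plus
# trailing digits or punctuation, case-insensitively.
PASSWORD_BLIGHT = re.compile(r"^p[a4@][s5][s5]w[o0]rd(\d*|\W*)$", re.IGNORECASE)

for candidate in ["password", "Password1", "p4ssw0rd", "p@55word!!", "correcthorse"]:
    verdict = "rejected" if PASSWORD_BLIGHT.match(candidate) else "allowed"
    print(f"{candidate}: {verdict}")  # only "correcthorse" is allowed
```

Note the alternation `(\d*|\W*)` allows trailing digits or trailing punctuation, but not a mix of both; tightening or loosening that tail is a judgment call.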

Bruce Schneier on Stuxnet

I’ve been transfixed by the Stuxnet worm since I heard about it. If you’re not up on all things Stuxnet, check out Bruce Schneier’s blog post explaining what we do and don’t know about Stuxnet and how it works. Here’s why people think Stuxnet was created by a government agency:

Stuxnet doesn’t act like a criminal worm. It doesn’t spread indiscriminately. It doesn’t steal credit card information or account login credentials. It doesn’t herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn’t threaten sabotage, like a criminal organization intent on extortion might.

Read the whole thing.

Update: Security researcher Steve Bellovin’s Stuxnet post is informative as well. (Via @medley on Twitter.)

Car alarms don’t work

As I was listening to a car alarm go off last night (and again this morning), I wondered whether as a security measure they are effective at all. My guess was that the massive number of false positives ensures that they go completely ignored when they go off. I was right. Transportation Alternatives has the numbers. Don’t go around thinking The Club is a better choice, either. Professional thieves target cars with The Club so that they can avoid walking around with a pry bar.

Be suspicious of the worst-case

Bruce Schneier cautions people to be wary of worst-case scenarios:

There’s a certain blindness that comes from worst-case thinking. An extension of the precautionary principle, it involves imagining the worst possible outcome and then acting as if it were a certainty. It substitutes imagination for thinking, speculation for risk analysis, and fear for reason. It fosters powerlessness and vulnerability and magnifies social paralysis. And it makes us more vulnerable to the effects of terrorism.

Worst-case thinking means generally bad decision making for several reasons. First, it’s only half of the cost-benefit equation. Every decision has costs and benefits, risks and rewards. By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.

I never really thought about the fundamental laziness involved in obsessing over the worst-case before.

Designing a no-fly list

One news item arising from the arrest of accused Times Square bomber Faisal Shahzad was that Emirates Airlines didn’t update their copy of the no-fly list soon enough after Shahzad’s name was added to prevent him from buying a ticket or boarding a flight out of the country. A non-programmer friend of mine was wondering why the airlines keep their own copy of the no-fly list rather than accessing some centralized resource that always has the most up-to-date list of names, and I thought I’d take a stab at explaining a few of the reasons why that may be the case.

The first question is, what’s a no-fly list? In short, it’s a list of names that airlines match passengers against using some algorithm. I have no idea how this part works, but it’s not really important. When someone tries to purchase a ticket or board a plane, the system should run their name against the list and return some kind of indication of what action should be taken if there’s a match. In matching against this kind of list, fuzzy matches will return more false positives, and stricter matches will do a poor job of accounting for things like alternate spellings and people adding or leaving out their middle names.
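As a toy illustration of that tradeoff, here’s a fuzzy matcher built on Python’s difflib with a tunable threshold. The names and threshold are made up; real systems presumably use far more sophisticated name-matching:

```python
from difflib import SequenceMatcher

NO_FLY = ["faisal shahzad", "john q example"]  # illustrative names only

def name_matches(name, watchlist, threshold=0.85):
    """Return the best fuzzy match at or above threshold, or None.

    A lower threshold catches alternate spellings but raises false
    positives; a higher one misses variants like a dropped middle name.
    """
    name = name.lower().strip()
    best = max(watchlist, key=lambda w: SequenceMatcher(None, name, w).ratio())
    return best if SequenceMatcher(None, name, best).ratio() >= threshold else None

print(name_matches("Faisal Shahzad", NO_FLY))  # exact: matches
print(name_matches("Faisel Shahzad", NO_FLY))  # one-letter variant: still matches
print(name_matches("Frank Sharp", NO_FLY))     # prints None: no match
```

Every choice of threshold draws the line between false positives and false negatives somewhere, which is why the matching algorithm matters as much as the list itself.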

The question at hand, though, is how best to provide access to the no-fly list. These days, a developer creating a no-fly list from scratch would probably think about it as a Web service. Airlines would simply submit the names they wanted to check to the service, which would handle the matching and return a result indicating whether the person is on the list, or more specifically, which list they’re on. There are a number of advantages to this approach:

  1. A centralized list is always up to date. New names are added immediately and scrubbed names are removed immediately.
  2. The government can impose a standard approach to name matching on all of the list’s end users, avoiding problems with airlines creating their own buggy implementations.
  3. This approach offers more privacy to the people on the list, some of whom shouldn’t be on there. If you’re on the list but you never try to fly, nobody will know that you’re on there except the government agency compiling the list.
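On the airline side, the centralized design described above would be very thin: submit a name, get back which list (if any) it matched. The lists, names, and response shape below are entirely hypothetical, with the network service reduced to a local function for illustration:

```python
# Hypothetical centralized check: the airline submits a name, the service
# does the matching and says which list (if any) the person is on.
WATCHLISTS = {"no-fly": {"jane q example"}, "selectee": {"john q example"}}

def check_passenger(name):
    name = name.lower().strip()
    for list_name, names in WATCHLISTS.items():
        if name in names:
            return {"match": True, "list": list_name}
    return {"match": False, "list": None}

print(check_passenger("Jane Q Example"))  # {'match': True, 'list': 'no-fly'}
print(check_passenger("Anyone Else"))     # {'match': False, 'list': None}
```

The appeal is obvious: one implementation of the matching logic, one copy of the data, updated in one place.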

Given the strengths of this approach, why would the government instead allow each airline to maintain its own copy of the list, distributing updates as the list changes? I can think of a few reasons.

If access to the list is provided by a centralized Web service, every airline endpoint must have the appropriate connectivity to communicate with that service. For reasons of security and cost, most airline systems are almost certainly deployed on private networks that don’t have access to the Internet. To get this type of system to work, each airline would need direct access to the government service, an internal proxy, or some kind of direct connection to the government network that bypasses the Internet. All of those solutions are impractical.

Secondly, communicating with a central service poses a risk in terms of reliability. If the airlines can’t connect to the government service, do they just approve all of the ticket purchases and boarding requests that are made? If not, do the airline’s operations grind to a halt until communication is restored? The government probably doesn’t want to make all of the airlines dependent on the no-fly list service in real time.

And third, a centralized service opens up the airlines to a variety of attacks that aren’t available if they maintain their own copies. Both denial of service attacks and some man in the middle attacks could be used to prevent airlines from accessing the no-fly list, or to return bad information for requests to the no-fly list.

From an implementation standpoint, it’s easier for the airlines to maintain the lists themselves and to integrate that list into their own systems. Doing so is more robust, and the main risks are buggy implementations and out of date data. I wonder what sorts of testing regimes the government has in place to make sure that consumers of the no-fly list are using it properly? How do they test the matching algorithm that compares the names of fliers to names on the list?