A test of the Visionics face recognition system at Palm Beach International Airport has been a failure. Not only did the system fail to flag the people it was supposed to, but it also generated a steady stream of false positives. This outcome was entirely predictable, and, in fact, was predicted by Bruce Schneier last September. I quote:

Suppose this magically effective face-recognition software is 99.99 percent accurate. That is, if someone is a terrorist, there is a 99.99 percent chance that the software indicates “terrorist,” and if someone is not a terrorist, there is a 99.99 percent chance that the software indicates “non-terrorist.” Assume that one in ten million flyers, on average, is a terrorist. Is the software any good?

No. The software will generate 1000 false alarms for every one real terrorist. And every false alarm still means that all the security people go through all of their security procedures. Because the population of non-terrorists is so much larger than the number of terrorists, the test is useless. This result is counterintuitive and surprising, but it is correct. The false alarms in this kind of system render it mostly useless. It’s “The Boy Who Cried Wolf” increased 1000-fold.

Currently the system generates false positives about 1% of the time, a false-positive rate one hundred times higher than the 0.01% in Schneier's hypothetical, so the real system is much, much worse than the scenario he describes.
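Schneier's arithmetic is easy to check. A minimal sketch of the base-rate calculation, using the numbers from the quote (one terrorist per ten million flyers, and a detection rate of 99.99% in the hypothetical), with both false-positive rates plugged in:

```python
# Base-rate arithmetic for a screening system.
# Numbers taken from the text: 1 terrorist per 10 million flyers.
FLYERS = 10_000_000
TERRORISTS = 1
NON_TERRORISTS = FLYERS - TERRORISTS

def false_alarms_per_real_alarm(false_positive_rate, detection_rate=0.9999):
    """Expected false alarms for every expected true alarm."""
    expected_false_alarms = NON_TERRORISTS * false_positive_rate
    expected_true_alarms = TERRORISTS * detection_rate
    return expected_false_alarms / expected_true_alarms

# Schneier's hypothetical: 99.99% accurate, i.e. a 0.01% false-positive rate.
print(round(false_alarms_per_real_alarm(0.0001)))  # ~1000 false alarms per terrorist

# The Palm Beach trial: roughly a 1% false-positive rate.
print(round(false_alarms_per_real_alarm(0.01)))    # ~100,000 false alarms per terrorist
```

Even a "magically effective" 99.99%-accurate system drowns one real alarm in a thousand false ones; at the 1% rate actually observed, that ratio climbs to roughly a hundred thousand to one.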