rc3.org

Strong opinions, weakly held

Tag: security

Assessing iPhone security

In iPhones, the FBI, and Going Dark, a security researcher explains which threats an iPhone actually protects you from:

Despite this, “best” does not mean “impregnable”. The FBI claims that iPhones are “bricks” containing no useful information and Apple claims that iMessage is “end-to-end” secure. Neither is the case.

As a device, the iPhone is secure, but how you use it determines how much information you expose. In short, the iPhone is normally used as part of a system with many potentially leaky components.

My favorite Heartbleed coverage

Everybody is probably already familiar with the Heartbleed bug in OpenSSL that was disclosed this week. I saw two explanations that really impressed me with their clarity. The first is Randall Munroe’s XKCD comic, which illustrates exactly what the problem is in just a few panels. The second, for those who prefer text, was written by Rusty Foster, formerly of Kuro5hin, who did a great job of explaining Heartbleed in the New Yorker.
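If you want the mechanics in code form, here’s a toy model in Python. It’s a sketch of the logic flaw, not OpenSSL’s actual C code: the server builds its heartbeat reply using the length the client claims, without checking it against the payload the client actually sent.

```python
# Toy model of the Heartbleed over-read. Simulate process memory as one
# flat buffer, with the heartbeat payload stored next to other secrets.
memory = b"BIRD" + b" secret-key=0xDEADBEEF session=alice:hunter2"

def heartbeat(claimed_length: int) -> bytes:
    # The payload the client sent is only 4 bytes ("BIRD"), but the buggy
    # server trusts the length field in the request and never validates it,
    # so it happily reads past the payload into adjacent memory.
    return memory[:claimed_length]

print(heartbeat(4))   # b'BIRD' -- the honest case
print(heartbeat(48))  # b'BIRD secret-key=...' -- adjacent secrets leak
```

The fix was essentially a one-line bounds check: if the claimed length exceeds the actual payload, discard the request.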

Brent Simmons wrote up one possible implication of the bug, which is that writing software in C is no longer worth the risk.

How to report on information security

Today the New York Times has another Edward Snowden story, this one by David Sanger and Eric Schmitt. It discusses the means he used to harvest millions of documents from the NSA’s internal network, and runs under the headline Snowden Used Low-Cost Tool to Best NSA. Good security reporting focuses on the conflicting goals that come into play when designing secure systems, rather than retreating into counterfactual thinking.

Here’s the sort of counterfactual thinking I’m talking about:

Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.

And:

Agency officials insist that if Mr. Snowden had been working from N.S.A. headquarters at Fort Meade, Md., which was equipped with monitors designed to detect when a huge volume of data was being accessed and downloaded, he almost certainly would have been caught.

And:

Officials say web crawlers are almost never used on the N.S.A.’s internal systems, making it all the more inexplicable that the one used by Mr. Snowden did not set off alarms as it copied intelligence and military documents stored in the N.S.A.’s systems and linked through the agency’s internal equivalent of Wikipedia.

When telling a story about security, or any system, three aspects are involved: the intended functionality, security (and safety in general), and cost. Here’s the only sentence from the story that even hints at these tradeoffs:

But he was also aided by a culture within the N.S.A., officials say, that “compartmented” relatively little information.

The NSA built a system for sharing information internally out of off-the-shelf Web technology (which almost certainly lowered costs substantially), and provided broad access to it for the same reason any organization tries to improve communication through transparency. They wound up with a system that was no doubt difficult to secure from people like Edward Snowden.

While the crawler Snowden used may well have been easy to detect, writing a crawler that is difficult to detect is not particularly challenging. A Web crawler is pretty straightforward: it downloads a Web page, extracts all of the links, then follows those links and repeats the process, recursively finding every page reachable from where it started. On the real Internet, crawlers identify themselves and follow the robots exclusion standard, a voluntary code of conduct for people who write programs to crawl the Web. There’s no reason it has to be that way, though. Browsers (and crawlers) identify themselves with a user agent, and when you request a Web page, you can use any user agent you want.
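To make that concrete, here’s a minimal sketch of such a crawler in Python, using only the standard library. The starting URL, page limit, and user agent string are placeholders; the point is that the User-Agent header is just a string the client sends, so nothing stops a crawler from claiming to be an ordinary browser.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def crawl(start_url, limit=100):
    seen, queue = set(), [start_url]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen or not url.startswith("http"):
            continue
        seen.add(url)
        # The User-Agent header is whatever the client says it is; a
        # crawler can claim to be any ordinary browser it likes.
        request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
        try:
            html = urlopen(request).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # skip pages that fail to load
        parser = LinkExtractor()
        parser.feed(html)
        # Download a page, extract the links, follow them, repeat.
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen
```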

The point is that there’s nothing about any specific request from a crawler that would make it easy to detect. Then there’s the shape of the traffic as a whole: the recursive pattern of requests a crawler produces might be suspicious, but detecting that sort of pattern is a lot more difficult, and it can be obfuscated as well. If you have months to work, there are a lot of options for disguising your Web crawling activity.

Finally, there’s the sheer volume of data Snowden downloaded; he requested literally millions of URLs from within the NSA’s network. There are ways to obscure this too, especially if you can run the crawler from multiple systems, but if you’re going to download over a million pages, it’s difficult to disguise the fact that you have done so. Even then, detecting the activity requires some system that actually monitors traffic volumes.
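That kind of monitoring is conceptually simple. Here’s a minimal sketch in Python, assuming access logs that record a user and a date for each request (the field names and the threshold are hypothetical):

```python
from collections import Counter

def flag_heavy_users(log_entries, daily_threshold=10_000):
    """log_entries: an iterable of (user, date) pairs parsed from access
    logs. Returns the users whose request count on any single day exceeds
    the threshold -- the kind of volume anomaly the monitors at Fort Meade
    were reportedly designed to catch."""
    counts = Counter(log_entries)  # (user, date) -> number of requests
    return {user for (user, date), n in counts.items() if n > daily_threshold}
```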

Somehow Snowden also managed to take the information his crawler gathered out of the NSA. That seems like another interesting breakdown in the NSA’s security protocols.

The article has plenty of discussion of why Snowden should have been detected, but very little about why he wasn’t, and even less about how the desire to secure the system is at odds with the other goals for it. The thing is, the journalists involved didn’t need to rely on the NSA to give them any of this information. Anyone familiar with these sorts of systems could have walked them through the issues.

Any article about security (or safety) should focus on the conflicts that make building a secure system challenging. The only way to learn from these kinds of incidents is by understanding those conflicts. One thing I do agree with in the story is that Snowden’s approach wasn’t novel or innovative. That’s why the story of the tradeoffs inherent in the system is the only interesting story to tell.

The magnitude of Adobe’s data breach

I didn’t really pay much attention when Adobe’s massive data breach was first reported, but now that all of the details have emerged, we know that the scope of the breach is truly spectacular. The Naked Security blog has the details. This episode is particularly sad because the best practices around password storage are well understood. Even though practices like using slow hashing algorithms are pretty new, and I wouldn’t have expected Adobe to have adopted them, the basic approach of storing a salted hash has been in wide use for quite some time.
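For reference, the well-understood approach looks something like this sketch in Python: a random per-user salt combined with a deliberately slow key-derivation function. The choice of PBKDF2 and the iteration count here are illustrative, not a claim about what Adobe should have deployed.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow; tune to your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per-user salt, stored alongside the hash
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

The salt ensures that identical passwords produce different hashes, and the slow KDF makes brute-forcing a stolen database expensive; neither protects you if, as Adobe apparently did, you store reversibly encrypted passwords instead.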

I hope Adobe conducts a productive investigation of the incident and shares the systemic failures that led to the breach: not just the theft of the user database, but also the decision not to migrate to a more secure method of password storage over time. My guess is that Adobe has many Web properties, plus native applications that need to authenticate, and that none of them were cleanly abstracted from the database that stored the encrypted passwords. If so, migrating to a new scheme was always deemed too low a priority to be worth the extensive effort required.

The IETF is on the case

IETF chairman Jari Arkko and Stephen Farrell, IETF Security Area Director, comment on how future Internet standards will respond to the threat of pervasive monitoring (a.k.a. the NSA). The fact that they openly refer to pervasive monitoring as a threat to be countered is a very good sign.

All of this is a long way of saying that I was totally unprepared for today’s bombshell revelations describing the NSA’s efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it’s true on a scale I couldn’t even imagine. I’m no longer the crank. I wasn’t even close to cranky enough.

Matthew Green: On the NSA. Click through for a good overview of the likely methods and vectors for attack in the SSL ecosystem.

The risks of a dead man’s switch

Bruce Schneier considers Edward Snowden’s dead man’s switch, which will trigger the wide release of his trove of documents if he’s killed:

I would be more worried that someone would kill me in order to get the documents released than I would be that someone would kill me to prevent the documents from being released.

That’s the security mindset.

Taking a stab at verifiable anonymity

The New Yorker is the first publication to create an anonymous drop box for sources based on Strongbox, an anonymized document sharing tool by Kevin Poulsen and Aaron Swartz. What the architecture really shows is how difficult it is to achieve anonymity and security on the Internet, given the amount of data exhaust created by just about any action online. If nothing else, this underscores what an amazing technical achievement Bitcoin is.

Linode post on getting hacked

Linode’s security incident report

Back on April 12, my Web host, Linode, sent me an email letting me know that I needed to reset my password, with no further details. Today they announced that their user management application was hacked and that the hackers were able to download their full database, including hashed passwords and encrypted credit card information. The hackers also have the public and private keys to the credit card database; they can recover the credit card numbers if they can brute-force the passphrase for the private key. When it comes to security, taking shortcuts is death.

Why’s SQL injection so prevalent?

Why are SQL injection vulnerabilities so prevalent? Because most of the PHP/MySQL documentation uses examples with SQL injection vulnerabilities and no discussion of the potential risks.
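The difference between the vulnerable pattern those examples teach and the safe one is a single line. Here’s a sketch using Python’s sqlite3 module (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "' OR '1'='1"  # attacker-controlled input

# Vulnerable: interpolating the input into the SQL string lets the quote
# characters rewrite the query, so the WHERE clause matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % name).fetchall()
print(rows)  # [('alice', 's3cret')] -- the filter was bypassed

# Safe: a parameterized query treats the input strictly as a value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] -- nobody is literally named "' OR '1'='1"
```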
