IETF chairman Jari Arkko and Stephen Farrell, IETF Security Area Director, comment on how future Internet standards will respond to the threat of pervasive monitoring (a.k.a. the NSA). The fact that they openly refer to pervasive monitoring as a threat to be countered is a very good sign.
All of this is a long way of saying that I was totally unprepared for today’s bombshell revelations describing the NSA’s efforts to defeat encryption. Not only does the worst possible hypothetical I discussed appear to be true, but it’s true on a scale I couldn’t even imagine. I’m no longer the crank. I wasn’t even close to cranky enough.
Matthew Green: On the NSA. Click through for a good overview of the likely methods and vectors for attack in the SSL ecosystem.
There have been many depressing if not altogether unexpected revelations since Glenn Greenwald broke the story of Edward Snowden’s NSA leaks. Reporters have been working overtime to dig into NSA programs for snooping on electronic conversations. Today’s New York Times story on the NSA compromising SSL is perhaps the biggest. SSL is the secure protocol browsers use to communicate with Web servers. It is the foundation of secure commerce on the Web.
Here’s the crux of the story:
Beginning in 2000, as encryption tools were gradually blanketing the Web, the N.S.A. invested billions of dollars in a clandestine campaign to preserve its ability to eavesdrop. Having lost a public battle in the 1990s to insert its own “back door” in all encryption, it set out to accomplish the same goal by stealth.
The agency, according to the documents and interviews with industry officials, deployed custom-built, superfast computers to break codes, and began collaborating with technology companies in the United States and abroad to build entry points into their products. The documents do not identify which companies have participated.
The N.S.A. hacked into target computers to snare messages before they were encrypted. And the agency used its influence as the world’s most experienced code maker to covertly introduce weaknesses into the encryption standards followed by hardware and software developers around the world.
For more, see this article by Bruce Schneier. As he points out, the NSA has subverted these protocols by cheating the system, not through a cryptanalytic attack on SSL itself. We, as builders of the Internet, need to figure out what we can and can’t trust at this point. If anything, this shows more clearly than ever that closed-source software is fundamentally untrustworthy.
For more on Edward Snowden, see this piece by Jay Rosen.
The biggest pushback I got on my Seven Signs of Dysfunctional Engineering Teams post was in response to my point that dysfunctional teams favor process over tools. Critics argued that it’s easier to change a process than a tool, and that the more important distinction is between a “good process” and a “bad process.”
Creating processes is the default in any organization. It starts simply, with reminders: “Make sure you turn out the lights before you leave the room.” Or, “Please email the international team before you launch any features on the UK site.” The establishment of new processes is normal and completely expected.
Atul Gawande became famous for writing about how process can save lives, talking about how discovering best practices and codifying them can provide massive benefits in a medical setting. The New York Times published a really interesting article about Toyota donating efficiency to the Food Bank for New York City. Process improvement is great, and useful.
However, as engineers working in the world of software, we should be building tools to automate these processes whenever it makes sense. Why write down a checklist if you can write a script to execute it? By building tools we can make doing the right thing the path of least resistance. Automation is no panacea, but it can be a powerful productivity multiplier, especially for managers.
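To make the checklist-versus-script point concrete, here’s a minimal sketch of what turning a checklist into a tool can look like. The step names and commands are hypothetical stand-ins, not any particular team’s release process:

```python
import subprocess
import sys

# A hypothetical pre-release checklist, expressed as commands instead of prose.
# Each entry is (description, shell command); the commands here are
# illustrative placeholders ("true" always succeeds).
CHECKLIST = [
    ("run the test suite", "true"),   # stand-in for e.g. "pytest"
    ("lint the codebase", "true"),    # stand-in for e.g. "flake8 ."
]

def run_checklist(steps):
    """Run each step in order; stop and report the first failure."""
    for description, command in steps:
        print(f"-> {description}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            print(f"FAILED: {description}")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if run_checklist(CHECKLIST) else 1)
```

Once the checklist lives in a script, running it is one command, skipping a step is impossible, and the script itself becomes the documentation of the process.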
Here’s an example. I worked at a company that had a large set of regression tests that were run before every release, and no automated testing regime. When we were ready to release, the entire product team spent at least a day reading tests from an Excel spreadsheet and running them against the system by hand. Manual testing did give humans the opportunity to notice problems that automated checks might have missed. On the other hand, the process was incredibly slow, and it created a serious disincentive to push new releases, which was a problem unto itself.
The sign of dysfunction I was talking about was that an organization fails to recognize and exploit opportunities to create or obtain tools to replace processes or automate redundant tasks. This can happen for a lot of reasons, but the most pernicious is an unwillingness on the part of management to invest resources in building internal software.
That’s what I was talking about.
Marc Hedlund on the less immediately tangible but in many ways more rewarding products of being a manager. The difference between good jobs and bad jobs is the manager you report to, a lot of the time. Most people spend more time at work than on any other single thing. As a manager, your responsibility is to make that time more fulfilling, enjoyable, and productive. I find it’s a good gig.
One of the rules of modern life is that you can’t choose your billionaires; they choose you.
Paul Ford explains why Jeff Bezos would purchase the Washington Post.
Glenn Greenwald’s partner was returning to Brazil by way of London when he was detained under a terrorism law. He was then questioned not about terrorist plots but about Greenwald’s reporting work and the electronic media he was carrying. In their attempts to squelch reporting about how terrorism laws are abused, the UK is demonstrating the typical form that abuse takes.
For those who grew up with the internet, the people they know online and the people they know offline are often one and the same. We interact all day on Facebook with friends we first met face-to-face, and we meet our Twitter followers for drinks. The internet is real life, and that’s all there is to it.
Summer Anne Burton explains to the dim why cheating online is just cheating.
In response to my previous post on engineering culture, Bill Higgins asked in the comments:
Recently I worked on a project with multiple sites, and one of our toughest problems was the vast cultural differences between the sites. As a trivial example, one of the sites was militant about test automation and another site barely paid lip service to it.
So it seems like there is some happy medium between “multiple, incompatible cultures” and “monoculture”. I would be interested to hear your thoughts on where cultural homogeneity is helpful and where it is harmful.
It seems to me that the problem here is not diversity but rather a failure to reach consensus. One big challenge in building engineering teams is figuring out which things everyone has to agree on and which things everyone can do their own way. For example, if you’re a Python shop, everyone can use the text editor of their choosing, but everyone has to use spaces or tabs; mixing the two is not an option.
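Conventions like this stick best when they’re enforced by a tool rather than by reminders. As an illustrative sketch (not any particular linter), a team that has standardized on spaces could flag violations with a few lines of code:

```python
import sys

def find_tab_indented_lines(source):
    """Return 1-based line numbers whose indentation contains a tab.

    Assumes the team has agreed on spaces, so any tab in leading
    whitespace counts as a violation of the convention.
    """
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Leading whitespace is everything before the stripped content.
        indent = line[: len(line) - len(line.lstrip())]
        if "\t" in indent:
            violations.append(lineno)
    return violations

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for lineno in find_tab_indented_lines(f.read()):
                print(f"{path}:{lineno}: tab in indentation")
```

Wired into a commit hook or CI, a check like this turns the agreed-upon convention into the path of least resistance instead of a recurring argument.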
Similarly, test-driven development only works if everyone on the team practices it. If you’re not writing tests, it’s really easy to write code that is nearly impossible to test. And if there’s no continuous integration infrastructure, the people who aren’t writing tests will regularly check in code that breaks the existing tests. Teams have to reach some kind of consensus about issues like these, which have team-wide implications.
To me this is a separate issue from avoiding monoculture. Teams have to reach consensus in order to function well. I once read that in interviews, some hiring managers look for people who are willing to “disagree and commit” when they can’t reach consensus. Oddly, I find that these situations arise as often on teams with no demographic diversity (read: composed entirely of white guys) as they do on more diverse teams.