When I mentioned a few weeks ago that I seek out intrinsically motivated people to work with, my friend Jason commented with a link to Miya Tokumitsu’s essay In the Name of Love in Jacobin magazine. It has since been picked up by Slate. Here’s her take on “Do What You Love”:
By keeping us focused on ourselves and our individual happiness, DWYL distracts us from the working conditions of others while validating our own choices and relieving us from obligations to all who labor, whether or not they love it. It is the secret handshake of the privileged and a worldview that disguises its elitism as noble self-betterment. According to this way of thinking, labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient. Its real achievement is making workers believe their labor serves the self and not the marketplace.
I agree that DWYL is not a valid policy prescription. If we could magically sort everyone so that they could earn a living wage pursuing the activity that they love most, we would not have a functioning society, much less a functioning economy. So we have to accept the reality that for many people, perhaps the large majority of people, a job is just a job.
Given that, we need labor policy that does not assume that work is its own reward. Work week regulations, paid vacations, parental leave, and other policies are all intended to provide respite to workers from exploitative employers. Minimum wage laws put some floor on how much you must pay workers, regardless of how much they love their jobs.
For those of us who are managers, it’s important not to exploit the fact that some people love their jobs and will work all the time if you let them. It’s important for people to keep a healthy schedule and take their vacation time, even if they would prefer not to. Not only is overwork unsustainable, but it creates a dynamic in which people who would prefer to maintain something resembling a balanced life wind up working too much because they’re afraid to fall behind in a perceived competition with their less restrained coworkers.
An enthusiastic manager who works 70 hours a week sends a message to even the least enthused person that working lots of hours is expected. This oblivious manager may just assume that everyone on the team is happy to work those hours, given that they’re doing so without even being asked.
As a matter of government or corporate policy, we’ve got to take these factors into account.
However, if I am talking to an individual, rather than talking policy, my advice will always be to do what you love (or, do what you can’t not do) if you can get away with it. We spend at least 2,000 hours a year at work. If it’s up to me, I’ll spend that time doing work that I enjoy. I want other people to do work they enjoy as well. To the extent that I have influence over who I work with, I seek out people who also find joy and fulfillment in their day-to-day work.
It seems to me that a discussion of labor policy is ideally suited to analysis from behind John Rawls’ veil of ignorance. We should think about these matters from the perspective of someone who has no idea whether they will like their job or hate it, or whether it may be at a desk or in a coal mine or tomato field. At the same time, when you’re looking for a job, try to find one you’ll enjoy and find fulfilling, and be thankful if you manage to do so. It’s not that common.
How to report on information security
Today the New York Times has another Edward Snowden story, this one by David Sanger and Eric Schmitt. It discusses the means Snowden used to harvest millions of documents from the NSA’s internal network, and runs under the headline Snowden Used Low-Cost Tool to Best NSA. Good security reporting focuses on the conflicting goals that come into play when designing secure systems, rather than retreating into counterfactual thinking.
The story is full of the sort of counterfactual thinking I’m talking about: assertion after assertion about how Snowden’s activity could have or should have been detected.
When telling a story about security, or any system, three aspects come into play: the intended functionality, security (and safety in general), and cost. Only one sentence in the story even hints at these tradeoffs.
The NSA built a system for sharing information internally out of off-the-shelf Web technology (which almost certainly lowered costs substantially), and provided broad access to it for the same reason that any organization tries to improve communication through transparency. They wound up with a system that was no doubt difficult to secure from people like Edward Snowden.
While the crawler Snowden used may have been easy to detect, writing a crawler that is difficult to detect is not particularly challenging. A Web crawler is pretty straightforward: it downloads a Web page, extracts all of the links, and then follows those links and repeats the process, recursively finding every page reachable from the page where it starts. On the public Internet, crawlers identify themselves and follow the robots exclusion standard, a voluntary code of conduct for people who write programs to crawl the Web. There’s no reason it has to be that way, though. Browsers (and crawlers) identify themselves with a user agent, and when you request a Web page, you can use any user agent you want.
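To make that loop concrete, here’s a minimal sketch of a crawler in Python, using only the standard library. The starting URL is a placeholder, and the browser-style user agent string is just an example of how easily a crawler can present itself as ordinary browsing; none of this reflects the specifics of Snowden’s actual tool.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import Request, urlopen

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url):
    seen, queue = set(), [start_url]
    while queue:
        url = queue.pop()
        if url in seen:
            continue
        seen.add(url)
        # Nothing stops a crawler from presenting itself as a browser.
        request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
        page = urlopen(request).read().decode("utf-8", errors="replace")
        # Extract every link on the page and queue it for the same treatment.
        parser = LinkExtractor()
        parser.feed(page)
        queue.extend(urljoin(url, link) for link in parser.links)
    return seen

if __name__ == "__main__":
    crawl("http://intranet.example.com/")  # hypothetical starting point
```

That’s essentially the whole algorithm; everything beyond it is refinement.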
The point is that there’s nothing about any individual request from a crawler that would make it easy to detect. Then there’s the shape of the traffic as a whole: the recursive pattern of requests a crawler generates might look suspicious, but detecting that sort of pattern is a lot more difficult, and it can be obfuscated as well. If you have months to work, there are a lot of options for disguising your Web crawling activity.
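For illustration, here are a couple of the simplest disguises, written as drop-in tweaks to the sketch above. These are generic techniques, not a description of what Snowden actually did.

```python
import random
import time

def next_url(queue):
    # Pull from a random position in the queue so the requests don't
    # trace the link graph in an obviously recursive order.
    return queue.pop(random.randrange(len(queue)))

def pause_between_requests():
    # Irregular gaps between requests look less mechanical than a
    # fixed-rate fetch loop.
    time.sleep(random.uniform(0.5, 10))
```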
Finally, there’s the sheer volume of data Snowden downloaded: he requested millions of URLs from within the NSA’s network. There are ways to hide this too, especially if you can run the crawler from multiple systems, but if you’re going to download over a million pages, it’s difficult to disguise the fact that you have done so. Even then, detecting that activity requires some system that actually monitors traffic volumes.
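Even a crude monitor would surface a million-page download. Here’s a sketch of the kind of check the article never discusses: tally requests per client from an access log and flag the outliers. The log file name, the field layout, and the threshold are all assumptions made for illustration.

```python
from collections import Counter

THRESHOLD = 10_000  # requests per client per log period; arbitrary

def flag_heavy_clients(log_lines):
    requests_by_client = Counter()
    for line in log_lines:
        fields = line.split()
        if fields:
            # Assumes the client identifier is the first field, as in
            # common access log formats.
            requests_by_client[fields[0]] += 1
    return [c for c, n in requests_by_client.items() if n > THRESHOLD]

with open("access.log") as log:  # hypothetical log file
    for client in flag_heavy_clients(log):
        print("unusually high request volume:", client)
```

A real system would need to aggregate across servers and time windows, but the point stands: the data needed to catch this sort of activity is usually already sitting in the logs.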
Somehow Snowden also managed to take the information his crawler gathered out of the NSA. That seems like another interesting breakdown in the NSA’s security protocols.
The article has plenty of discussion of why Snowden should have been detected, but very little about why he wasn’t, and even less about how the desire to secure the system is at odds with the other goals for it. The thing is, the journalists involved didn’t need to rely on the NSA to give them any of this information. Anyone familiar with these sorts of systems could have walked them through the issues.
Any article about security (or safety) should focus on the conflicts that make building a secure system challenging. The only way to learn from these kinds of incidents is by understanding those conflicts. One thing I do agree with in the story is that Snowden’s approach wasn’t novel or innovative. That’s why the story of the tradeoffs inherent in the system is the only interesting story to tell.