The high cost of unpaid conference speakers

One of the more interesting debates that’s cropping up off and on around the Web and Twitter is whether it’s unfair for conferences to invite speakers without compensating them beyond free attendance at the event. Remy Sharp’s argument can be inferred from the title of his post, You’re paying to speak. This reminded me of a post by Andy Budd from last August, Paying Speakers is Better for Everybody. Budd’s post is a bit less inflammatory and comes at it from the perspective of what a conference organizer gains by paying speakers:

As an organiser I think paying speakers is actually a very good idea, whether they ask for it or not. This is because it changes the relationship from a voluntary one to a business one. When you’re not paying somebody you really can’t expect them to put a lot of effort into their talks, help you promote the event or respond to your emails quickly (a constant bugbear for organisers). However by paying speakers for services, you set up a different relationship and a level of expectation that makes your life easier and the quality of your event better. We’re not talking huge piles of cash in un-marked bills btw. Sometime a few hundred dollars or a voucher from Amazon is enough to make a speaker feel valued.

My friend Alex King disagrees (with Sharp, at least):

This sort of entitlement crap really irks me. No one is making you speak at a conference; it’s a choice.

Here’s what I think – it has nothing to do with fairness. Conferences often don’t pay speakers because they can attract attendees to their events with a slate of speakers who are willing to appear for free. It’s as simple as that. There are a lot of reasons to want to speak at a conference, not all of them driven by a clear financial motive. I like to give talks for the same reason that I like writing blog posts – I think it’s fun to participate in the marketplace of ideas, and standing up and sharing those ideas with an audience is a great experience. Speaking to a group makes me pretty nervous, but I do it anyway because the response is usually pretty great. I would also add that assuming you choose the right events, attending as a speaker is really fun. Most people at the event know who you are, and many of them want to talk with you about your talk, which is on a topic that interests you enough to write a talk about it.

Preparing for a talk is pretty fun, too. It’s an opportunity to really think deeply about a subject and figure out how to present it in a useful way. Yeah, it’s a lot of work, but the rewards are intrinsic. If you disagree with me and feel like that’s not enough of a reward to do it, that’s OK. Ask for a fee, it can’t hurt. I just think that treating this as a matter of dollars and cents misses a lot of what’s going on with the relationship between speakers and conferences.

I should add that I am really lucky. I work for a company (Etsy) that will pay for me to go to a conference and give a talk without any expectation that I will pitch anything to the audience. I already attend few conferences, and I’d attend even fewer if I were spending my own money on it, whether I was speaking or not.

This is the real point I want to make for conference organizers. When you don’t pay speakers, you limit your pool of possible speakers to those who are self-funded, or, more likely, are paid to attend by their employer. That immediately eliminates a lot of potentially interesting voices. It also, in all likelihood, reduces the diversity of your group of speakers. It also makes it more likely that a greater percentage of your talks will be thinly veiled product pitches or painfully obvious recruiting pitches rather than well-prepared talks on inherently interesting subjects.

In the end, I agree with Andy Budd. Paying speakers is better for everybody. At the same time, I think the best speakers are usually those who would give excellent presentations regardless of whether or not anybody is paying them.

You know what they say about assuming

Seventeen-year-old Sara Sakowitz writes in the Washington Post about the attrition that results from gender stereotypes for young women:

When I looked around the arena at my robotics competition, I counted only three other girls out of over a thousand high school students working on their teams’ robots. Glancing at the bleachers, I watched girls parading as mascots, girls cheering for their teams, and girls dancing in the stands. But I didn’t see girls on the competition floor. Maybe in the next few years that gender balance will change, and the timid girls in the bleachers will be replaced by fearless women who are undaunted by society’s confining expectations. Someday, my all-girls team will not be the exception to the unspoken rule, but until then, we have to keep breaking it.

In the Internet industry we see the same sort of thing when people assume that women they meet are designers, product managers, or front-end developers rather than “real engineers.” Getting rid of one’s preconceived notions is difficult, but we can start by keeping them to ourselves.

Redirecting a bit of Amazon’s money to charity

One way to do a tiny amount of good in the world (assuming you shop on Amazon.com) is to opt in to Amazon Smile. Once you have done so, Amazon will donate 0.5% of the money you spend on the site to the charity you select. I’m set up to donate to the Environmental Defense Fund currently. The catch is that for the donation to be applied, you have to access the site through “smile.amazon.com” rather than “www.amazon.com.” Amazon will remind you of that a few times, but then it goes away, and of course when you follow links to Amazon, they’re always to the main URL.

Fortunately, there are browser plugins that will make sure you’re always browsing through smile.amazon.com. For Firefox, there’s Amazon Smile Redirector. For Chrome, there’s Smile Always. It looks like a version of Smile Always for Safari is under development, but it’s not available in Apple’s extension gallery. If anyone knows of such an extension for Safari, post in the comments and I’ll add it.

Unfortunately, this seems to be a US-only program currently. Hopefully Amazon will expand it to other countries eventually.

The fraud ratchet

I want to write a bit about businesses that make their money through fraud, inspired by Jon Bell’s post The Graph That Changed Me. In it, he talks about RealNetworks, one of the first companies to provide streaming media infrastructure. They created proprietary streaming audio and video protocols. They offered a free version of their client, and tried to make money by selling licenses for premium versions of the client and their streaming server. More importantly, they were pioneers in bundling unwanted software with their client downloads in exchange for cash.

As Bell’s post points out, the money they made this way was a substantial part of Real’s business. While people at Real hated the shady business they were in, their jobs were also dependent on it. Bell’s manager showed him a graph with a big dip in the middle and then explained the implications:

“That’s what happens when we do the right thing”, he said while pointing at the drop, “and that’s how much money we lose. We tried it just to see how bad it was for our bottom line. And this is what the data tells us.”

The ratchet effect is one of my favorite metaphors, and it applies perfectly to companies that make fraud part of their business model. Bell’s manager went on to inadvertently explain how the ratchet effect prevented RealNetworks from abandoning their shady practices. What’s particularly depressing is that RealNetworks was in many ways an innovator and influencer, teaching the rest of the industry how to exploit people’s need to download software as a way to make money through fraud. This fraud-based business model is alive and well today.

Scott Hanselman wrote last week about Download.com’s “download wrapper,” a piece of malware that they attempt to foist on every unsuspecting user who uses the site for its intended purpose. Similarly, there’s the Dark Patterns site, which catalogs the practices Bell and Hanselman wrote about, along with many others. As much as the “app store” model of distributing software depresses me, it remains an infinitely superior alternative to “free” distribution funded through deceptive business practices.

The main thing I’d suggest is that if you work for (or run) a company that engages in these practices, it’s already too late. The ratchet effect all but ensures that once a company goes down this road, it is nearly impossible to reverse course. If this sort of thing bothers you (and it should), you might want to seek other work.

I’d also recommend not using software from any company that engages in these practices. If you’re aware of these practices, you can probably make your way through the minefield when you install the software, but you’re still being subsidized by the portion of the user base that is being defrauded. You can also assume that companies that engage in these practices will eventually sell out completely and just install malware on your computer without asking you.

We should be exposing and shaming companies that engage in these practices to the extent that we can stand to. Sites that review software should always take care to mention when the installers attempt to foist unwanted crap upon the user, and mark them down accordingly. This business model isn’t going away, but those of us who are familiar with it should not be enablers.

How to report on information security

Today the New York Times has another Edward Snowden story, this one by David Sanger and Eric Schmitt. It discusses the means he used to harvest millions of documents from the NSA’s internal network, and runs under the headline Snowden Used Low-Cost Tool to Best NSA. Good security reporting focuses on the conflicting goals that come into play when designing secure systems, rather than retreating into counterfactual thinking.

Here’s the sort of counterfactual thinking I’m talking about:

Mr. Snowden’s “insider attack,” by contrast, was hardly sophisticated and should have been easily detected, investigators found.

And:

Agency officials insist that if Mr. Snowden had been working from N.S.A. headquarters at Fort Meade, Md., which was equipped with monitors designed to detect when a huge volume of data was being accessed and downloaded, he almost certainly would have been caught.

And:

Officials say web crawlers are almost never used on the N.S.A.’s internal systems, making it all the more inexplicable that the one used by Mr. Snowden did not set off alarms as it copied intelligence and military documents stored in the N.S.A.’s systems and linked through the agency’s internal equivalent of Wikipedia.

When telling a story about security, or any system, there are three aspects that are involved – the intended functionality, security (and safety in general), and cost. Here’s the only sentence from the story that even hints at these tradeoffs:

But he was also aided by a culture within the N.S.A., officials say, that “compartmented” relatively little information.

The NSA built a system for sharing information internally out of off-the-shelf Web technology (which almost certainly lowered costs substantially), and provided broad access to it for the same reason that any organization tries to improve communication through transparency. They wound up with a system that was no doubt difficult to secure from people like Edward Snowden.

While the crawler Snowden used might possibly have been easy to detect, writing a crawler that is difficult to detect is not particularly challenging. A Web crawler is pretty straightforward. It downloads a Web page, extracts all of the links, and then follows the links and repeats the process. It recursively finds every Web page reachable from the page where it starts. On the real Internet, crawlers identify themselves and follow the robot exclusion standard, a voluntary code of conduct for people who write programs to crawl the Web. There’s no reason it has to be that way, though. Browsers (or crawlers) identify themselves with a user agent, and when you request a Web page, you can use any user agent you want.
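To make that concrete, here’s a minimal sketch in Python of the kind of recursive crawler described above. This is not Snowden’s tool, and the starting URL, user-agent string, and page limit are all placeholders; the point is just that a handful of lines of standard-library code can fetch a page, present whatever user agent it likes, extract the links, and follow them recursively.

```python
# A minimal sketch of a recursive crawler; all values are illustrative.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen


class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(url, seen=None, max_pages=100):
    """Recursively fetch every page reachable from `url` on the same host."""
    if seen is None:
        seen = set()
    if url in seen or len(seen) >= max_pages:
        return seen
    seen.add(url)

    # Nothing forces a crawler to identify itself honestly; the user agent
    # is just a header string chosen by the client.
    request = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urlopen(request, timeout=10) as response:
            html = response.read().decode("utf-8", errors="replace")
    except OSError:
        return seen

    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(url).netloc
    for link in parser.links:
        absolute = urljoin(url, link)
        if urlparse(absolute).netloc == host:
            crawl(absolute, seen, max_pages)
    return seen
```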

The point is that there’s nothing about any specific request from a crawler that would make it easy to detect. Secondarily, there’s the pattern of the traffic. The recursive nature of the requests from the crawler might also be suspicious, but detecting that sort of thing is a lot more difficult, and those patterns could be obfuscated as well. If you have months to work, there are a lot of options for disguising your Web crawling activity.

Finally, there’s the sheer volume of data Snowden downloaded. Snowden literally requested millions of URLs from within the NSA. Again, there are ways to hide this as well, especially if you can run the crawler from multiple systems, but if you’re going to download over a million pages, it’s difficult to disguise the fact that you have done so. Detecting such activity would still require some system to monitor traffic volumes.

Somehow Snowden also managed to take the information his crawler gathered out of the NSA. That seems like another interesting breakdown in the NSA’s security protocols.

The article has plenty of discussion of why Snowden should have been detected, but very little about why he wasn’t, and even less about how the desire to secure the system is at odds with the other goals for it. The thing is, the journalists involved didn’t need to rely on the NSA to give them any of this information. Anyone familiar with these sorts of systems could have walked them through the issues.

Any article about security (or safety) should focus on the conflicts that make building a secure system challenging. The only way to learn from these kinds of incidents is by understanding those conflicts. One thing I do agree with in the story is that Snowden’s approach wasn’t novel or innovative. That’s why the story of the tradeoffs inherent in the system is the only interesting story to tell.

Yes, you should do what you love

When I mentioned a few weeks ago that I seek out intrinsically motivated people to work with, my friend Jason commented with a link to Miya Tokumitsu’s essay In the Name of Love in Jacobin magazine. It has since been picked up by Slate. Here’s her take on “Do What You Love”:

By keeping us focused on ourselves and our individual happiness, DWYL distracts us from the working conditions of others while validating our own choices and relieving us from obligations to all who labor, whether or not they love it. It is the secret handshake of the privileged and a worldview that disguises its elitism as noble self-betterment. According to this way of thinking, labor is not something one does for compensation, but an act of self-love. If profit doesn’t happen to follow, it is because the worker’s passion and determination were insufficient. Its real achievement is making workers believe their labor serves the self and not the marketplace.

I agree that DWYL is not a valid policy prescription. If we could magically sort everyone so that they could earn a living wage pursuing the activity that they love most, we would not have a functioning society, much less a functioning economy. So we have to accept the reality that for many people, perhaps the large majority of people, a job is just a job.

Given that, we need labor policy that does not assume that work is its own reward. Work week regulations, paid vacations, parental leave, and other policies are all intended to provide respite to workers from exploitative employers. Minimum wage laws put some floor on how much you must pay workers, regardless of how much they love their jobs.

For those of us who are managers, it’s important not to exploit the fact that some people love their jobs and will work all the time if you let them. It’s important for people to keep a healthy schedule and take their vacation time, even if they would prefer not to. Not only is overwork unsustainable, but it creates a dynamic in which people who would prefer to maintain something resembling a balanced life wind up working too much because they’re afraid to fall behind in a perceived competition with their less restrained coworkers.

An enthusiastic manager who works 70 hours a week sends a message to even the least enthused person that working lots of hours is expected. This oblivious manager may just assume that everyone on the team is happy to work those hours, given that they’re doing so without even being asked.

As a matter of government or corporate policy, we’ve got to take these factors into account.

However, if I am talking to an individual, rather than talking policy, my advice will always be to do what you love (or, do what you can’t not do) if you can get away with it. We spend at least 2,000 hours a year at work. If it’s up to me, I’ll spend that time doing work that I enjoy. I want other people to do work they enjoy as well. To the extent that I have influence over who I work with, I seek out people who also find joy and fulfillment in their day to day work.

It seems to me that a discussion of labor policy is ideally suited to analysis from behind John Rawls’ veil of ignorance. We should think about these matters from the perspective of someone who has no idea whether they will like their job or hate it, or whether it may be at a desk or in a coal mine or tomato field. At the same time, when you’re looking for a job, try to find one you’ll enjoy and find fulfilling, and be thankful if you manage to do so. It’s not that common.

Describing my ideal dotfile installer

One of my goals is to perfect my dotfile setup. I suppose this means that I’m not quite all in as a manager yet. Every self-respecting developer has their dotfiles in a public repository on GitHub, but I have a couple of requirements beyond just being able to clone my dotfiles on any machine. They are:

  • I have some dotfile information that can go into a public repository, and some configuration and preferences that must live behind the firewall for use on work computers. I’d like the behind-the-firewall stuff to be a supplement to my public dotfile information. For example, my .vimrc should be the same everywhere. My .bashrc should be mostly the same, with a few additions of stuff for work. I may also have some dotfiles that are exclusive to my work environment. Any setup should understand how to clone files first from my public repo, and then apply the dotfiles from the private repository.
  • Homebrew can be bootstrapped using a single shell command. (Check it out under the Install Homebrew heading on the Homebrew page.) I’d like to have something similar for my dotfiles. Not only would it be cool, but it would also be useful at work. All of our servers at work are managed using Chef. It’s possible to check your dotfiles into our Chef repository and have them deployed automatically on all of our servers, so that when you log into a new server, it already has your preferred environment set up. I have two issues with just putting my dotfiles into Chef. The first is that I don’t want to clutter up the Chef repo with my junk. If everyone does that, our Chef runs will slow down. The second is that I don’t want to have to update them in two places whenever I make changes. If I have a bootstrap command that pulls all of my dotfiles from their regular location, I can just put that into Chef. (A rough sketch of what such a bootstrap script might look like is at the end of this post.)

I’m going to start hacking away on creating a dotfile installer that meets both of these requirements and blogging about my progress. My public dotfiles are in a GitHub repo. I’ll let you know how it goes.
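
As a starting point, here’s a rough sketch of what such a bootstrap script might look like. The repository URLs, the WORK_MACHINE environment check, and the symlink-everything approach are placeholder assumptions rather than a final design; a real installer would need to handle existing files, private-repo authentication, and the “mostly the same .bashrc” case more gracefully.

```python
#!/usr/bin/env python3
"""Rough sketch of a dotfile bootstrap: clone a public repo, then
overlay a private (behind-the-firewall) repo on work machines.

Repository URLs and layout are placeholders, not real locations.
"""
import os
import subprocess
from pathlib import Path

PUBLIC_REPO = "https://github.com/example/dotfiles.git"        # placeholder
PRIVATE_REPO = "git@git.internal.example.com:me/dotfiles.git"  # placeholder
WORK_DIR = Path.home() / ".dotfiles"


def clone_or_update(repo_url, target):
    """Clone the repo if it isn't present yet, otherwise pull the latest."""
    if (target / ".git").exists():
        subprocess.run(["git", "-C", str(target), "pull", "--ff-only"], check=True)
    else:
        subprocess.run(["git", "clone", repo_url, str(target)], check=True)


def link_dotfiles(source_dir):
    """Symlink every top-level dotfile in source_dir into $HOME.

    Files linked later win, so applying the private repo after the public
    one lets work-only files supplement or replace the public versions.
    (A real installer would merge .bashrc rather than replace it.)
    """
    for path in source_dir.iterdir():
        if not path.name.startswith(".") or path.name == ".git":
            continue
        dest = Path.home() / path.name
        if dest.is_symlink() or dest.exists():
            dest.unlink()
        dest.symlink_to(path)


def main():
    WORK_DIR.mkdir(parents=True, exist_ok=True)
    clone_or_update(PUBLIC_REPO, WORK_DIR / "public")
    link_dotfiles(WORK_DIR / "public")
    if os.environ.get("WORK_MACHINE"):  # only inside the firewall
        clone_or_update(PRIVATE_REPO, WORK_DIR / "private")
        link_dotfiles(WORK_DIR / "private")


if __name__ == "__main__":
    main()
```

The idea is that, like Homebrew, the whole thing could then be kicked off with a single command that fetches and runs this script, and that one command is all that would need to live in the Chef repo.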