rc3.org

Strong opinions, weakly held

Month: December 2013

Some thoughts on delegating

John D. Cook has a post on one of my favorite management topics, when to delegate.

My thoughts on delegation are probably fairly radical. The first principle is that if a project strikes me as interesting to work on, I must delegate it immediately. Managers almost never have the bandwidth to tackle interesting technical problems; it’s the nature of the beast. When I delegate projects, I can remain engaged to a degree that allows me to learn and hopefully add some value, without stalling progress because I’m too busy dealing with the other million distractions that are part of manager life. I tend to stick with working on the tasks that are too boring or repetitive to bother delegating to other people – they’re generally about the right size to get out of the way when I’m between other things.

The second principle is, delegate everything, especially if you’re in a new role. When you take on a new role, there is no way to estimate how much work is going to be coming your way, or to list what all the demands on your time will be. All of the stuff you were doing is either going to keep you from succeeding in your new role, or just isn’t going to get done, unless you can delegate it to someone else. Of course, in reality you’ll never be able to delegate everything, but proceeding as though you can will hopefully keep you mindful of the fact that you should always be looking for opportunities to delegate.

The point I’m really getting at is that most people are far too reluctant to delegate, and far too selfish when it comes to delegating. Usually when you’re in a position to delegate, you are also at least partially responsible for the professional well-being of the people you’re delegating things to. The things you delegate should hurt a little to give away.

Paul Graham is wrong about it being “too late”

Katie Siegel: It’s Not “Too Late” for Female Hackers

Paul Graham’s comments in an interview behind the paywall at The Information (and unflatteringly excerpted by ValleyWag) have been the subject of much discussion this week. Siegel’s post gets at what was really problematic about them. I’m one of those people who started programming at age 13, and it has always been easy for me to advocate hiring people like myself, for exactly the reasons Paul Graham gives. Siegel ably makes the point that this line of thinking is unfair to everybody who doesn’t fall into his select group, but I’d argue further that it’s also just bad business. More on that later.

Update: Paul Graham says he was misquoted. I’m glad Graham cleared things up. In his update, Graham says:

If you want to be really good at programming, you have to love it for itself.

That’s a sentiment I agree with completely. One reliable indicator that a person loves programming for itself is that they started as a kid. The problem arises when we treat that as the only reliable indicator, which Graham does, at least based on this passage from the transcript:

The problem with that is I think, at least with technology companies, the people who are really good technology founders have a genuine deep interest in technology. In fact, I’ve heard startups say that they did not like to hire people who had only started programming when they became CS majors in college. If someone was going to be really good at programming they would have found it on their own. Then if you go look at the bios of successful founders this is invariably the case, they were all hacking on computers at age 13.

Great reads of 2013: why you should A/B test on the Web

I enjoy the great privilege of working with Dan McKinley, whose blog is at mcfunley.com. He doesn’t post very often and his posts are reliably great. I want to call out one post specifically … Testing to Cull the Living Flower. Here’s one sentence:

When growth is an externality, controlled experiments are the only way to distinguish a good release from a bad one.

In it, he argues for why sites that are enjoying rapid growth must use A/B testing to build features if they want to be confident that they are making a measurable, positive impact on that growth rate.
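To make the idea concrete, here’s a minimal sketch (my own illustration, not from McKinley’s post) of the statistics behind a controlled experiment: a two-proportion z-test comparing conversion between a control group and a treatment group. The numbers are hypothetical.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control converts 500/10,000, treatment 590/10,000.
z = two_proportion_z(500, 10_000, 590, 10_000)
print(round(z, 2))  # ≈ 2.8; |z| > 1.96 is significant at the 5% level
```

Because both groups see the site at the same time, background growth affects them equally and cancels out, which is exactly why the experiment can isolate the release’s effect.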

Great reads of 2013: Jay Kreps on logs

For the past 18 months or so, I’ve been working on data engineering at Etsy. That involves a pretty wide array of technologies – instrumenting the site using JavaScript, batch processing logs in Hadoop, warehousing data in an analytics database, and building Web-based tools for analytics, among other things. One of the persistently tough problems is moving data between systems in an accurate and timely fashion, sometimes transforming it into a more useful form in the process.

Fortunately, Etsy is anything but the only company in the world with this problem. When I was preparing for 2014, I scheduled meetings with my counterparts at a number of other companies to learn what they were doing in terms of data and analytics. When I came back, I had a pretty good idea of what we needed to do next year to harmonize all of our disparate systems – set up infrastructure to allow them to communicate with one another using streams of events, otherwise known as logs.

Then, just about the time I was finished writing the 2014 plan, Jay Kreps published The Log: What every software engineer should know about real-time data’s unifying abstraction on the LinkedIn Engineering Blog. In it, he documents the philosophical underpinnings of pretty much everything I talked about in the plan.

The point of his post is that a log containing all of the changes to a data set from the beginning is a valid representation of that data set. This representation offers some benefits that other representations of the same data set do not, especially when it comes to replicating and transforming that dataset.

As it turns out, even though most engineers use logging, they are probably seriously underexploiting logs in the form that Kreps describes. How do we scale up on the Web? We move more operations from synchronous to asynchronous execution, and we diversify our data stores and the ways in which we use them. (Even at Etsy, where we are heavily committed to MySQL, we use it in a variety of ways, some of which resemble key/value stores.) Representing data as a log makes it easier to handle asynchronous operations and diverse data stores, providing a clear path to scaling upward.
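A tiny sketch of the core idea (the entry format and names are my own, not from Kreps’ post): if every change to a data set is appended to a log, any consumer can rebuild the current state – or a derived view of it – just by replaying the log in order.

```python
# Each log entry records one change to the data set.
log = [
    ("set", "user:1", {"name": "Ada"}),
    ("set", "user:2", {"name": "Grace"}),
    ("set", "user:1", {"name": "Ada Lovelace"}),
    ("delete", "user:2", None),
]

def replay(entries):
    """Materialize the current state by replaying every change in order."""
    state = {}
    for op, key, value in entries:
        if op == "set":
            state[key] = value
        elif op == "delete":
            state.pop(key, None)
    return state

print(replay(log))  # {'user:1': {'name': 'Ada Lovelace'}}
```

A second consumer replaying the same log from its own offset arrives at the same state independently, which is what makes the log such a natural foundation for replication and asynchronous processing.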

Kreps’ thoughts on logs are a must-read for any engineer working on the Web. I expect I’m not the only person who found that post to be perfectly timed.

Great reads of 2013: 1491

I’m going through some of my favorite long reads from 2013 that I didn’t get around to blogging at the time I read them.

My first pick is actually not from 2013 but from 2002; I just happened to read it this year. It was later expanded into a book, published in 2006, that I plan to read in 2014. The book and the article, written by Charles C. Mann, are both entitled 1491. Mann’s argument is revisionist history at its best, and I mean that as a compliment.

From both school and popular culture, I learned that the Americas were a sparsely populated pristine wilderness, largely unsullied by human impact. Mann sets out to demolish that narrative. His argument is that the evidence points to pre-Columbian America being far more densely populated than we’ve been taught, and the landscape having been heavily shaped by human habitation. The America encountered by later colonists had been depopulated thanks to pathogens carried by earlier European voyages, and was in transition due to the near-extinction of its keystone species: Native Americans.

1491 is probably the most interesting thing I read in 2013.

Alan Turing’s royal pardon

One piece of news this week is that Alan Turing received a royal pardon for his 1952 conviction for indecency. The best book I read in 2013 was Andrew Hodges’ Alan Turing: The Enigma. I keep intending to write more about the book, but on the occasion of Turing’s pardon, I’ll again encourage you to pick it up. Most articles about Turing’s life explain that he committed suicide after undergoing chemical castration after being convicted of homosexual acts, but Hodges’ conjecture about why Turing took his own life is far more nuanced and interesting. Here’s Hodges on the pardon:

Alan Turing suffered appalling treatment 60 years ago and there has been a very well intended and deeply felt campaign to remedy it in some way. Unfortunately, I cannot feel that such a ‘pardon’ embodies any good legal principle. If anything, it suggests that a sufficiently valuable individual should be above the law which applies to everyone else.

It’s far more important that in the 30 years since I brought the story to public attention, LGBT rights movements have succeeded with a complete change in the law – for all. So, for me, this symbolic action adds nothing.

A more substantial action would be the release of files on Turing’s secret work for GCHQ in the cold war. Loss of security clearance, state distrust and surveillance may have been crucial factors in the two years leading up to his death in 1954.

It’s a great book.

It’s OK to verify skills in an interview

Philip Walton: Interviewing as a Front-End Engineer in San Francisco

Walton reacts to the types of questions he got interviewing for front-end developer positions with Internet companies. He was asked lots of abstract questions and few questions about the practicalities of front-end development. I think that it’s a mistake to focus too much on a skills match when interviewing candidates, but I also think that you should verify the skills candidates claim to have in the interview process. In other words, a company that mostly uses Python shouldn’t be afraid to hire Ruby developers, but they should make sure that those Ruby developers really can program in Ruby. Front-end skills are more generalized, and it seems crazy not to interview front-end developers about those skills.

Why you may want to hate Bitcoin

Charlie Stross: Why I want Bitcoin to die in a fire

Bitcoin is one of those things I’ve found vaguely irritating since the beginning. Stross has a strong list of legitimate reasons to be irritated. Relatedly, this post by Jason Kuznicki explains why Bitcoin is likely a speculative bubble.

The costly bias against older workers

Harvard Business Review posted an interview with MIT professor Ofer Sharone about the limited job prospects for older workers, especially the long-term unemployed. I agree with almost everything Sharone says, but I want to put a bit of a different spin on it. In the interview, he addresses the excuses people use when they refuse to consider hiring older, long-term unemployed workers – skills mismatch, underemployment, and a general fear that they won’t stick around.

First, there’s the obvious point that I don’t see made often enough–if the job market were robust, none of these excuses would prevent businesses from hiring older workers, and the problem of long-term unemployment wouldn’t really exist because everybody who wants to work would have a job. Indeed, if economic growth were strong enough, people would be begging retirees to come back to work. This already happens in specific fields that enjoy full employment. I know I write this every time I write anything related to the economy, but it can’t be said often enough. The number one demand we should make as voters is that the federal government pursue policies that lead to greater economic growth.

Now I want to write a few paragraphs for people who are in the position to hire. I think it is inarguable that there’s a bias against hiring older workers in just about every field. I know the Web business well, and I know that it is lousy with ageism. From an economic perspective, this is painfully wrongheaded.

Recruiting great people is probably the most difficult task facing anyone whose job includes hiring. The competition for good people is intense, candidates are tough to evaluate, and the costs of getting it wrong or failing to fill key jobs can be really high. Plus, hiring isn’t a whole lot of fun. As a hiring manager, I’m always looking for an inefficiency that can be exploited. Bias against older workers is a truly spectacular inefficiency.

While you may disagree with the specifics of the 10,000 hour rule, at some reductive level it’s obviously true. Ten thousand hours is five years of full time work. If someone’s been in the work force 20 years, they’ve invested enough time to master four things, and one of those things is usually showing up to work and getting things done. Brilliance is great, but experience can be better, and there are plenty of people out there who are both brilliant and experienced. Age bias somehow makes it hard to see that some older workers are former young geniuses who’ve added a couple of decades of relevant experience since they started out.

The second advantage is that older workers have an actual track record of stuff they’ve delivered, and often that’s lots and lots of stuff. They’ve usually worked on all sorts of projects good and bad, with all kinds of managers good and bad. This track record makes it easier to evaluate them during the interview process. The tough part with older workers is that you have to be somewhat clever when you’re hiring, because you can’t afford the older workers who are obviously great. They’re pulling down huge paychecks at places like Google or Facebook running engineering teams or inventing the next generation of the tools you rely on.

The third advantage is that older workers can often be a useful stabilizing influence and source of perspective in an organization. Having people around who saw the highs and lows of the dot-com boom is not a bad thing. If you’re in finance, having people around who remember both the 2008 financial crisis and the savings and loan crisis of the ’80s is probably a good idea. After reading The Soul of a New Machine, I’d love to work with anyone from that team. They know things about working on a high pressure skunkworks project that anybody could learn from. Things aren’t that different than they used to be, but if you don’t work with anyone who’s been around awhile, you’d never know that.

Here’s the catch. There are a fair number of people who discover over time that their profession isn’t really something they’re passionate about, even if they started out that way. Maybe they don’t love the work as much as they thought they would at one time, or maybe that love was beaten out of them by a succession of crappy managers and sucky jobs. Keeping the fires burning brightly over the long term is a skill unto itself. When you’re interviewing older workers, you’re interviewing people who may have already figured out that for them, the job is just a job, but need to keep bringing a paycheck home every week. Regardless of how they get there, people who aren’t passionate about either their work or the mission can bring the whole team down.

Passion is the number one thing I look for in an interview. A fair number of people are obviously brimming with sincere passion for the company, or some aspect of the job, or something that will propel them to show up every day with the kind of energy that makes work fun. For those that aren’t, the best approach is to find a topic that makes their eyes light up, whether it’s programming, or stamp collecting, or Seinfeld, and then compare that level of passion to the passion they show when they talk about the work they’ll be doing.

Right now, older people are an undervalued asset. Smart business people will figure out how to take advantage of that inefficiency. I’m not just saying that because I turned into an older person when I wasn’t paying attention.

That’s not analytics

OK, so a guy made a Web site called Tab Closed Didn’t Read to post screenshots of sites that hide their content behind various kinds of overlays that demand that users take some action before proceeding. He’s written a followup blaming the problem on over-reliance on analytics, mainly because some people justify the use of these intrusive calls to action by citing analytics. Anyone who justifies this sort of thing based on analytics should be sued for malpractice.

You can measure almost anything you like. It’s up to the practitioner to determine which metrics really matter to them. In e-commerce the formula is relatively simple. If you’re Amazon.com, you want people to buy more stuff. Cart adds are good, but not as good as purchases. Adding items to your wish list is good, but not as good as putting them in your cart. If Amazon.com added an overlay to the home page urging people to sign up for a newsletter, it may add newsletter subscribers, but it’s quite likely that it would lead to less buying of stuff, less adding of stuff to the shopping cart, and less adding of items to wish lists. That’s why you don’t see annoying overlays on Amazon.com.
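A sketch of the point (the event names, weights, and counts are illustrative, not Amazon’s): if you score a change on only the metric the overlay was built to move, it can look like a win while eroding the metrics that actually matter.

```python
# Funnel events in rough order of value to the business.
WEIGHTS = {"newsletter_signup": 1, "wishlist_add": 2, "cart_add": 5, "purchase": 20}

def score(events):
    """Weighted value of a variant's observed events."""
    return sum(WEIGHTS[name] * count for name, count in events.items())

control = {"newsletter_signup": 10, "wishlist_add": 80, "cart_add": 60, "purchase": 30}
overlay = {"newsletter_signup": 200, "wishlist_add": 60, "cart_add": 45, "purchase": 22}

# Judged on signups alone the overlay "wins" 200 to 10,
# but scored on the full funnel it loses.
print(score(control), score(overlay))  # 1070 985
```

The numbers are made up, but the shape of the mistake is real: the overlay twentyfolds the metric it was built for while depressing every step of the funnel that leads to revenue.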

Perhaps in publishing, companies are less clear on which metrics really matter, so they optimize for the wrong things. Let’s not blame analytics for that.

© 2024 rc3.org
