rc3.org

Strong opinions, weakly held

Author: Rafe

Matt Haughey on RSS and news readers

Thoughts surrounding Google Reader's demise

Matt Haughey talks about RSS and news readers. I’m glad to say that I just renewed my annual subscription for NewsBlur, so as a news consumer, I’m not too affected by the impending demise of Google Reader. I do worry that the loss of Google Reader will reduce the readership of RSS feeds in general, and most of the readers of this blog still read it through RSS. I’d still write the blog if only ten people read it, but it’s nice to know people are reading it, or at least marking it read in their favorite reader.

The challenges of redesigning Wikipedia

A questioner on Quora asked whether Wikipedia has ever considered a redesign. In response, Wikipedia designer Brandon Harris has written the clearest explanation I’ve read of the challenges involved in changing the design of a large-scale Web site. If there’s one universal truth that I’ve absorbed more and more deeply with age, it’s that change never comes easy.

Here’s one bit:

How about languages? We support around 300 of them. That’s a scale problem that most people forget. I’ve seen several unsolicited redesigns that may look pretty but nearly all of them ignore (or worse, downplay) what is arguably the greatest feature of Wikipedia: that you can get it in your own language. If we want to change a text label (say, from “Password” to “Enter your password”) it will require hundreds of volunteers to check and localize that text. It’s a daunting task.

That’s just one of many complexities that he catalogs.
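The fan-out Harris describes follows from how message-key localization works. Here’s a minimal sketch of the pattern (my own toy example, not Wikipedia’s actual i18n system): every UI string is a key, and each supported language carries its own translation of that key.

```python
# A minimal sketch of message-key localization. Each UI string is a key;
# every supported language carries its own translation of that key.
MESSAGES = {
    "en": {"login-password-label": "Password"},
    "de": {"login-password-label": "Passwort"},
    "fr": {"login-password-label": "Mot de passe"},
    # ...roughly 300 languages in Wikipedia's case
}

def label(key, lang, fallback="en"):
    """Look up a UI string, falling back to English if untranslated."""
    return MESSAGES.get(lang, {}).get(key) or MESSAGES[fallback][key]

# Changing the English source text ("Password" -> "Enter your password")
# breaks nothing mechanically -- but every other language's entry now
# needs a volunteer to review and re-localize it.
```

Untranslated languages silently fall back to English, which is why a changed English label quietly degrades the experience in every language until volunteers catch up.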

The 100 year gun control project revisited

After the Newtown massacre, I wrote about a 100 year gun control project. Here’s a bit of what I wrote:

I would suggest that those people lengthen their time frame. What if we came up with a plan to fundamentally change America’s gun culture over the next 100 years? There are policies that we could start pursuing today that would move us in that direction, and taking those steps beats giving up in every way.

The New York Times reports today that falling household gun ownership is already a long-term trend:

The household gun ownership rate has fallen from an average of 50 percent in the 1970s to 49 percent in the 1980s, 43 percent in the 1990s and 35 percent in the 2000s, according to the survey data, analyzed by The New York Times.

In 2012, the share of American households with guns was 34 percent, according to survey results released on Thursday. Researchers said the difference compared with 2010, when the rate was 32 percent, was not statistically significant.

Gun control advocates would do well to pursue cultural change rather than legal change. Rather than banning handguns or changing laws around concealed carry, groups should be working to stigmatize both. The idea that people should be responsible for defending themselves with firearms in public places should rightly be considered a fringe view in a civilized, urban society.

The design of the index for Facebook’s Graph Search

Under the Hood: Building out the infrastructure for Graph Search

Really interesting post on the design of the search index used by Facebook’s Graph Search feature. How challenging was it to build? Facebook started the project in 2009 and migrated all their other search systems to it before building Graph Search. All three of the engineers listed as working on the project were at Google prior to working at Facebook.

How hard is it to build a lyrics site?

Song lyrics sites are universally terrible: the markup is bad, there are ads everywhere, and the usability generally sucks. Why? I understand that running such a site exposes you to legal risk, and I’ve always assumed that the outlaw foundation of such sites explains their awfulness. Even so, I’m wondering how difficult it would be to make a better attempt.

Any high-quality site in this vein has to generate some revenue, because it will attract a lot of traffic, and serving that traffic costs money. The secret is to spend as little as possible on infrastructure. The other day, the NPR News Apps Blog had a post about building a high-capacity, low-cost site. That seems like a good starting point.

The other requirement is a big catalog of songs, organized by artist and album, and then the lyrics for all of them. I think building such a database with absolutely minimal human intervention would be fun.
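As a sketch of what that catalog might look like, here’s a hypothetical minimal schema (my own, just kicking the idea around alongside the post): artists own albums, albums own songs, and lyrics hang off songs.

```python
import sqlite3

# A hypothetical minimal schema for the catalog described above:
# artists -> albums -> songs, with lyrics attached to songs.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT UNIQUE NOT NULL);
    CREATE TABLE album  (id INTEGER PRIMARY KEY,
                         artist_id INTEGER NOT NULL REFERENCES artist(id),
                         title TEXT NOT NULL);
    CREATE TABLE song   (id INTEGER PRIMARY KEY,
                         album_id INTEGER NOT NULL REFERENCES album(id),
                         title TEXT NOT NULL,
                         lyrics TEXT);
""")

# Ingestion with minimal human intervention: everything keys off
# names and titles, so a scraper or API client can populate it directly.
conn.execute("INSERT INTO artist (name) VALUES (?)", ("Example Artist",))
artist_id = conn.execute("SELECT id FROM artist WHERE name = ?",
                         ("Example Artist",)).fetchone()[0]
conn.execute("INSERT INTO album (artist_id, title) VALUES (?, ?)",
             (artist_id, "Example Album"))
album_id = conn.execute("SELECT id FROM album WHERE title = ?",
                        ("Example Album",)).fetchone()[0]
conn.execute("INSERT INTO song (album_id, title, lyrics) VALUES (?, ?, ?)",
             (album_id, "Example Song", "la la la"))

# The artist -> album -> song organization falls out of two joins.
row = conn.execute("""
    SELECT artist.name, album.title, song.title
    FROM song JOIN album  ON song.album_id  = album.id
              JOIN artist ON album.artist_id = artist.id
""").fetchone()
```

SQLite fits the spend-nothing-on-infrastructure constraint nicely: the whole catalog is a single file that can be rebuilt offline and shipped to cheap static-ish hosting.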

Right now I’m just kicking this idea around. If I start working on it, I’ll post about my progress.

How developers use API documentation

Chris Parnin writes about problems with API documentation, as evidenced by developer migration to Stack Overflow. The whole thing is incredibly interesting, and points to a need for a major reconsideration of what makes for good documentation.

Why I don’t talk about learning curves

I don’t want to pick on this person, so I won’t use their name, but I saw this in a blog post today:

The most common complaint people have when learning Haskell is the steep learning curve.

It’s a very typical example of a mistake I see all the time, which is that when people say something has a steep learning curve, they mean that it’s difficult to learn. It’s understandable why people would think that way — steep things are difficult to climb.

However, the X axis on the plot of a learning curve is the resources invested, and the Y axis represents the level of mastery attained. You can look it up. So a steep curve means that initial progress in learning is very rapid. The fuller definition of a steep learning curve is that initial progress is rapid but that the curve plateaus and progress becomes difficult.

Unfortunately, the rampant misuse of “steep learning curve” means that if I use it correctly, nobody will actually get what I’m talking about. If I use it incorrectly, then I’m part of the problem. The end result is that I’ve stopped discussing learning curves using that terminology at all. Nobody seems to mind.

Don’t get stuck

At Etsy, our engineering team is well known for practicing continuous deployment. For all of the talk in the industry about continuous deployment, I don’t think its impact on personal productivity is fully understood.

If you don’t work in a shop that does continuous deployment, you may assume that the core of it is that releases are not really planned. Code is pushed as soon as it’s finished, not according to some schedule, and that’s true, but there’s a much deeper truth at work. The real secret to the success of continuous deployment is that code is pushed before it’s done.

When people are practicing continuous deployment the Etsy way, they start out by adding a config flag to the code base behind which they can hide all of the code for their feature. As soon as the flag has been added, they add some conditional code to the application that creates a space that they can fill with the code for their new feature. At that point, they should be pushing code as frequently as is practical.
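The config-flag pattern described above can be sketched like this (the names are my own for illustration, not Etsy’s actual framework): the flag gates the conditional space where in-progress code lives, so unfinished work ships to production without ever running for users.

```python
# Sketch of the config-flag pattern: unfinished feature code deploys
# continuously but stays dark until the flag is flipped.
CONFIG = {
    "new_checkout_enabled": False,  # flipped to True when the feature is ready
}

def render_old_checkout(cart):
    return f"old checkout: {len(cart)} items"

def render_new_checkout(cart):
    # In-progress feature code lives here. It can be incomplete or even
    # broken; deploying it is safe because the flag keeps it unreachable.
    return f"new checkout: {len(cart)} items"

def render_checkout(cart):
    if CONFIG["new_checkout_enabled"]:
        return render_new_checkout(cart)
    return render_old_checkout(cart)  # what every user still sees
```

Because the gated code is merged and deployed from day one, it gets code review and runs through the automated test suite on every push, which is exactly the safety net described below.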

This is the core principle of continuous deployment. It doesn’t matter if the feature doesn’t work at all or there’s nothing really to show, you should be pushing code in small, digestible chunks. Ideally, you’ve written tests that are then part of the continuous integration suite, and you’re having people review that code before it goes out. So even though you don’t have a working feature, you’re confident the code you’re producing is robust because other people have looked at it, and it’s being tested every time anyone deploys or runs the suite of automated tests. You’re also reducing the chances of having to spend hours working through a painful merge scenario.

Many engineers are not prepared to work this way. There’s a strong urge to hold onto your code until you’ve made significant progress. On many teams, working on a feature for a week or two to build something real before you push it is completely normal. At Etsy, we see that as a risky thing to do. At the end of those two weeks you’re pushing a sizable chunk of code that has never been tested and has never run on the production servers out into the world. That chunk of code very well may be too big for another engineer to review carefully in a reasonable amount of time. It should have been broken up.

Pushing code frequently is the main factor that mitigates the risk of abandoning the traditional software release cycle. If you deploy continuously but the developers all treat the project like they’re developing in a more traditional fashion, it won’t work.

That’s the systems-based argument for pushing code at a rate that tends to make people uncomfortable, but what I want to talk about is how taking this approach improves personal productivity. I’m convinced that one thing that separates great developers from good developers is that great developers don’t allow themselves to get stuck. And if they do get stuck, they get stuck on design issues, and not on problem solving or debugging.

Thinking in terms of what code you can write that you can push immediately is one way to help keep from getting stuck. In fact, a mental exercise I use frequently when I’m blocked on solving a problem is to try to come up with the smallest thing I can do that represents progress. Focusing on deploying code frequently helps me stay in that mindset.

Banks really are evil

Here’s a selection of stories involving banks that appeared just this weekend in the New York Times:

Major Banks Aid in Payday Loans Banned by States

For the banks, it can be a lucrative partnership. At first blush, processing automatic withdrawals hardly seems like a source of profit. But many customers are already on shaky financial footing. The withdrawals often set off a cascade of fees from problems like overdrafts. Roughly 27 percent of payday loan borrowers say that the loans caused them to overdraw their accounts, according to a report released this month by the Pew Charitable Trusts. That fee income is coveted, given that financial regulations limiting fees on debit and credit cards have cost banks billions of dollars.

Ahead of Election in Cyprus, Gloom and Voter Apathy Tied to Financial Woes

What many Cypriots find most frustrating is that their crisis, like those in Ireland and Iceland before them, was concentrated in the banks. There is no sovereign debt crisis and, before the banking collapse, their economy was relatively healthy. Why, they wonder, should they suffer for the misdeeds of a few bankers? Why cover losses that should be borne, at least in part, by private investors?

Patron of Siena Stumbles

There is little question, though, that JPMorgan helped enable an acquisition that was regarded as foolish by many people and that severely weakened Monte dei Paschi. Later, Deutsche Bank of Germany and Nomura of Japan undertook transactions with previous management at Monte dei Paschi that helped it conceal losses of 730 million euros, raising further questions about the conduct of investment banks.

It’s like this pretty much every week.

What to do with data scientists

I’ve been thinking a lot lately about where data scientists should reside on an engineering team. You can often find them on the analytics team or on a dedicated data science team, but I think that the best place for them to be is working as closely with product teams as possible, especially if they have software engineering skills.

Data scientists are, to me, essentially engineers with additional tools in the toolbox. When engineers work on a problem, they come up with engineering solutions that non-engineers may not see. They think of ways to make work more efficient with software. Data scientists do the same thing as engineers, but with data and mathematics. For example, they may see an opportunity to use a classifier where a regular software engineer may not. Or they may see a way to apply graph theory to efficiently solve a problem.
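As a toy illustration of the kind of opportunity I mean (my own example, nothing from a real team): replacing a pile of hand-written if/else routing rules with a simple learned classifier. Here’s a nearest-centroid classifier over word counts in pure stdlib Python.

```python
from collections import Counter

# Toy nearest-centroid text classifier: the sort of thing a data scientist
# might reach for where an engineer would write keyword if/else rules.
TRAINING = {
    "billing": ["refund my charge", "invoice is wrong", "charged twice"],
    "shipping": ["package is late", "where is my order", "tracking number"],
}

def centroid(docs):
    """Aggregate word counts across all training docs for one label."""
    total = Counter()
    for doc in docs:
        total.update(doc.split())
    return total

CENTROIDS = {label: centroid(docs) for label, docs in TRAINING.items()}

def classify(text):
    """Pick the label whose centroid shares the most words with the text."""
    words = Counter(text.split())
    def overlap(c):
        return sum(min(words[w], c[w]) for w in words)
    return max(CENTROIDS, key=lambda label: overlap(CENTROIDS[label]))
```

The point isn’t this particular algorithm; it’s that someone has to recognize the routing problem as a classification problem in the first place, and that recognition is likelier when the data scientist sits with the team that owns the problem.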

This is what the Javier Tordable presentation on Mathematics at Google that I’ve linked to before is about. The problem with having a dedicated data science team is a lack of exposure to inspiring problems. The best way to enable people to use their specialized skills to solve problems is to allow them to suffer the pain of solving the problem. As they say, necessity is the mother of invention.

The risk, of course, is that if a data scientist is on one team, they may not have any exposure at all to problems that they could solve that are faced by other teams. In theory, putting data scientists on their own team and enabling them to consult where they’re most needed enables them to engage with problems where they are most needed, but in practice I think it often keeps them too far from the front lines to be maximally useful.

It makes sense to have data scientists meet up regularly so that they can talk about what they’re doing and share ideas, but I think that most of the time, they’re better off collaborating with members of a product team.


© 2024 rc3.org
