Danny Sullivan on Google and MetaFilter

Search engine analyst Danny Sullivan applies his expertise to analyzing the Google penalties that might have cost MetaFilter a huge chunk of its search traffic. It’s a really interesting look at how Google penalizes sites that try to game the system.

The section on how Google penalizes publishers for inbound “unnatural links” is really interesting:

It’s insane because it has allowed the same sites that charged people to get links to now charge for them to be removed. Or for the rise of an entire link removal industry. Or for publishers who have long suffered terrible link requests to now get messages from people asking for links to be removed.

The article has some really good advice for Google in how it manages its relationship with publishers. I also liked this bit:

Earn respect. That’s your best defense if things go south with Google. It’s also your best offense for doing well in Google.

As Sullivan points out, people cheered when Google lit Demand Media on fire and set them adrift.

Maybe you should be afraid of Google

One of the big stories in my small world over the past week or so has been the layoffs at MetaFilter. Matt Haughey broke the news in a MetaTalk post on the state of the site. Here’s the bottom line:

While MetaFilter approaches 15 years of being alive and kicking, the overall website saw steady growth for the first 13 of those years. A year and a half ago, we woke up one day to see a 40% decrease in revenue and traffic to Ask MetaFilter, likely the result of ongoing Google index updates. We scoured the web and took advice of reducing ads in the hopes traffic would improve but it never really did, staying steady for several months and then periodically decreasing by smaller amounts over time.

The long-story-short is that the site’s revenue peaked in 2012, back when we hired additional moderators and brought our total staff up to eight people. Revenue has dropped considerably over the past 18 months, down to levels we last saw in 2007, back when there were only three staffers.

Today, he posted more details on Medium, both about the drop in revenue and Google’s recent classification of MetaFilter as a content farm. This has been happening to other reputable blogs as well. I haven’t gotten any of these link-removal requests, or if I have, they have gone unread.

I don’t really think of Google as a monopolist, but it is true that Google holds the fate of any number of Internet businesses in its hands. This is true whether they rely on Google-served ads for revenue or on organic search traffic from Google to grow their visitor base. I oftentimes tell people that Google is to Internet businesses what weather is to farmers. You can have fertile soil, plant the right crops, and run your farm incredibly well, but if it doesn’t rain, you’re not going to have anything to harvest in the end. By the same token, if Google makes a change that directs traffic away from your site, you’ll find yourself in the same situation as MetaFilter.

That’s scary. I don’t really have any solutions to propose, but the degree to which the Web publishing industry has become almost wholly dependent on Google demands more attention.

Comparative risk

Walking on a high wire is inherently dangerous, but you’ve said that you prepare so much that death is not a risk.

No, it’s not. I think this is very wrong to put your life in danger or to gamble with your life. I cannot do the first step not being absolutely sure that I will successfully perform the last.

That’s from a Jessica Gross interview with Philippe Petit, the tightrope walker who was the subject of Man on Wire.

Relatively speaking, the risk assumed by Petit isn’t any greater than the risk you or I assume when crossing a busy street. The thrill is in the fact that for regular people, doing what Petit does would be impossibly risky. Preparation is everything.

Embrace the improved Web

But something else has happened over the past ten years; browsers got better. Their support for standards improved, and now there are evergreen browsers: automatically updating browsers, each version more capable and standards compliant than the last. With newer standards like HTML Imports, Object.observe, Promises, and HTML Templates I think it’s time to rethink the model of JS frameworks. There’s no need to invent yet another way to do something, just use HTML+CSS+JS.
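The standards Gregorio names are usable today without pulling in a framework. As a minimal sketch of one of them, plain Promises can sequence asynchronous work all on their own; the `delay` helper below is hypothetical, written here only for illustration.

```javascript
// A minimal sketch: standard Promises sequencing async steps,
// no framework required. delay() is a hypothetical helper,
// not part of any library.
function delay(ms, value) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(value); }, ms);
  });
}

// Each .then() runs after the previous step settles,
// whether that step returned a plain value or another Promise.
delay(10, 1)
  .then(function (n) { return n + 1; })
  .then(function (n) { return delay(10, n * 2); })
  .then(function (result) { console.log(result); }); // logs 4
```

The point of the sketch is that chaining, error propagation, and async composition — things frameworks once had to provide — now ship in the browser itself.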

Joe Gregorio argues that we should stop writing and adopting JavaScript frameworks and rely on the modern Web instead. This article is a great big picture view of the Web front-end as it exists today. If you do Web development, it’s a must-read.

Update: Read this excellent follow-up from Sam Ruby as well.

Paul Ford on the software canon

Here’s a question Paul Ford poses, and of course, attempts to answer in The Great Works of Software:

Is it possible to propose a software canon? To enumerate great works of software that are deeply influential—that changed the nature of the code that followed?

The list he comes up with is very solid. I feel like relational databases should also be represented, but it’s hard to pick one product. Should it be Oracle, the first commercial relational database, still going strong? Or maybe MySQL, which put relational databases in the hands of the masses?

It’s also a bit of a shame that no Web browsers make the list. Again, the problem is choosing one in particular. Browsers are the closest we’ve ever come to a universal client for online resources. We all know about viewing Web pages, but browsers also radically affected how businesses create software. Browsers ate the client-server paradigm, and then when the power of JavaScript increased, became a platform for writing client-server applications. I don’t know which one you pick — maybe you draw a line from Mosaic to Netscape to Mozilla to Firefox, but the browser changed everything.

It’s an interesting question to think about.

Why online games have sucky databases

Every team and company has blind spots. In some cases, these blind spots cut across an entire industry. One of the most common is Not Invented Here Syndrome, an inability to trust software that you didn’t write yourself. In the past, many businesses had a huge blind spot when it came to open source software, often going so far as to ban it as a matter of policy. These days, many newer companies refuse to purchase commercial software, preferring open source exclusively. These blind spots often lead to costly mistakes that are also prohibitively expensive to fix.

Most often, blind spots result from a failure of pattern recognition. People see their own problems as novel when in fact they are slight variations of problems that have been encountered and solved many, many times. What got me thinking about this was a post by database industry analyst Curt Monash, who writes about his frustration with the many bugs and flaws he has encountered in online games that seem to result from a failure to use off the shelf database technology effectively. If there’s one thing we know how to do in the computer industry, it’s store and retrieve transactional data at scale, and yet it appears as though many game companies have absolutely no clue how you might build such a system.

The game industry is famously a monoculture, even by the standards of the rest of the software industry. It is also remarkably insular – there’s not a lot of crossover between working on games and working on other kinds of software. I’m sure there are plenty of people in the game industry who understand what relational databases are and how they work, but I suspect that the industry suffers from a lack of people who’ve built large-scale database applications that work reliably. More importantly, my impression is that the culture of the game industry would make it difficult to even hire people with that kind of experience.

How does a company minimize blind spots? Obviously hiring for diversity (both with regard to demographics and experience) is a big deal, but the solution requires more than that. It’s also necessary to build organizations where people who point out blind spots are respected rather than ignored. In the example I’m thinking of, I’m talking about technology choices that lead to bugs and poor user experience, but the blind spot could just as easily be related to potential markets that go untapped, or management practices that lead to irreparable image problems and lawsuits.

The downsides of blind spots are pretty serious, and it’s not like my suggestions for preventing them are in any way novel or perhaps even interesting. So why don’t we do more to prevent them? To state the obvious, we’re rarely aware of our blind spots. More importantly, blind spots enable us to move faster thanks to the certainty they foster. They enable us to spend more time doing and less time thinking, and when you’re in a hurry, thinking often feels like a waste of time. Unfortunately, as we often see, they wind up being really expensive in the long run.

The big net neutrality sellout

Goodbye Net Neutrality; Hello Net Discrimination

Solid piece on the big net neutrality sellout by the FCC yesterday. Everyone is already writing about this, but I wanted to add my voice just in case decision makers are paying attention. Yes, we see what you are doing. Yes, we are mad about it. Yes, there will be consequences in terms of votes, volunteerism, and campaign contributions.