Are movie theaters misdiagnosing their problems?

Last week, Roger Ebert wrote a piece about the effect improperly operated projectors have on the movie viewing experience. Most of the time when you go to the movies, the bulb in the projector is insufficiently bright, usually because movie theaters are too lazy or incompetent to configure their digital 3D projectors to project 2D movies properly. (3D projection is inherently dimmer than 2D; it’s a drawback of the format.) He also points out that improper projection predates digital movies; it’s a long-term problem in the movie industry.

The decline of movie theaters is blamed on many things, usually the rise of home entertainment. It’s funny, though, that this is an area where the market has a perverse effect. Movie theaters compete on the comfort of their seating, but not on the quality of their projection. You never see an ad that says, “Brightest projector bulb in the city.” So people go to the movies, have a subpar experience because the picture is difficult to see, and then choose to watch something on Netflix Instant or get a DVD from Redbox instead. Movie theaters don’t have a feedback loop to tell them that the real problem might be the bad job they’re doing operating the projectors.

His piece reminded me of an essay from Slate last year, on the vanishing of professional projectionists. As is often the case, removing skilled professionals from the equation seems economical, but there are also costs, and those costs are probably being lost in the noise.

Links for May 26

  • How DRM may have made it more difficult for the Amazon.com MP3 store to fulfill orders for Lady Gaga’s album when they put it on sale.
  • The Affordable Care Act is increasing the number of people with health insurance.
  • “In matters of cooking, authenticity is a joke.” This statement is important and true.
  • Édouard de Pomiane’s tomates à la crème. Incredibly tasty and easy to make.
  • An explanation of Pivotal Tracker’s client-side architecture. These days, all of the advanced Web applications are client-server apps written using JavaScript and HTML instead of Visual Basic or PowerBuilder.
  • Tim Bray thinks about what may happen to our stuff when we’re gone.
  • The Cassiopeia Project is a library of free science instruction videos. Funded by a retired scientist who wants to improve the quality of science education.
  • Journalists appear to be noticing that Republican politicians and pundits are not engaging with reality. Hopefully it’s the start of a trend.
  • The New York Times explains the lengths to which hotels must go to protect their staff from guests. Depressing.
  • Researchers find that cultured people feel less stress. Perhaps you’d enjoy a trip to a museum this weekend.
On the misuse of Occam’s Razor

This weekend I read an interesting post about Occam’s Razor but decided not to blog about it, until I came across someone misusing Occam’s Razor and couldn’t suppress the need to clear my throat. In a post about digital camera pricing, I read the following sentence:

Occam’s Razor tells us that the simplest answer is the most likely one.

This is not correct. Here’s how Wikipedia defines Occam’s razor:

Occam’s razor, often expressed in Latin as the lex parsimoniae, translating to law of parsimony, law of economy or law of succinctness, is a principle that generally recommends selecting the competing hypothesis that makes the fewest new assumptions, when the hypotheses are equal in other respects.

The important part of the sentence is not about simplicity, but about selecting a hypothesis. When you’re trying to figure out which explanation for something you’ve observed is correct, the best approach is to start by testing the hypothesis you can eliminate most quickly. It’s not that the simplest answer is probably correct, but rather that it makes sense to start with the possibilities that are easiest to eliminate.

This is particularly relevant to scientific experimentation, but it applies to problem solving in general. When you find a bug in your application, Occam’s razor would suggest that it makes sense to start by examining the code you just wrote yourself rather than checking Google to see whether there’s a bug in MySQL that has suddenly manifested itself and caused the incorrect behavior.
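
That heuristic can be sketched as a toy bit of Ruby: rank candidate explanations by how cheaply each one can be eliminated, then check them in that order. The hypothesis names and cost figures below are invented purely for illustration.

```ruby
# Toy sketch: order candidate explanations by the cost of eliminating them.
# Names and costs are made up for illustration, not taken from any real project.
hypotheses = [
  { name: "latent bug in MySQL itself",        cost: 100 },
  { name: "typo in the code I just wrote",     cost: 1 },
  { name: "stale configuration on the server", cost: 10 },
]

# Check the cheapest-to-eliminate explanations first.
check_order = hypotheses.sort_by { |h| h[:cost] }.map { |h| h[:name] }
```

Sorting by elimination cost puts your own fresh code at the top of the list, which is exactly where the razor says to start.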

Blaming your own code requires only one new assumption: that you made a mistake in code that has not yet been tested. Blaming MySQL requires you to assume that there is a bug in MySQL that has not been fixed, or perhaps even detected, and that your code (which is theoretically correct) somehow causes this bug to manifest itself. Looking at your own code first may seem blindingly obvious to you, but to many developers it is not.

The point here is that Occam’s razor is a tool for problem solving (or experiment design), not a shortcut that lets us skip problem solving entirely. In many cases, the simplest explanation is not correct, but it’s hard to be sure until you’ve eliminated it as a possibility.

Will North Carolina ban community broadband?

Larry Lessig begs North Carolina’s Democratic governor to veto a bill that would ban community broadband networks:

On your desk is a bill passed by the overwhelmingly Republican North Carolina legislature to ban local communities from building or supporting community broadband networks. (H.129). By midnight tonight, you must decide whether to veto that bill, and force the legislature to take a second look.

North Carolina is an overwhelmingly rural state. Relative to the communities it competes with around the globe, it has among the slowest and most expensive Internet service. No economy will thrive in the 21st century without fast, cheap broadband, linking citizens, and enabling businesses to compete. And thus many communities throughout your state have contracted with private businesses to build their own community broadband networks.

This bill is a terrible idea. Banning local communities from providing services that voters are willing to pay for on behalf of corporate political contributors is fundamentally undemocratic.

I’m going to skip the diatribe about what this kind of legislation says about Republican priorities. I will add that Governor Perdue is almost certainly not going to be reelected next year, so she may as well do the right thing.

Update: Governor Perdue will allow the bill to become law without her signature, taking the most gutless possible course.

Links for May 20

Finding the value in acquired companies

Horace Dediu on acquiring companies:

Clayton Christensen succinctly defined the value in any company as the sum of three constituent parts: resources, processes or business models. Market value can be nothing more and nothing less than these three things.

An acquisition has to be positioned on one of these targets just like a product is positioned on a specific market. The problem with being deliberate about where the value lies is that once positioned a certain way, the integration team will begin to execute on that plan. This means that the thing you decided was worth most (e.g. resources) gets all the attention and the other potential sources of value (processes or profit models) are discarded.

This argument reduces to there being three separate companies being available. The buyer pays for all but gets to keep only one.

When you look at it this way you realize that the reason most acquisitions fail is because the buyer throws away most of the real value in the company.

He goes on to analyze Microsoft’s acquisition of Skype by these criteria. I’d like to see older acquisitions analyzed in this way as well. My gut feeling is that nearly all acquisitions focus on resources, since they’re the most obvious repository of value and the easiest (relatively speaking) to integrate into the acquiring company. It’s very hard to merge the processes of two different companies, or for an acquiring company to support new business models that didn’t develop organically.

Jumping back into Rails with both feet

Since February, I’ve been working on a pro bono project that launched last week. What’s in it for them: a brand-new application that replaces an unwieldy tangle of paper- and email-based processes. Before, they had a Web page protected by basic authentication that led to a set of form-to-mail scripts and downloadable Word documents. Now they have an application with real user accounts that can be carried over from year to year. There’s still a lot of work to do on the administrative side of the application, but I’m quite proud of the results so far.

What’s in it for me: I got to revisit Ruby on Rails after a couple of years away, working primarily with Java and PHP. I also used Git, GitHub, and Pivotal Tracker, and switched from TextMate to Vim for this project. The project has been very entertaining for me, aside from some stress as the release date moved closer to the early bird deadlines on some of the forms we were replacing.

I found it really easy to jump back into Rails, even though a lot has changed since the last time I used it. Most of the utility scripts that used to be run individually are now bundled under a single rails command, and Rails is geared much more strongly toward resource-oriented controllers rather than letting developers structure controllers however they like. Beyond that, things are basically the same. I also found that most of what’s different is, in fact, improved. Rails 3 handles dependencies far, far more elegantly than its predecessors did, which makes deployment much better. It’s easier than it has ever been to set up deployment using Capistrano. The testing framework is more flexible and powerful.
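
The dependency improvement is largely Bundler, which Rails 3 adopted: every gem the application needs is declared in a single Gemfile, and the same set gets installed in development and on the server. A representative Gemfile might look like this (the gem list and versions here are illustrative, not this project’s actual manifest):

```ruby
# Gemfile — Bundler manifest for a Rails 3 app (contents illustrative)
source 'http://rubygems.org'

gem 'rails', '3.0.7'
gem 'devise'      # user registration and authentication
gem 'cancan'      # role-based authorization

group :development, :test do
  gem 'capistrano'  # deployment
end
```

Running bundle install resolves and installs everything, which is what makes getting a fresh checkout running on a new server so much less painful than it used to be.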

For some time I wondered whether Rails was going to be an evolutionary dead end in terms of Web applications platforms, but I’m very impressed at where it is as a platform right now.

I used two plugins that were a huge help in terms of getting the application built. The first is the Devise plugin, which provides user registration and authentication functionality. I’ve been very impressed with its capabilities and ease of use. The second is CanCan, which provides a role-based authorization system that integrates nicely with Devise. The two of them saved me weeks of work.
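
The core idea behind CanCan is to centralize authorization rules in one place and then query them with a single can? check. Here’s a plain-Ruby sketch of that pattern; it illustrates the concept only, and the class below is not CanCan’s actual API:

```ruby
# A minimal sketch of centralized, role-based authorization in the spirit
# of CanCan. Illustrative only -- this is not CanCan's real API.
class Ability
  def initialize(role)
    @rules = []
    case role
    when :admin
      allow(:manage, :all)   # admins can do anything
    else
      allow(:read, :all)     # everyone else is read-only
    end
  end

  # Record that `action` is permitted on `subject`.
  def allow(action, subject)
    @rules << [action, subject]
  end

  # True if any recorded rule covers this action on this subject.
  def can?(action, subject)
    @rules.any? do |allowed_action, allowed_subject|
      (allowed_action == :manage || allowed_action == action) &&
        (allowed_subject == :all || allowed_subject == subject)
    end
  end
end

admin  = Ability.new(:admin)
member = Ability.new(:member)
```

In the real plugin, the rules live in a single Ability class and controllers consult them through helpers like can? and authorize!, with Devise supplying the current user.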

The one big mistake I made was not taking a test-first approach to development. I had intended to, but I was in a rush and I ran into an inflector bug that caused me some grief. That bug prevented my tests from working properly, and rather than tracking down the issue, I did a bunch of development without writing accompanying tests. Now I’m backfilling the tests, which is never fun.

The organization I’m building this application for uses DreamHost for hosting, and I assumed I’d be able to use their existing hosting account to deploy this application. Unfortunately, while DreamHost does support Rails, they do not yet support Rails 3. I wound up having to deploy the application on my own slice running at Linode. I considered sites like Heroku, but they were just too expensive. I had thought we’d be closer to turnkey hosting for Rails by now, but that still appears not to be the case. On the other hand, getting the Rails application up and running from scratch on my own Linux server was simpler than it has ever been.

Rails is no longer the hot young thing that all of the developers are swooning over, but I’m finding it to be more excellent than ever. I still can’t imagine building an application in PHP if Ruby on Rails is also an option. The other takeaway is that developers need to be on GitHub, period. I’m using a private repository to host this project and it’s working beautifully.

Hacking Tyler, Texas

Here’s a project that looks interesting: journalist/programmer Christopher Groskopf is moving from Chicago to Tyler, Texas, for reasons mostly not of his choosing. Once there, he’s going to work to adapt the town into what he wants it to be. Here’s a taste:

Tyler has information that could be freed. Tyler has government that could be opened. Tyler has news that could be hacked. Moreover, Tyler has an almost completely unexploited market. There are no hackers there. The small number of high-tech businesses that exist in the region are either web development shops serving local businesses or robotics companies.

Should be interesting.

Against arbitrary measures of worth

Chris Dixon explains how Tom Pinckney got into MIT without a high school diploma:

Tom grew up in rural South Carolina and mostly stayed at home writing video games on his Apple II. There was no place nearby to go to high school. He took a few community college classes but none of those places could give him a high school degree. It didn’t really matter – all he wanted to do was program computers. So when it came time to apply to college, Tom just printed out a pile of code he wrote and sent it to colleges.

It’s worth noting that the incentives for college administrators are completely misaligned with this sort of flexibility in admissions standards.

In the larger sense, this is a reminder that when it comes to evaluating things, room should be made for the exercise of human judgement, especially when the relationship between measurable factors and the final results cannot be easily quantified.

The latest on torture and the hunt for Osama bin Laden

Nobody rounds up the news like Dan Froomkin, and his latest piece is on the reaction among interrogators and intelligence professionals on whether torture helped us track down Osama bin Laden. Here’s the summary:

Defenders of the Bush administration’s interrogation policies have claimed vindication from reports that bin Laden was tracked down in small part due to information received from brutalized detainees some six to eight years ago.

But that sequence of events — even if true — doesn’t demonstrate the effectiveness of torture, these experts say. Rather, it indicates bin Laden could have been caught much earlier had those detainees been interrogated properly.

The truth is that the US captured a number of people who knew the name of the courier who ultimately led us to bin Laden, and none of them ever gave up the name, even under torture.

It’s sad but unsurprising that having invested their legacies in the promotion and defense of interrogation techniques that the US has, in the past, treated as war crimes, the defenders of torture are absolutely compelled to make completely unjustifiable claims about its efficacy. And, as Andrew Sullivan pointed out yesterday, the eagerness of torture apologists to justify its use in response to Osama bin Laden being found shows that they were always lying about reserving it for “ticking time bomb” scenarios.