Strong opinions, weakly held

More on hash-bang links

Tim Bray offers a good high-level explanation of why hash-bang links are a horrible idea and fundamentally break the web. I like this question:

So, my question is: what is the next great Web app that nobody’s built yet that depends on the simple link-identifies-a-resource paradigm, but that now we won’t be seeing?

I got a few responses to my question yesterday, which was, why are people doing this, and the one that I found most convincing is that it’s a resource issue. Let’s say you’re building a Web application and you want the ability to load items onto the page dynamically using AJAX. You have to pay engineers to build the JavaScript code that does so, and also pay someone to build the services on the servers that respond to the AJAX requests. Paying people to build the equivalent functionality that serves static pages costs even more money. So people who don’t really understand the Web cut costs in that fashion.
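To make concrete what that JavaScript layer actually does, here's a minimal sketch of hash-bang routing (hypothetical code, not Twitter's or Gawker's actual implementation; the `/ajax` endpoint prefix is an assumption for illustration). The client reads the fragment after `#!` and turns it into an AJAX request instead of letting the browser fetch a new page:

```javascript
// Minimal sketch of hash-bang routing (illustrative only).
// The fragment after "#!" names the resource; the client maps it to an
// AJAX endpoint rather than navigating to a new page.

// Extract the resource path from a hash-bang URL, or null if there is none.
function hashBangPath(url) {
  const i = url.indexOf('#!');
  return i === -1 ? null : url.slice(i + 2);
}

// Map the fragment to the server endpoint the AJAX layer would call.
// (The "/ajax" prefix is a made-up convention for this sketch.)
function ajaxEndpoint(url) {
  const path = hashBangPath(url);
  return path === null ? null : '/ajax' + path;
}

// 'http://example.com/#!/posts/42' → AJAX fetch of '/ajax/posts/42'
```

The key point for Bray's critique: the server never sees anything after the `#`, so the "real" URL identifies nothing by itself; only a JavaScript-running client can resolve it.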

That probably explains the Twitter case, since Twitter is an application that rightly has many dynamic elements. But that doesn’t make sense for Gawker, a Web publisher in the business of publishing static blog posts on the Web. Why are they loading that content dynamically? My best guess there is that they hired a developer or manager who had done it that way somewhere else, probably for more sensible reasons. They came to Gawker and decided to just build things in the way that they already understood. That person should probably be fired.


  1. I have a simpler explanation for Gawker’s decision to use JavaScript to load everything, and it’s something that hit me the first time I played with the new design, before it started to fail on me: “Dude, that looks so cool! We should totally fade in content and dynamically scroll the page to the top, and have the left and right columns scrollable, but the right column uses JavaScript mouse wheel events so it looks seamless, and … and … “

    You get the picture.

    And it seems all that was done without load testing such a heavy site to see whether it would actually scale to an audience as large as Gawker’s.

    I guess they took Bill O’Reilly’s advice too literally: “We’ll do it live! We’ll do it LIVE!” 🙂

  2. Take a look at http://www.gizmodo.es

    The Spanish version still uses the old page layout; as you can see, the same layout is loaded again and again and again, even though the pages load quickly.

    This is crazy, and now we have the technology to give up the absurd page paradigm on sites where most of the content stays the same and only some parts change.

    That said, I’m not sure whether the job done on http://us.gizmodo.com/ (the new AJAX-intensive layout) is the best approach.

  3. http://www.spoiledmilk.dk/blog/?p=1922

    “what is the next great Web app that nobody’s built yet that depends on the simple link-identifies-a-resource paradigm”

    I dunno, because hash-bang links still identify a resource, albeit in a way that is a little weird, and are perfectly compatible with all the existing web apps like Facebook and Twitter and whatever.

    They also have a defined fallback mechanism to static pages (maybe not always implemented, but you get no search-engine love if you don’t) that can be deterministically interpreted by a client that wants a non-hash URL.

    The idea that they “break the web” is hysterical nonsense.
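    The fallback mechanism this commenter refers to is Google’s AJAX crawling scheme: a crawler (or any client that wants a static page) rewrites the `#!` URL into an `_escaped_fragment_` query parameter and requests that URL directly. A minimal sketch of the rewrite (the scheme’s per-character escaping rules are omitted here for brevity):

```javascript
// Sketch of the hash-bang → crawler-URL mapping from Google's AJAX
// crawling scheme. Everything after "#!" is moved into an
// "_escaped_fragment_" query parameter, which the server can answer
// with a static page. (Character-escaping details omitted.)
function toEscapedFragmentUrl(url) {
  const i = url.indexOf('#!');
  if (i === -1) return url; // no hash-bang: nothing to rewrite
  const base = url.slice(0, i);
  const fragment = url.slice(i + 2);
  const sep = base.includes('?') ? '&' : '?';
  return base + sep + '_escaped_fragment_=' + fragment;
}

// 'http://twitter.com/#!/timoreilly'
//   → 'http://twitter.com/?_escaped_fragment_=/timoreilly'
```

    Because this mapping is deterministic, a hash-bang URL does still name a resource, which is the commenter’s point; the catch is that only clients that know the convention can resolve it.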

© 2024 rc3.org
