Tim Bray offers a good high level explanation of why hash-bang links are a horrible idea and fundamentally break the web. I like this question:
So, my question is: what is the next great Web app that nobody’s built yet that depends on the simple link-identifies-a-resource paradigm, but that now we won’t be seeing?
That probably explains the Twitter case, since Twitter is an application that rightly has many dynamic elements. But that doesn’t make sense for Gawker, a Web publisher in the business of publishing static blog posts on the Web. Why are they loading that content dynamically? My best guess there is that they hired a developer or manager who had done it that way somewhere else, probably for more sensible reasons. They came to Gawker and decided to just build things in the way that they already understood. That person should probably be fired.
February 10, 2011 at 6:06 pm
You get the picture.
And it seems all of that was done without load testing such a heavy site to see whether it would actually scale to an audience as large as Gawker’s.
I guess they took Bill O’Reilly’s advice too literally: “We’ll do it live! We’ll do it LIVE!” 🙂
February 11, 2011 at 5:28 am
Take a look at http://www.gizmodo.es
The Spanish version still has the old page layout; as you can see, the same layout is loaded again and again, even though page loading is speedy.
This is crazy, and now we have the technology to give up the absurd page paradigm on sites where most of the content stays the same and only some parts change.
That said, I’m not sure whether the job done on http://us.gizmodo.com/ (the new AJAX-intensive layout) is the best.
February 16, 2011 at 3:41 pm
“what is the next great Web app that nobody’s built yet that depends on the simple link-identifies-a-resource paradigm”
I dunno, because hash-bang links still identify a resource, albeit in a way that is a little weird, and are perfectly compatible with all the existing web apps like Facebook and Twitter and whatever.
They also have a defined fallback mechanism to static pages (maybe not always implemented but you get no search engine love if you don’t) that can be deterministically interpreted by a client that wants a non-hash URL.
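That fallback is deterministic enough to sketch in a few lines. Here is a minimal Python illustration of the mapping Google’s AJAX crawling scheme defines, where a `#!` URL is rewritten to its crawlable `?_escaped_fragment_=` form (the function name and the choice to leave `/` unescaped are my own; the scheme itself specifies percent-encoding of the fragment value):

```python
from urllib.parse import quote

def escaped_fragment_url(url):
    """Rewrite a hash-bang URL into the ?_escaped_fragment_= form
    that crawlers request under Google's AJAX crawling scheme."""
    base, sep, frag = url.partition("#!")
    if not sep:
        return url  # no hash-bang: nothing to translate
    # Percent-encode the fragment value (keeping '/' readable)
    value = quote(frag, safe="/")
    joiner = "&" if "?" in base else "?"
    return f"{base}{joiner}_escaped_fragment_={value}"

print(escaped_fragment_url("http://twitter.com/#!/timbray"))
# http://twitter.com/?_escaped_fragment_=/timbray
```

So a client (or crawler) that wants a non-hash URL can compute one mechanically, which is the point being made above: the mapping is defined, whether or not every site bothers to serve it.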
The idea that they “break the web” is hysterical nonsense.