The New York Times is about to embark on its latest experiment in getting online readers to pay up, and it posted the details today. If you visit the Web site, you’ll be placed under the following constraint:
On NYTimes.com, you can view 20 articles each month at no charge (including slide shows, videos and other features). After 20 articles, we will ask you to become a digital subscriber, with full access to our site.
They will allow readers who arrive at the site from blogs (as well as search and social media) to read those articles even past the limit:
Readers who come to Times articles through links from search, blogs and social media like Facebook and Twitter will be able to read those articles, even if they have reached their monthly reading limit. For some search engines, users will have a daily limit of free links to Times articles.
I may or may not pay for the site, but I’m glad they’re taking steps to make sure the Times remains a site that bloggers can link to. I generally don’t subscribe to sites behind paywalls because even if I enjoy them, I can’t link to them from the blog.
It’ll be interesting to see whether the New York Times can thread the needle of earning subscription revenue without losing in the market for attention. Most other sites that have tried it have not done well.
Update: Felix Salmon has some thoughts on the New York Times’ pricing model. I noticed this oddity as well:
Beyond that, $15 per four-week period gives you access to the website and its smartphone app, while $20 gives you access to the website and its iPad app. But if you want to read the NYT on both your smartphone and your iPad, you’ll need to buy both digital subscriptions separately, and pay an eye-popping $35 every four weeks. That’s $455 a year.
The message being sent here is weird: that access to the website is worth nothing. Mathematically, if A+B=$15, A+C=$20, and A+B+C=$35, then A=$0.
Update: Cory Doctorow’s comments are worth reading as well.
Publishing a resilient blog
Brent Simmons wants to see people move back to Web logs that render posts to static files so that they don’t go down in flames every time they get an unexpected traffic spike. I just wanted to point out that it is possible to build a resilient Web site using WordPress, as I explained in my post How to Speed Up WordPress In an Emergency.
I’ve never put a lot of faith in content management systems that “bake” pages if those pages will be updated dynamically. Obviously the worst case for any site is cratering under load, but the second worst is not showing people the most up-to-date content that’s available. If you allow comments on your site or provide other dynamic features, caching can be tricky, and simply baking pages is nearly impossible.
Fortunately, there are other ways to go about things. I still prefer caching at the database level to caching at the page level, but there’s no reason that a site that’s really dealing with a lot of traffic can’t do both.
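To make the distinction concrete, here’s a minimal sketch of what caching below the page level looks like. This is not WordPress code; it’s a generic illustration in Python, and `fetch_posts` is a hypothetical stand-in for an expensive database query. The idea is that the expensive query result is reused for a short window, while the page itself (comments and all) is still rendered fresh on every request.

```python
import time

_cache = {}  # key -> (expires_at, value)

def cached(key, ttl, compute):
    """Return a cached value for key, recomputing it after ttl seconds."""
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]
    value = compute()
    _cache[key] = (now + ttl, value)
    return value

def fetch_posts():
    # Hypothetical stand-in for the real (slow) database query.
    return ["post-1", "post-2"]

def render_page():
    # Only the expensive query result is reused; the surrounding page,
    # including dynamic pieces like comments, is built on every request.
    posts = cached("recent-posts", ttl=60, compute=fetch_posts)
    return "\n".join(posts)
```

A site under heavy load can layer page-level caching on top of this, but even on its own, query-level caching avoids the staleness problem that fully baked pages have: dynamic content keeps working, and only the expensive parts are reused.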