Will Ruby on Rails succeed J2EE?

OnJava.com interviews a number of developers to ask them what they think about the idea that Ruby will, at some point, replace Java as the enterprise language of choice. They didn’t ask me, but I’ll do the interview as if they did. (I should do this for more interviews.) They asked all of the people different questions, so I’ll just use whichever questions I like.

1. Why would you choose Ruby on Rails over J2EE for your next Web project?

Initially I was quite sceptical about Ruby on Rails, mainly because of the inordinate amount of hype I was seeing. It wasn’t until I started reading the Rails book that I understood what it offered over the environments with which I was already familiar. The main reason I’d choose Ruby on Rails is that it enables you to get more done with fewer lines of code than J2EE or PHP, and with a lot more structure than with PHP, ColdFusion, or Perl. When you write a Ruby on Rails application, you feel like you’re writing clean code, the way you do with Java, only you’re getting a lot more done with less work.

2. Where’s the innovation in Ruby on Rails?

When I started writing applications in Ruby on Rails, I really enjoyed it, but the question I really wanted to answer was, “What’s innovative about this environment?” The components of Rails are an object-relational mapping framework, an MVC framework, and a templating system not unlike JSP or PHP. In the Java world, we already had a number of excellent MVC frameworks (I’ve used Struts and Spring MVC personally), and a number of well-regarded ORM frameworks (Hibernate is my favorite). There are plenty of systems for embedding code in Web pages out there. The Rails implementations are nice, incremental improvements over what’s come before, but the real innovation results from the “convention over configuration” mantra that is the starting point of Rails design.

Basically, the writers of Rails created a set of rules for properly written applications, and if you follow those rules, you don’t have to tell Rails exactly what you’re doing. The rules mainly relate to naming. If you name things what Rails expects you to, whether they’re database tables and columns, form fields, or view components, you can avoid configuration. I find myself agreeing with all of the Rails rules, so they’re not a hindrance, and following them saves a ton of typing.
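The naming idea can be sketched in a few lines of plain Ruby. This is a toy illustration, not actual Rails code (Rails uses a much richer Inflector): the framework derives the database table name mechanically from the model class name, so there’s no mapping to configure.

```ruby
# Toy sketch of convention over configuration: derive a table name
# from a class name instead of requiring a configuration entry.
# (Hypothetical helper; real Rails handles irregular plurals and more.)
def tableize(class_name)
  # "LineItem" -> "line_item" -> "line_items"
  snake = class_name.gsub(/([a-z])([A-Z])/, '\1_\2').downcase
  snake.end_with?("s") ? snake : snake + "s"
end

puts tableize("Order")     # => "orders"
puts tableize("LineItem")  # => "line_items"
```

Because the mapping is mechanical, a model named `LineItem` simply finds its data in `line_items` with zero configuration; you only spell anything out when you break the convention.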

Anybody could have done something similar, but Rails was the first framework I’ve seen to take this “convention over configuration” approach, and as much as it’s hyped, it’s even cooler to see how it works in practice.

3. Is there a class of applications for which you’d still use Java rather than Ruby on Rails?

The answer to this question is obvious: of course there is. For one thing, there’s a class of applications where only EJB will do. Most likely it wouldn’t be a project that I’d work on, because I don’t have experience with applications for which only EJBs will suffice, but they are out there. By the same token, one project I’ve worked on had a Web interface, a Web services interface, a batch interface, and lots of deep business logic. It also required fine-grained transaction handling that I took care of with Spring. I don’t think I would have attempted that application in Ruby on Rails.

On the other hand, I’m working on a publishing application that I’m writing in Rails. Building it in Java would have just slowed me down.

4. What’s the biggest differentiator between Ruby on Rails and J2EE right now?

The area where Ruby on Rails and J2EE really can’t be compared is the learning curve. J2EE lends itself well to writing solid, maintainable Web applications. It also lends itself well to stuffing code into JSPs and churning out unmaintainable Model 1 Web applications. The developer makes the difference. It took me a couple of years of Java development before I really got into MVC frameworks, and longer than that before I started using ORM. Part of that was that MVC and ORM were relatively new back then, and now they’re components of most well-written Web applications. That said, if I wanted to take a new developer and get them writing J2EE applications the way I would, they’d need to pick up Java, JSP, Spring (or some other MVC framework), Hibernate, XDoclet, Ant, Eclipse, log4j, and probably several other things that I’m forgetting. And they have to learn how to make all that stuff play well together.

In fact, on my current project, there was already a developer here who was going to be working on it. He has some Java experience, but is new to Web applications; he had written one Web application in Java, using only JSPs. My bet was that it would be easier to get him up to speed on Ruby on Rails (even as I was learning it myself) than to train him on all of the different Java tools and frameworks that I use while simultaneously teaching him the “MVC way” of writing Web applications. It turned out that I was right: he was working productively in Rails very soon after starting, even though he had no previous experience with Ruby or Rails. That never would have happened had I handed him a list of the Java tools and libraries he’d be using.

The game within the game

When the Sony DRM scandal broke, I pointed to a weblog entry arguing that the DRM wasn’t there to prevent customers from pirating music but rather to keep the music off of iPods in order to punish Apple for not supporting competing DRM schemes in its software. Today Joel Spolsky argues that record companies want to force Apple to allow them to price tracks sold via the iTunes Music Store at different levels because it will enable them to exert more control over musicians. As consumers we’re just pawns when it comes to this stuff.

The ideology of information

Mark Schmitt on whether President Bush might have been deceived by his advisors when it came to intelligence on Iraq:

We’re asking very traditional questions: Was information withheld? Was there deceit about the information? Those are the familiar Watergate/Iran-contra questions. But they overlook the Ideology of Information that the administration created. By this I mean the whole practice of evaluating all information going into the war not for its truth value, but for whether it promoted or hindered the administration’s goal of being free to go to war. The President could have been given every bit of intelligence information available, and he and/or Cheney would have reached the same decision because they would have discarded, discounted, or disregarded most of it. Information that was Useful to that goal was put in one box, Not Useful put in another. Entire categories of information were assigned to the Not Useful box because their source was deemed an opponent of U.S. military action, or assumed to have some other motive.

Schneier on Sony’s rootkit

Bruce Schneier has published a solid wrapup of the controversy over Sony’s rootkit, from its original discovery by Mark Russinovich to Sony’s meandering attempts to respond. He also asks a pointed question — why haven’t antivirus software makers done a better job of responding to this problem? He points out that Symantec’s response has been tepid.

The company that makes the anti-virus software I use, Grisoft, makes no mention at all of the rootkit on its Web site. (Granted, we’re talking about spyware rather than a virus here, if you want to be technical.) Spybot, the well-regarded anti-spyware tool, doesn’t mention it on its updates page either. Something tells me that none of these companies want to run afoul of Sony’s legal department or be accused of providing tools to thwart copy protection.

Interestingly, anti-spyware software provider Lavasoft mentions the XCP software on its blog, but not whether their products will do anything about it. To its credit, Microsoft has announced that its anti-spyware tool will remove XCP. Maybe they’re the only company in this market that feels confident locking horns with Sony.

Enter your profile in Google Base

I read a piece this morning about using Google Base for personal profiles, and how it already works as a social software platform. You can enter the same kinds of information about yourself that you can on most other dating/social networking sites, and of course the theory is that once you’ve done so you’ll be easy to find via a basic Google search. I went ahead and entered some basic personal information about myself (not yet published) with the thought that if some old high school or college friend searches for me in Google, or one of the readers of my books tries to get in touch with me, having an entry about myself in Google Base will increase the odds of their actually being able to find me.

Entering your biography in Google Base may also be a good option for people who have trouble getting included in Wikipedia.

The future of relational databases

It sure seems like there’s a nascent trend that involves moving away from relational databases for storage, at least for stuff that will be exposed on the Web. First, Ning released their Web application platform that basically supports PHP and a <a href="http://developerfaq.ning.com/group.php?FAQGroup:title=Using+the+Ning+Content+Store">data store that isn’t relational</a>. Then I read Adam Bosworth’s article Learning from the Web, which argues that today’s relational databases do not embody any of the principles derived from observing what works on the Web.

Yesterday, Google Base was launched. It’s like the Ning content store, except that rather than writing applications to access the items in the database, you access them via search. It’s a certainty that Google will provide an API for accessing the database with your own Web applications at some point. In the meantime, Google Base offers a way to publish structured information online without being burdened with writing or installing an application, or setting up your own database. Both Ning and Google eschew tables and enable you to create objects with attributes, which in Ning’s case can reference other objects.
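The contrast with fixed-schema tables is easy to sketch in Ruby. This is a hypothetical illustration of the general idea, not either company’s actual API: a schemaless item is just a type label plus an open-ended bag of attributes.

```ruby
# Hypothetical sketch of a schemaless "item" in the spirit of Google Base
# or the Ning content store: no fixed columns, just a type label and
# whatever attributes the publisher chooses to supply.
class Item
  attr_reader :type, :attributes

  def initialize(type, attributes = {})
    @type = type
    @attributes = attributes
  end

  def [](key)
    @attributes[key]
  end
end

# Items of different types coexist with entirely different attributes,
# something a single relational table can't do without sparse columns.
recipe = Item.new("recipe", "title" => "Chili",  "cooking_time" => "45 min")
job    = Item.new("job",    "title" => "Editor", "salary" => "40k")

puts recipe["cooking_time"]  # => 45 min
puts job["salary"]           # => 40k
```

Search (rather than a query against a known schema) then becomes the natural way to find things, since there’s no shared set of columns to query against.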

Furthermore, even at the application development level, the best practice for dealing with relational databases is to abstract them away with some kind of object-relational mapping layer. In Ruby on Rails, you use ActiveRecord. Java developers have a number of options to choose from, the most popular probably being Hibernate.
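The idea behind that abstraction can be sketched in plain Ruby. This is a hypothetical mini-mapper, not ActiveRecord’s or Hibernate’s actual API: the ORM layer turns raw row data into domain objects, so application code never handles query results directly.

```ruby
# Hypothetical sketch of the object-relational mapping idea: rows come
# back from the database driver as hashes, and the mapper turns them into
# domain objects. (Real ORMs generate the SQL and mapping automatically.)
class Product
  attr_reader :id, :name, :price

  # Build a domain object from a row hash, as a database driver
  # might return it from "SELECT * FROM products".
  def self.from_row(row)
    obj = allocate
    obj.instance_variable_set(:@id,    row["id"])
    obj.instance_variable_set(:@name,  row["name"])
    obj.instance_variable_set(:@price, row["price"])
    obj
  end
end

row = { "id" => 1, "name" => "Widget", "price" => 9.99 }  # pretend query result
product = Product.from_row(row)
puts product.name  # => Widget
```

The payoff is that the rest of the application deals in `Product` objects, so if the storage layer underneath changes, only the mapping code has to follow.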

The other side of the coin is that today there are probably 1000 times as many servers running relational databases as there were 10 years ago. Unfortunately, they’ve become so common and widely used that they’re also becoming invisible. What’s the takeaway? Learning how to write stored procedures is probably not the best job skill you could pick up as a developer right now.

Update: Simon Willison has a more detailed explanation of how Google Base handles structured data.

Changing the rules

If you’re trying to keep up with the latest on the Sony DRM/copy protection debacle, I’d recommend reading Ed Felten’s Freedom to Tinker, Bruce Schneier, or Boing Boing. What I wanted to talk about a little bit is the implications of this scandal.

I like the convenience of the iTunes Music Store and the prices are fine with me, but I rarely purchase tracks there because I don’t like the idea of Apple’s DRM causing me problems down the line when I want to move my music to a new computer or something. I prefer the freedom that buying a CD offers — you rip it and you have an unencumbered digital copy, and a physical copy to fall back on if your hard drive crashes. It seems to me though that the copy protection Sony licensed changes the rules.

If I fear that any CD I buy will silently install a bunch of crap on my computer, Apple’s DRM doesn’t sound quite so bad. At least I know what I’m getting.