rc3.org

Strong opinions, weakly held

Month: March 2006

Who’s worse, Bush 43 or Reagan?

Matthew Yglesias says Ronald Reagan. Mark Schmitt says George W. Bush. Josh Marshall says George W. Bush. I’m going to go with the elder statesmen here and say George W. Bush. Forget Democrats. I think that after Reagan’s eight years in office, most Republicans and independents would have said that America was better off. Something tells me in 2009 they’ll be saying something different.

New Yorker on Bill O’Reilly

I’m linking to Nicholas Lemann’s New Yorker profile of Bill O’Reilly just so I can republish this one bit:

Mainly, O’Reilly, like every political talk-show host with a big following, is a populist, who, in his beyond-irony way, is a rich, middle-aged white guy aligned with the ruling party, and who has the guts to stand up to the elitists who run (but also hate) this country. To say that that doesn’t make any sense is to deny oneself the pleasure that a close study of O’Reilly affords.

Brilliant.

Third anniversary of the Iraq invasion

I’ve been going back and reading the stuff I wrote about Iraq in the leadup to the invasion. Here’s a link to my post on Bush’s speech announcing the start of the war from March 17, 2003.

I wish I had been wrong when I wrote this:

I’ve given both sides of the debate very serious consideration, and unlike most neocons and warmongers, I’ve actually read The Threatening Storm. What I found today when I heard a reporter on the radio talking about how people in Baghdad were lining up at pharmacies to get all the medicine they can and filling up their cars with gas for all the good it will do is that my reaction against this war is a lot more visceral than I would have imagined. I grew up on the Gulf coast, and I can remember what it was like when we heard there was a hurricane coming. We did what we could and hoped against hope that the coming destruction would miss us, and it always did. What must it be like to be in a city in Iraq right now? You know the destruction is coming, and you know that the only thing that will save you now is luck.

I sit here in America, and I ponder the fact that we’re the people who are about to inflict that on another country. Not because they’ve attacked us, and not because they’re preparing to attack us, but because they might possibly attack us. I won’t argue with anyone who says that Saddam Hussein is a brutal, oppressive dictator who deserves whatever fate befalls him, but there are literally millions of people who are about to stop being Saddam Hussein’s victims and start being our victims. The United States is about to be the disaster that befalls them. And when I look at President Bush, Donald Rumsfeld, and their ghoulish set of war-loving minions, I don’t think they appreciate the gravity of that.

Ruby on Rails Migrations

I just started using the migrations feature of Ruby on Rails. Migrations are a way to capture changes to a database schema so that you can apply them without destroying data, and so that you can roll them back if you made a mistake. Before I understood how they worked, I thought they were basically magical and was hesitant to try them out because I don’t believe in magic. Some Rails developers I’m working with were extolling them, so I investigated further and discovered that they are not magic, but that they are pretty darn useful.

One thing you learn about Rails is that just about every file you touch is a full-blown Ruby script, whether it’s a unit test, a build file (or Rakefile, as they’re known in the Rails world), or a database migration script. The only thing that differentiates them is how they’re used and which libraries they import. So a migration script is just a Ruby script that has access to a bunch of methods that perform database operations. It also has access to all of the model classes in your application, so you can modify the data in your database from within a migration as well. Fundamentally, a migration script isn’t all that different from a file full of SQL statements that modify a database schema, except for perhaps a small usability improvement.

Each migration script has two methods that it has to implement, one to apply the changes in the migration and another to roll them back. So if you want to add a new table called “widgets” to your database, in your migration script you add the code to create the table to the “up” method, and the code to drop the table to the “down” method.
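To make that concrete, here’s a minimal sketch of such a migration (the class name and the column are placeholders I made up):

class AddWidgets < ActiveRecord::Migration
  # Applied when migrating forward
  def self.up
    create_table :widgets do |t|
      t.column :name, :string
    end
  end

  # Applied when rolling back
  def self.down
    drop_table :widgets
  end
end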

As I said, there’s nothing magical about the migration files themselves. They’re just collections of database operations. What’s more interesting is how Ruby on Rails applies them. Migration files are versioned, and the version number is part of the file name. The first migration has a name like 001_initial_schema.rb. You apply it using the command rake migrate. So if another developer checks in versions two through five of the schema, the next time you run rake migrate it will apply each of the migrations in turn to bring your schema up to the most recent revision. Of course you can also use it when deploying to production, eliminating a lot of risk that comes from hacking on a live schema manually to apply changes that were made since the last version was deployed.
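So the migrate directory of a project might look something like this (the file names past the first are invented for illustration):

db/migrate/001_initial_schema.rb
db/migrate/002_add_widgets.rb
db/migrate/003_add_color_to_widgets.rb

Rails records the current version in a schema_info table in your database, which is how rake migrate knows which of these scripts still need to be applied.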

All in all, migrations are an elegant solution to a problem that’s common to all Web applications, regardless of the language that they’re written in. They could easily be ported to Ant for Java applications, or to Perl or PHP scripts for those platforms as well. Maybe I’m missing the fact that such tools are already common, but I haven’t seen them.

The only question for me with migrations is how best to include them in my workflow. For example, let’s say you’re adding a new feature to your application that requires you to add a column called “color” to a table called “widgets”. Do you generate a new migration script (Rails comes with a generator that will automatically build a skeleton of a migration script for you with the proper version number), add the command to add the column to that migration script, and then run rake migrate, or do you modify the schema yourself and just create the migration before you check your code in?
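For reference, invoking the generator looks like this; Rails picks the next version number itself, so the exact file name is a guess:

$ ruby script/generate migration AddColorToWidgets

That creates something like db/migrate/003_add_color_to_widgets.rb containing empty up and down methods for you to fill in.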

If you do your initial work in the migration script, the problem is that if you add more stuff later, you have to roll back the migration and then run it again to apply all of the changes to your development database. On the other hand, if you make the schema changes by hand, you increase the risk of checking in a migration that has not been well tested. After some testing, it seems like the best approach is to create one migration for each change you come up with. Let’s say you’re adding a comments feature to a blog, so you write a migration that creates the table for comments and run the migration on your local system. You realize later that you need to store the URL of the commenter. The best approach, in terms of using migrations most efficiently, is to create a new migration to add that field rather than editing the previous migration. At least that’s how it seems to me.
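Sketching out that comments example, the follow-up change becomes its own tiny migration rather than an edit to the one that created the table (the column name here is my invention):

class AddCommenterUrlToComments < ActiveRecord::Migration
  def self.up
    add_column :comments, :commenter_url, :string
  end

  def self.down
    remove_column :comments, :commenter_url
  end
end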

Where the action is

In case you missed my initial announcement, I’ve started a link blog called Of Interest. That seems to be where I’m spending most of my energy currently, mainly because it gives me an opportunity to link out to a lot of the weblogs I read without having to write posts here at rc3.org (which feel kind of heavy). At some point I’ll integrate the two sites, but for now there’s not even a link from the rc3.org home page to Of Interest.

Universal Firefox and Thunderbird

This morning I decided I could no longer wait for the official universal builds of Firefox and Thunderbird, so I downloaded the unofficial builds here. I’m actually sticking with Camino for now, because it felt a bit faster than Firefox in my 10 second trial run. Thunderbird was, for some reason, one of the slowest applications under Rosetta, but of course the universal build is blindingly fast. Don’t wait to upgrade.

Ruby pays off

When I started my new job last year, one of the most exciting things about it was that we would be writing several applications from the ground up, and that it would be my job (in part) to decide which platforms to use to write these applications. I’ve been working for roughly five years writing Java/J2EE Web applications, but when the time came to pick a platform for this project, I chose Ruby on Rails. The last product I built was a huge pile of business logic with a very thin Web services layer on top. For that one I’d probably choose to use Java again. These applications are mostly content management tools, and play perfectly into the strengths of Ruby on Rails.

Anyway, today we launched our first small Ruby on Rails application — a piece of the identity management stuff I was talking about awhile back. It’s a simple account management application (sign up, reset password, update profile, etc.) which stores users both in its own database and in an LDAP server’s database. We already have a couple of internal PHP applications, several MediaWiki installs, and a Jabber server that are authenticating users via LDAP, so this gets us most of the way toward having centralized account management, although we will probably also add a Web service for authentication for some applications that are yet to be written.

We already had some accounts on a different LDAP server, so I had to migrate those users to the new database. This was one of the places where the decision to use Ruby on Rails really paid off. Most people who are new to Ruby think of it in the Rails context, but it originated as a saner way to do the stuff Perl does. Given a flat file of old accounts, parsing the data with Ruby was really simple, as I expected. The cool thing was that by including my Rails environment configuration, I could load up the entire Rails environment and take advantage of it from my script as well. That meant that I could use ActiveRecord, the object-relational mapping tool in Rails, to perform all of my database tasks, and ActionMailer, the mail component I was already using in my Web application, to send emails to all of the users to let them know that their passwords had been changed. In other words, I was able to use lots of framework code outside its expected context and save a lot of time doing so.
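Here’s a sketch of what that kind of script can look like. The flat file format, the field names, and the AccountMailer class below are stand-ins I made up for illustration:

# Load the full Rails environment so models and mailers are available
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")

IO.foreach("old_accounts.txt") do |line|
  name, email = line.chomp.split("\t")
  # ActiveRecord handles the SQL; no hand-written INSERT statements
  user = User.create(:name => name, :email => email)
  # AccountMailer is a hypothetical ActionMailer class; Rails dispatches
  # deliver_password_changed to a password_changed(user) method on it
  AccountMailer.deliver_password_changed(user)
end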

After migrating the accounts, I realized that I had not created entries in a table called ldap_entries. Users must have an entry in that table to authenticate against the LDAP server. Fortunately, my user model already had a method called generate_ldap_entry that’s called when users sign up via the Web interface, so to add the entries for every user in the database I was able to use a two line script:

# Pull in the full Rails environment, making the User model available
require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
# Backfill an LDAP entry for every existing user
User.find_all.each { |user| user.generate_ldap_entry }

That’s real power.

The MacBook Pro is hot

The MacBook Pro is hot, and by that, I don’t mean that it’s the subject of much lust and attention. I mean that you could use it as a portable camp stove. After using mine for a couple of weeks, that’s my one big complaint. It’s fast, it has worked like a charm, and the display is wonderful. The problem is that the area where you place your left wrist when you’re typing gets rather hot in normal usage. It’s distracting enough that I sometimes sit and wonder which of the applications I’m running might be causing it to heat up. Trying to change your usage patterns in order to make your computer run cooler generally isn’t ideal.

That said, I’m not giving it back.

Standard implementations versus standards documents

Jon Udell recaps a talk at ETech arguing that what the world needs is not standards documents but rather standard implementations. In other words, rather than writing a specification for the Atom protocol, in an ideal world there would be standard, open source implementations of libraries to produce and consume Atom documents.

It’s an interesting idea in theory, but how can we have one true implementation when there are so many languages and platforms out there? If the one true implementation of Atom is written in Python, a Java programmer has to recreate it by reading the Python source rather than by reading a standards document.

I do think that standards bodies should write conformance testing tools to release with their standards, so that you can be sure that your implementation of a standard isn’t broken.

Ruby on Rails: A big fixture failing

This is a somewhat obscure Ruby on Rails question, but I thought I’d ask it anyway. Is it the case that the code that reads test fixtures into your database ignores any settings you apply to your models? It sure looks to me like if you point a model at a table name other than the default using set_table_name, or override the default connection information to associate a particular model with a different database, the part of the testing framework that reads the fixtures just ignores it completely. That seems like a huge limitation of fixtures, which are otherwise very useful.

For those of you who don’t use Ruby on Rails, fixtures provide a way to easily seed your unit tests with data, as well as wipe out data between tests. The way Rails testing usually works is that you create a database that’s just for testing and set up some fixtures with test data; that way you can quickly and easily run your unit tests against a clean database to ensure that data errors aren’t causing your code to break, and to isolate your test data from data that might actually be useful.
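For the curious, a fixture is just a YAML file named after the table it populates, and each test case declares the fixtures it needs. A minimal sketch, with invented names:

# test/fixtures/comments.yml
first_comment:
  id: 1
  body: Nice post

# test/unit/comment_test.rb
require File.dirname(__FILE__) + '/../test_helper'

class CommentTest < Test::Unit::TestCase
  fixtures :comments  # empties the comments table, then loads the YAML above

  def test_fixture_loaded
    assert_equal 1, Comment.count
  end
end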
