The continuous integration tool chain

Since last year, I’ve been somewhat obsessed with continuous deployment. One of my work projects for this year is to start moving toward a continuous deployment model. Making progress is going to require a lot of engineering work, and also a few cultural changes.

My first big goal is to start deploying bug fixes as soon as they have been tested. Eventually it would be nice to break features out of the deployment process, but I think that if you have identified a bug and checked in a fix, the fix shouldn’t wait around for the next scheduled release to go live.

In September, I wrote about what you need to do to prepare for continuous deployment. Now I’m going to write a little bit about the tools I’m planning on using.

For continuous integration, I'm using Hudson. It's surprisingly simple to set up as long as you have a build script that will compile all your code and run your tests. The first step is to get it to check out our code, make a build, and run unit tests against the build. I've accomplished that on my local machine; the next step will be to set it up on a server and have it start running the build automatically when people check in changes.
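
Hudson doesn't much care what the build script looks like as long as it can invoke it and find test results afterwards. For a Java project, that can be as simple as an Ant file with a compile target and a test target. Here's a minimal sketch under that assumption; the project name, paths, and target names are hypothetical, not our actual build:

    <project name="myapp" default="test" basedir=".">
      <property name="src.dir" value="src"/>
      <property name="build.dir" value="build/classes"/>

      <!-- Compile all of the source -->
      <target name="compile">
        <mkdir dir="${build.dir}"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}"/>
      </target>

      <!-- Run the unit tests; the XML reports are what Hudson
           picks up for its test result trend reporting -->
      <target name="test" depends="compile">
        <mkdir dir="reports"/>
        <junit printsummary="yes" haltonfailure="yes">
          <classpath path="${build.dir}"/>
          <formatter type="xml"/>
          <batchtest todir="reports">
            <fileset dir="${build.dir}" includes="**/*Test.class"/>
          </batchtest>
        </junit>
      </target>
    </project>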

To improve test coverage, I'm using the Eclipse plugin EclEmma. It runs your unit tests in coverage mode and shows exactly which lines of code they don't exercise. I find that test coverage tools are indispensable, especially when you're working with a code base that wasn't written under a TDD regime. (A fair amount of the code in this code base was written before TDD was even under discussion.)

For testing Web interfaces, I plan on using Selenium. Ben Rometsch at Carsonified explains how to run Selenium tests through Hudson.
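
I haven't written ours yet, but to give a sense of what these tests look like, here's a minimal sketch using the Selenium RC Java client with JUnit. The URL, locators, and expected text are all made up for the example:

    import com.thoughtworks.selenium.DefaultSelenium;
    import junit.framework.TestCase;

    public class LoginTest extends TestCase {
        private DefaultSelenium selenium;

        public void setUp() {
            // Assumes a Selenium RC server running locally on the
            // default port, driving Firefox
            selenium = new DefaultSelenium("localhost", 4444,
                    "*firefox", "http://localhost:8080/");
            selenium.start();
        }

        public void testBadPasswordIsRejected() {
            selenium.open("/login");
            selenium.type("username", "testuser");
            selenium.type("password", "not-the-password");
            selenium.click("submit");
            selenium.waitForPageToLoad("30000");
            assertTrue(selenium.isTextPresent("Invalid username or password"));
        }

        public void tearDown() {
            selenium.stop();
        }
    }

Because these are just JUnit tests, Hudson can report their results the same way it reports the unit test results.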

We also have a Web services component of our application that I plan on testing end to end. For those purposes, I'm writing a tool in Ruby that accepts tests in YAML format, submits a request to the Web service, validates the response against the expected data, and then checks the database to make sure the proper data was stored there as well. (There's a sketch of the YAML format right after the list below.) At some point I'll hook that into Hudson too, and then the process will look something like this:

  1. Check out the code
  2. Make a build
  3. Run the unit tests
  4. Set up a test version of the database
  5. Deploy the application in a test environment
  6. Start the application
  7. Run the Web service tests
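
To make the Ruby tool's input concrete, here's a hypothetical example of what a single YAML test case might look like. The field names are invented for illustration, not the actual format:

    # One end-to-end test for the Web service
    name: create_customer
    request:
      method: POST
      path: /customers
      body:
        name: Test Customer
        email: test@example.com
    response:
      status: 200
      body:
        result: ok
    database:
      query: SELECT COUNT(*) FROM customers WHERE email = 'test@example.com'
      expect: 1

Keeping the test cases in a data format rather than in code means anyone should be able to add a new case without touching the tool itself.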

The Web interface is a separate project and will follow its own process.

Getting continuous integration up and running is but one of four steps toward getting a continuous deployment regime set up. The next big step I’m taking is nailing down the build process and then updating the build scripts to remove a few manual steps we’re performing right now.

3 Comments

  1. Since Hudson allows project dependencies, we separate a lot of the steps you describe into individual projects. For example, compiling and running the unit tests is a separate project from compiling and running the integration tests (which use the DB without deploying the application), and these trigger the building of the application as a WAR. The RESTful service tests then deploy that WAR as it would be deployed in production, and are a separate project from the browser tests, and so on.

  2. Do you have unit tests for your JS or are you just planning to use Selenium for integration testing? I guess it depends on how you use JS whether it’s worth doing, but if you’re planning on writing unit tests, you can integrate them into your Selenium/Hudson CI toolchain. Check out Nicholas Zakas’s YUI Test library for the required pixie dust:

    http://github.com/nzakas/yuitest

  3. I am not even really diving into unit tests for the JavaScript yet. Sadly, the core business logic has to get the treatment first. So we'll probably start with Selenium tests for the basic Web interface functionality and then dig further into the JavaScript later on.
