Back on April 12, my Web host, Linode, sent me an email, with no further details, letting me know that I needed to reset my password. Today they announced that their user management application was hacked and that the hackers were able to download their full database, including hashed passwords and encrypted credit card information. The hackers also have the public and private keys to the credit card database, so they can obtain the credit card numbers if they can brute-force the passphrase for the private key. When it comes to security, taking shortcuts is death.
Laws like the Computer Fraud and Abuse Act make criminals of us all. Ludlow describes the inevitable consequences:
In a world in which nearly everyone is technically a felon, we rely on the good judgment of prosecutors to decide who should be targets and how hard the law should come down on them. We have thus entered a legal reality not so different from that faced by Socrates when the Thirty Tyrants ruled Athens, and it is a dangerous one. When everyone is guilty of something, those most harshly prosecuted tend to be the ones that are challenging the established order, poking fun at the authorities, speaking truth to power — in other words, the gadflies of our society.
Check out the whole post at NYTimes.com.
This week The Daily Show mocked the NCAA for ruling a wrestler on a 10% scholarship ineligible for selling rap music. NCAA athletes are not allowed to profit from their own names. It seems crazy because it is crazy. And the NCAA is utterly unsympathetic. If you don’t know why, check out Taylor Branch’s 2011 article in The Atlantic, The Shame of College Sports.
If you think like a security professional, though, the NCAA rule makes perfect sense. Players are not allowed to receive gifts or compensation for playing sports. If they were allowed to be compensated for other endeavors, it would create a loophole big enough to drive a truck through. Top-ranked football recruits could just self-publish e-books on Amazon.com with titles like “Buy This If You Want Me to Attend the University of Alabama” and rake in the dollars. Given a rule that disallows fans from compensating players, lots of other rules follow.
Being in the rules enforcement business is rarely fun. You start out trying to keep people from manufacturing crystal meth, and the next thing you know you have to show your driver’s license at the pharmacy to buy cold medicine.
John Siracusa’s post on Technological Conservatism is one of the best pieces I’ve read in some time. There are many advantages that come with being an experienced professional, but increasing conservatism is one of the disadvantages. Read the whole thing.
As you probably already know, film critic Roger Ebert passed away today. I knew him as a deep and wide-ranging thinker, a humanitarian, and of course as a great film critic. Had he not started blogging, I would probably have known him only as the latter. In fact, I’d likely only have known him as that guy who had a TV show where he rated movies using his thumbs.
Through the Web, Ebert reintroduced himself to all of us. He wrote about his family. He wrote about how to cook meals in a rice cooker. He struggled with how to appropriately take advantage of Amazon’s affiliate program. He took us on many impossibly romantic tours of London.
Here’s what he wrote about his commenters a few months after he initially started blogging:
Your comments have provided me with the best idea of my readers that I have ever had, and you are the readers I have dreamed of. I was writing to you before I was sure you were there. You are thoughtful, engaged, fair, and often the authors of eloquent prose. You take the time to craft comments of hundreds of words. Frequently you are experts, and generous enough to share your knowledge.
Ebert’s work as a critic was a love letter to film. His blog was a love letter to his fellow man. He’ll be sorely missed, but I’m so glad that the Web that so many people I know and respect fought to build and preserve provided the medium for him to share his thoughts with all of us.
I know it is coming, and I do not fear it, because I believe there is nothing on the other side of death to fear. I hope to be spared as much pain as possible on the approach path. I was perfectly content before I was born, and I think of death as the same state.
Roger Ebert on death
We don’t see it much lately because the job market for software engineers is so robust right now, but there’s a real fear among engineers that H-1B visa holders are coming to take our jobs, or at least to lower our compensation. I thought it was worth highlighting this argument that the broad benefits outweigh the potential harms by a large degree:
Consider the interests of every single American who isn’t a skilled engineer. The vast majority of private sector workers in the United States are engaged in local service provision. Maybe we’re in high status local service providing professions (doctors, architects) or maybe we’re in low status ones (retail clerks, maids) but it’s what Americans do. And clearly everyone involved in local service provision benefits if a new skilled worker earning an above-average salary moves to town.
Against that you weigh the possible harm to the software engineers who are already here. I think it’s fair to say that in terms of current and future employment prospects, we have it easier than just about anybody.
Last week one of the data analysts at work asked me to help him out with a script he was writing. The script generates a CSV file and uploads it to an FTP server. He had one file containing a sequence of SQL queries, and a shell script that ran those queries and then uploaded the results via FTP. I thought it would be fun to write up what it took to convert those bits of code into something that meets the definition of a production service in our environment.
The first and most obvious problem was that the script was running on the analyst’s development VM, not on a production server. Relatedly, it was running from his personal crontab, so the only traces that this production service even existed were in his personal space. That seemed wrong. The script also queried tables in his personal schema and had the credentials for his database account hard-coded.
Fortunately we already have a production cron server that’s hooked into our deployment system along with a version controlled directory for the scripts to schedule cron jobs.
Relatedly, we are mostly a PHP shop, and we write cron jobs as command line PHP scripts. This may not be to your tastes (or mine), but it’s what we do. So the script needed to be ported to PHP. It also needed to extend our standard base class for crons. This provides basic features like logging to our centralized log management system, as well as conveniences like locking to prevent overlapping runs of the script and the ability to accept an email address to which to send alerts.
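The locking the base class provides can be sketched as a simple non-blocking file lock. This is an illustrative Python sketch of the pattern, not our actual PHP base class, and the lock-file path is made up:

```python
import fcntl
import sys

def acquire_lock(path):
    # Open (or create) the lock file and try to take an exclusive,
    # non-blocking lock. Returns the open file object on success -- keep a
    # reference to it so the lock is held for the life of the process --
    # or None if another run of the script already holds the lock.
    lock_file = open(path, "w")
    try:
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        lock_file.close()
        return None
    return lock_file

lock = acquire_lock("/tmp/csv_export_demo.lock")
if lock is None:
    sys.exit("previous run still in progress; exiting")
```

The lock is released automatically when the process exits, so a crashed run can't wedge the next one the way a stale PID file can.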
To get all of this working I had to rewrite the script in PHP, implementing the functionality to generate the CSV file and then send it via FTP. The SQL queries required to collect the data create a couple of temporary tables and then run a query against those tables. My first thought was that I would just run the queries natively from PHP through our database library, but temporary tables only last the duration of a database session, and there’s no guarantee that the queries will run within the context of a single session, so the tables were disappearing before I could retrieve data from them.
Instead I had to put all of the queries into one variable and then run them through the command line client for the database using PHP’s proc_open function, piping the contents of the variable to the external process. I also switched things up to use the appropriate database credentials, which required the analyst to update the permissions for that table. Ideally, we’ll eventually change things up so that the data is stored in a production schema.
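The actual script uses PHP’s proc_open, which isn’t shown here, but the same pattern can be sketched in Python with subprocess standing in for it. The client command and queries below are placeholders, since the post doesn’t name the database in use:

```python
import subprocess

def run_sql_batch(client_cmd, sql):
    # Pipe a batch of SQL statements to a database command-line client.
    # Because the whole batch goes through one client invocation, every
    # statement runs in a single database session, so temporary tables
    # created early in the batch still exist when the final SELECT runs.
    result = subprocess.run(
        client_cmd,
        input=sql,
        capture_output=True,
        text=True,
        check=True,  # raise CalledProcessError if the client exits non-zero
    )
    return result.stdout

# Placeholder invocation; the real client, schema, and SQL are not shown
# in the post:
# report = run_sql_batch(
#     ["mysql", "--batch", "reportdb"],
#     "CREATE TEMPORARY TABLE recent AS ...; SELECT * FROM recent;",
# )
```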
At that point, I had a script that would work but it didn’t have any error handling and it wasn’t a subclass of the base cron script we use. Adapting it to use the base cron script was pretty straightforward. Error handling for these types of scripts is a bit more complex. I opted to do one check to see whether the CSV file was created successfully, and then to catch any errors that occurred with FTP and alert. Fortunately, the base cron script makes it easy to send email when failures occur, so I didn’t have to write that part.
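Those two checks, verifying the CSV was generated and trapping FTP failures, might look roughly like this in Python. The alert function is a stand-in for the base cron class’s email alerting, and all names are invented:

```python
import ftplib
import os
import sys

def alert(message):
    # Stand-in for the base cron class's email alerting.
    print(message, file=sys.stderr)

def upload_report(csv_path, host, user, password):
    # Check 1: did the CSV actually get generated?
    if not os.path.isfile(csv_path) or os.path.getsize(csv_path) == 0:
        alert(f"CSV file {csv_path} is missing or empty")
        return False
    # Check 2: trap any FTP failure and alert rather than crash.
    try:
        with ftplib.FTP(host, user, password) as ftp:
            with open(csv_path, "rb") as f:
                ftp.storbinary(f"STOR {os.path.basename(csv_path)}", f)
    except ftplib.all_errors as exc:
        alert(f"FTP upload failed: {exc}")
        return False
    return True
```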
Finally, I just had to pick a time for the script to run, add the crontab entry, and then push the script through our deployment system. Or at least that was the idea. For whatever reason, the script works when I run it manually but does not appear to be running through cron, so I’m running it by hand every day for now. I also realized that if the script runs before the big data job that generates its data finishes, or if that job fails for any reason, then the output of the script will be wrong. That means I need another layer of error handling to detect problems with the big data job and send an alert rather than uploading invalid data.
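One simple way to add that layer, assuming the big data job can be made to touch a sentinel file when it completes successfully, is a freshness check before the upload. The sentinel path and freshness window here are invented for illustration:

```python
import os
import time

def upstream_data_is_fresh(sentinel_path, max_age_seconds=6 * 3600):
    # The upstream big-data job is assumed to touch this sentinel file
    # when it completes successfully. If the file is missing or stale,
    # the export should alert and skip the upload rather than ship
    # invalid data.
    if not os.path.exists(sentinel_path):
        return False
    age = time.time() - os.path.getmtime(sentinel_path)
    return age <= max_age_seconds

if not upstream_data_is_fresh("/var/run/bigdata/export_done"):
    print("upstream job has not completed recently; skipping upload")
```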
Why write this up? To point out that for most projects, getting something to work is just a small part of building a production service. Exporting a CSV file from a database query and uploading it to an FTP server takes just a few minutes. Converting that into a service that runs within the standard infrastructure and handles failure conditions smoothly takes hours.
There are a few takeaways here. The first is that anything we can do to make it easier to build production services is almost certainly worth the investment. Having a proper cron base script was really helpful. I’m creating a subclass of that base class that’s designed just for these specific kinds of jobs to make this work easier next time.
The second is an acknowledgement by everyone involved that getting something working is the beginning of a project, not the end. The work of making a service production-ready isn’t fun or glamorous, but it’s what separates the hacker from the software engineer. Managers need to account for the time it takes to get something ready for production when allocating resources. And everybody needs to be smart about figuring out the level of reliability any service needs. If a service is going to run indefinitely, you need to be certain that it will work when the person who wrote it goes on vacation.
The third is that at any company, people are building services like this all the time outside the production context. You usually find out things went wrong with them at the worst possible time.
Garret Vreeland on taking notes:
Every day, I stuff more articles and links in there, in anticipation of the day when I’ll need to take advantage of them. Yet when I have an issue and need to find a solution, or I am looking for a reference … time and again I simply Google.
I find myself in the same boat. I use Evernote mainly as a repository not for notes but for the weekly status reports I compose for work, and a couple of months ago I did go back and read through them all. I have 4674 bookmarks in Pinboard, and I don’t look at them all that often. It’s easier to just use a Web search.
If Google searched your notes in Keep and showed the results alongside your Web search results, that would be compelling. That probably won’t happen, for the same reason Google doesn’t have an option for searching your Google Drive when you search the Web. It’s a strategy tax.
John Siracusa handicaps the players in the mobile industry based on their dependencies on other companies. This is an interesting basis for analysis that could be applied widely.