18 May 2007
I don’t often write about affairs of state in this space. This is not due to a lack of interest. It is due more to the fear that I hold many of my political views rather too strongly to communicate them effectively. However, there are issues that I think it would be immoral not to oppose publicly. Torture is one of those things.
Krulak and Hoar have written a good piece about the practical downsides of torture over at the Washington Post (go read it, I’ll wait). Their basic argument is that torture enhances the ability of terrorist groups to recruit new members, which is the opposite of what is needed at this time.
If we forfeit our values by signaling that they are negotiable in situations of grave or imminent danger, we drive those undecideds into the arms of the enemy. This way lies defeat, and we are well down the road to it.
I think they are correct that, even on a strictly functional level, torture is a huge net loss for us. However, even if torture were an effective weapon against terrorist organizations I would still be against it.
Arguments in favor of torture generally hinge on the assumption that when the terrorists lose, we win. Unfortunately, this assumption is completely false. The world is not a zero-sum game. Every combination of winners and losers is possible.
By allowing torture we lose, regardless of its impact on terrorists. We lose the respect of the rest of the world. We lose our right not to be tortured. We lose the very essence of ourselves.
As Gregory Djerejian points out in his commentary on Krulak and Hoar’s piece:
history doesn’t advance in linear fashion defined by consistent progress, but perhaps moves more cyclically, with advances in human civilization constantly threatened by reverses.
Hopefully, we can regain what we have lost in the last few years. It would be shameful if my generation were the one to allow the start of a long slide backwards.
18 May 2007
•
Software Development
I have been watching the Semantic Web efforts with guarded interest for the last few years. I really like the idea. However, I have always thought it was probably a pipe dream. The Semantic Web is a chicken-and-egg problem: a lot of data must be published to attract the general developer population, but it takes the general developer population to get a lot of data published.
RDF, SPARQL and the other Semantic Web technologies are pretty uniformly wicked cool. Unfortunately, they are also rather unlike the technologies with which most developers are familiar. It has never been obvious to me how we, as an industry, could get to the Semantic Web from here. But today I became aware of GRDDL, which is the path to the Semantic Web.
As I understand it, GRDDL amounts to this: publish your data in whatever format you like, but include a link to an XSLT transform that will convert your published format into an RDF document. So you can continue to publish your microformatted HTML document and be part of the Semantic Web just by adding a link element.
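To make that concrete, here is a minimal sketch of the consumer side in Ruby, using the stdlib REXML parser. The page and the transform URL are made up for illustration; a real GRDDL client would fetch the referenced XSLT and apply it to the document to get RDF out.

```ruby
require 'rexml/document'

# A hypothetical microformatted page that opts into GRDDL by declaring
# the data-view profile and linking to an XSLT transform in its head.
html = <<XHTML
<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <link rel="transformation" href="http://example.com/hcard2rdf.xsl"/>
  </head>
  <body class="vcard"><span class="fn">Peter Williams</span></body>
</html>
XHTML

# A GRDDL-aware client locates the transformation link; applying that
# XSLT to the page is what yields the RDF document.
doc = REXML::Document.new(html)
transform = REXML::XPath.first(doc, "//x:link[@rel='transformation']",
                               'x' => 'http://www.w3.org/1999/xhtml')
puts transform.attributes['href']
```

The publishing side really is just that one link element; everything else is the page you were already serving.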
My initial reaction to GRDDL is an exquisite combination of “man, there are some really smart people in the world” and “duh, why did I not see that”. That set of feelings is usually a strong indication of a good idea.
01 May 2007
•
Software Development
I recently set up an automated backup system for my (and my wife’s) blog. Based on the recommendation of Mr O’Grady (and my belief that RESTful architectures are a good way to solve most problems) I decided to use Amazon’s S3 as the off-site storage. I did not take the same approach as RedMonk, however, because I wanted to play with S3 a bit more directly.
After playing with it I have to say that I am very impressed. S3’s RESTful API is powerful while being simple enough to get started with right away. The Ruby AWS::S3 library makes it even easier to get started by providing a nice, idiomatic wrapper around S3’s functionality.
My backup solution ended up being a 20-line Ruby script that dumps a database, compresses the dump and then pushes it to S3. Combine that with a couple of crontab entries and I was done.
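A sketch of what such a script looks like, assuming a MySQL database and the AWS::S3 gem; the database name, bucket name, and environment variable names here are placeholders, not the ones I actually used:

```ruby
require 'zlib'
require 'stringio'

# Compress a string in memory with gzip.
def gzip(data)
  io = StringIO.new
  gz = Zlib::GzipWriter.new(io)
  gz.write(data)
  gz.close                       # flushes the gzip footer
  io.string
end

# Dump the database, compress the dump, and push it to S3.
def backup_blog_db
  dump = `mysqldump --opt blog_production`
  key  = "blog-#{Time.now.strftime('%Y%m%d')}.sql.gz"

  require 'aws/s3'               # the AWS::S3 gem
  AWS::S3::Base.establish_connection!(
    :access_key_id     => ENV['AMAZON_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AMAZON_SECRET_ACCESS_KEY'])
  AWS::S3::S3Object.store(key, gzip(dump), 'my-backups-bucket')
end
```

A crontab entry along the lines of `0 3 * * * /usr/bin/ruby /home/me/bin/backup_blog.rb` takes care of running it nightly.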
It gets better, though. I got my first bill today:
Greetings from Amazon Web Services,
This e-mail confirms that your latest billing statement is available on the AWS web site. Your account will be charged the following:
Total: $0.02
Please see the Account Activity area of the AWS web site for detailed account information:
So there you go, a secure remote backup for only 2 cents (and a couple of hours of my time). I think these web service things may be here to stay.
27 Apr 2007
•
Software Development
Rake is a really excellent build tool. It is basically Make on steroids (and minus a few of the annoying inconveniences of Make). If you build software of any sort you owe it to yourself to check out Rake.
The source of my Rake-related euphoria today is that I just used a feature of Rake that is not available in any other build tool that I know of. Namely, I added an action to an existing task. This feature allows you to extend the behavior of a task that you do not directly own, for example, one defined by the framework you are using.
My particular situation was this. I have some data that is absolutely required for the application to function (permissions data in this case). Changes to this data don’t happen at run-time and the code explicitly references these records, which means that while this information is stored in the database it is more akin to code and the data model than to the data managed by the application.
Given that this data is referenced explicitly by the code, it must reside in source control. Rails migrations are an excellent way to manage changes to the data model of an application and, as it turns out, the foundation data too. If you need to add or change some of this foundation data you can just write a migration to add, update or delete the appropriate records.
There is one slight issue with using migrations to manage foundation data, though. Only the structure of the development database gets automatically copied to the test database. So the code that requires the foundation data will fail its tests because that data does not exist. I have run into this problem before. That time I solved it by changing the way Rails creates the test database such that it used the migrations rather than copying the development database’s structure. It is a very nice approach but unfortunately it does not work for my current project.
To solve my problem this time I simply added an action to the db:test:prepare task to copy the data from the roles table. The standard db:test:prepare task provided by Rails dumps the development database’s structure and then creates a clean test database using that dump. For our project it still does that, but it then follows up by dumping the data from the roles table and loading it into the test database as well.

Extending the db:test:prepare task means that all the tasks get an appropriate test database when they need it, without me having to go around adding a dependency to all of them. I love it when my tools let me solve my problems easily.
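The trick is that defining a task a second time in Rake appends the new block to the existing task rather than replacing it. A minimal, self-contained sketch (the STEPS array just stands in for the real schema-dump and data-copy work):

```ruby
require 'rake'
include Rake::DSL   # make the `task` method available outside a Rakefile

STEPS = []

# Pretend this first definition came from Rails -- we do not own it.
task 'db:test:prepare' do
  STEPS << 'rebuild test schema'    # stand-in for the structure dump/load
end

# Re-opening the task with a second block *appends* an action rather
# than replacing the task, so both blocks run when it is invoked.
task 'db:test:prepare' do
  STEPS << 'copy foundation data'   # our extra step: dump/load the roles table
end
```

Invoking `Rake::Task['db:test:prepare'].invoke` then runs both actions, in definition order, and every task that depends on db:test:prepare picks up the extra behavior for free.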
26 Apr 2007
•
Software Development
Charlie Savage and his team over at MapBuzz have decided that it is time to open the doors. MapBuzz is a great place to create and share maps. If you like maps, and really who doesn’t, go give it a try.