uri templates in elixir

I just released my very first Elixir package, UriTemplate. It is an RFC 6570 compliant URI template processor.

I’ve been playing with Elixir for the last few days and i really like it. I haven’t gotten into any of the concurrency stuff it really shines at yet, but it is a really nice language in which to program even without that.

Wow! Spark-cassandra-connector seems to be about twice as fast as Calliope.

Apparently @asus’ idea of warranty “support” is to delay and misdirect but to *never* actually replace the broken device.

API meetup this thursday

We are having an API meetup at Lucky Pie in Louisville, CO on Nov 15th @ 6:30pm. Come share a tasty beverage, a slice of pizza and your opinions on all things API with your fellow API crafters. If you are an API practitioner we’d love to meet you in person.

BTW, if you are in Denver for Defrag this drinkup is a short drive from the Omni Interlocken so please join us.

If you register it’ll help ensure we have enough space, but all are welcome, registered or not.


One of the hardest-won lessons of my career is the power of incrementalism. I am, at my core, an idealist. When i look at a product or code base i tend to see the ways it deviates from my ideals, rather than the ways it is useful or does match my ideals. That is a nice way of saying i tend to come off negative and cynical when discussing… well, almost anything.

This idealism is a powerful motivator to produce elegant code and good products. I suspect it is one of the reasons i have become a good software developer. It does have its dark side though. Overcoming some of the weaknesses of my idealism is an ongoing theme in my life. Most importantly, idealism makes developing a sense of “good enough” very difficult. Paying attention to good enough is one of the most important skills any engineer can develop because otherwise you constantly produce solutions in search of a problem. An almost sure sign of an underdeveloped sense of good enough is the big rewrite. Early in my career i was involved in a couple of big rewrites. Both were successful from a technical perspective but both were a complete failure from a business perspective. They took too long and did not provide any immediate benefits to the customer.

In both cases i was crucial in convincing the business to do the big rewrite.

In the intervening years i have come to realize that, to use a football analogy, real progress is made 3 yards and a cloud of dust at a time. If you want a successful product, or a nice code base you will get it one small improvement at a time. The bigger the change you are contemplating the more uncomfortable you should be. If it is going to take more than a few man weeks of effort there is almost certainly a better, more incremental, way to achieve the goal.

I am not saying you should not rewrite the products you work on. Quite the contrary actually. If you are not in the middle of completely rewriting your code base you are Doing It Wrong. But that rewrite should be broken into many small improvements and it should never end. Those improvements will, over time, constitute a complete rewrite in practical terms. However, the business and customers will continue to get value out of the improvements as they are made, rather than having to wait many months. The developers benefit too, because the changes get released and battle tested in smaller, easier to debug pieces.

While i think all projects should be engaged in the continuous incremental rewrite, every rewrite needs to have a strategic vision. You need to know where you want to be in 1-3 years. Without such a vision you won’t know which incremental improvements to make. Once your team has a shared vision for where the product is headed you can make surprisingly rapid progress toward those goals without disrupting the day to day business. Be prepared for this strategic vision to change over time. As you gain more information about the domain and customers it is inevitable that your thinking will evolve. This is a key benefit of this model. You are continually able to make course corrections because you are always getting new information by putting the improvements in front of customers and getting feedback with very little delay.

Backlogs considered harmful

There are some things we do under the pretense of being useful that are actually harmful. Unscheduled stories and bug reports in your ticket tracking system are an example.

Creating a ticket is easy when you are in the moment. However, once produced these artifacts have to be read and understood multiple times in the future. Each time you read a ticket it costs time. How many times have you given up trying to find some ticket that you think you wrote a long time ago and just entered a new one? How much time have you spent sifting through the same tickets every iteration deciding repeatedly that they are not important enough to actually schedule?

I am not saying that you should not enter tickets in your issue tracker. I am saying that doing so is not free. Therefore, you should consider very carefully whether the story or bug you are about to write will have a net positive value over the life of the project. Most likely it will not.

My rule of thumb is this: do not write it down unless you are willing to schedule it right now.

Willingness to schedule a bit of work is a proxy for its importance. It is easy to pretend everything is top priority. If you are not willing to prioritize a bit of work before something you have already decided you want, it is obviously not very important.

If it is important you will not forget

I think the idea that you will forget something important is the scariest part of this approach. That fear is just silly. If you are passionate about an idea you will not forget it. If it is important to your customers they will not let you forget it.

Let me ask you a question: if neither you nor the customers care enough about an issue to get it on the schedule should you be expending effort on it?

To me the answer is clearly no. If you see a potential issue or have an idea let it rest until it becomes important. Odds are it never will become important and you will have saved a good deal of everyone’s time. If it ever does become important the fact that six months ago you wrote a ticket vaguely related to an issue people are having now will not help anyway. You probably will not even be able to find that old ticket.

Corollary: todo comments considered extra harmful

Notice that my rule of thumb basically rules out todo comments altogether. Every todo comment is not only an unscheduled story, but an unschedulable story. Even if the todo comment were a story in the ticketing system it would never, ever be scheduled. If it had a chance the developer would have written a story instead.

Todos are far worse than mere unscheduled stories. Todo comments are a way for the developer to transfer some of the weight of the decisions that they made to future generations. They are, in effect, a tax on future generations of developers, levied to assuage the author’s insecurities regarding decisions they have made in the code.

To the authors of todo comments i say, own your decisions. If you are not sure of what to do get a second pair of eyes now. Whatever you do, don’t burden future developers with your indecision. Right or wrong it will work out better if you make a reasonable decision and own it until there is some evidence that it was wrong.

Task switching in Git

This thing happens to me pretty often: i start a story, work on it for a while, then something urgent comes up.1 The urgent thing needs to be fixed right away, but i have a lot of changes in my working directory. Unfortunately, the changes i have made are incomplete and non-functional.

The usual suggestion for handling this is git-stash. For a long time, i used stash in this situation myself. However, i often found myself lost in the stash list. If you use stash to store unfinished work your stash list can become quite long. It is easy to forget you have stashed work. It is also easy to do a git stash clear and lose that work.

There are lots of situations in which it can be quite a while before you get back to your stashed changes. For example, if you switch tasks because the business deprioritized the feature. Or if the urgent issue gets interrupted by an emergency issue.

It recently occurred to me that git provides a much more elegant way to deal with unfinished work.

The steps

First, always work in a feature branch. You should be doing this anyway but it is required for this technique to work.

  1. git add -A (on the feature branch)
  2. git commit -m 'WIP'
  3. Switch branches and fix that urgent issue, using git like you always do.
  4. git checkout <feature-branch>
  5. git reset HEAD~1
  6. Continue where you left off. Once you are ready, commit.

This approach commits your in-progress work on the branch to which it belongs, keeping it safe.
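The whole flow can be run end to end in a throwaway repository. This is a sketch, not a transcript from the original post: the branch name feature, the file names, and the commit messages are all invented for illustration.

```shell
#!/bin/sh
# Walk through the WIP-commit workflow in a scratch repository.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main .   # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name Demo

echo 'base' > file.txt
git add -A && git commit -q -m 'initial commit'

# Start a feature branch and get some work half done
git checkout -q -b feature
echo 'half-finished change' >> file.txt

# Steps 1-2: park the unfinished work in a WIP commit on the feature branch
git add -A
git commit -q -m 'WIP'

# Step 3: switch away and handle the urgent issue as usual
git checkout -q main
echo 'the fix' > hotfix.txt
git add -A && git commit -q -m 'urgent fix'

# Steps 4-5: come back and unpack the WIP commit
git checkout -q feature
git reset -q HEAD~1

# Step 6: the unfinished change is back in the working tree, uncommitted
git status --short   # file.txt shows as modified
```

After the reset, the working tree looks exactly as it did before the interruption, and the feature branch history contains no “WIP” commit.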

How it works

Once you do your WIP commit, your history will look something like:
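In text form, the graph at this point would look roughly like this (letters stand in for real commit ids, and the branch names are invented):

```
        C---D---WIP   <- feature (HEAD)
       /
  A---B---E           <- master
```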

That is great for temporarily storing your in-progress work. We definitely don’t want that nasty “WIP” commit in our history long term, though. The git reset HEAD~1 command changes the HEAD pointer of the feature branch back to the commit immediately before the “WIP” commit. That leaves a commit graph something like:
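In text form, with letters standing in for real commit ids:

```
              WIP     (no branch points at it any more)
             /
        C---D         <- feature (HEAD)
       /
  A---B---E           <- master
```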

Once you have completed your changes and committed, the HEAD pointer of the feature branch will be updated to point to the new commits. This leaves the “WIP” commit out of the commit history of the branch forever.

The “WIP” commit is now “unreachable” because no branches or tags point to it. It will be removed by a future git gc, once the reflog entries that reference it have expired.
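Until that garbage collection actually happens, the commit is still recoverable via the reflog. A quick self-contained demonstration (scratch repo, file and commit names invented):

```shell
#!/bin/sh
# Show that a reset-away WIP commit is gone from the branch history
# but still recorded in the reflog until git gc expires it.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main .   # -b requires git >= 2.28
git config user.email demo@example.com
git config user.name Demo

echo 'a' > f.txt
git add -A && git commit -q -m 'initial'
echo 'b' >> f.txt
git add -A && git commit -q -m 'WIP'
git reset -q HEAD~1

git log --oneline     # 'WIP' no longer appears in the branch history...
git reflog | head -3  # ...but the reflog still records it, so it can be restored
```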

git stash definitely has its place but i reserve it for situations where i am going to pop the stash very quickly (eg, i stash, then checkout a different branch, then pop).

  1. I do a lot of customer integration. Once a customer starts testing it is important to keep the turnaround on their blocking issues to a minimum. If you don’t, they get distracted and there’s no telling how long you’ll have to wait before they start testing again.

Mountain West RubyConf 2009


I am going to Mountain West RubyConf this weekend. I am very excited. Last year this was a great conference and the schedule looks great this year too. If you are going to be there too let me know. One of the great things about these conferences is all the great people you get to meet, so hopefully i’ll see you there.

Want a Job?

As you have probably noticed, I recently started a new job. Which means that I also recently left a job.

The job I left was at Absolute Performance, and it was a pretty good gig. The good news is that my leaving means that there is a spot for you. If you are interested in working on some cool Ruby, Java and C++ code with a really great team you should send them a resume. Oh, and don’t forget to tell them I sent you, maybe they will buy me a lunch or something.


My new boss is contemplating whether or not HTTP will remain the protocol of choice in the future. He seems to have reached the conclusion that XMPP is a better protocol than HTTP for the network infrastructure we have today.

With today’s connection characteristics, I wonder if HTTP would have been the weapon of choice 15-20 years ago? I doubt it.

Based on this conclusion Jud appears to believe that XMPP will replace HTTP in the future as the protocol of choice. I disagree with Jud on both points.

The Internet is much more reliable today than it has ever been before. It is so good, in fact, that there are many situations where you can trust the network these days. This is particularly true where high levels of reliability are not required. However, the network is still not perfect, nor is it likely to ever be. Worse yet, the software that uses the network is still depressingly flaky.

More importantly, I don’t think HTTP “won” because of problems with the network. HTTP is ubiquitous today because it facilitates a programming model that can be used to solve some really hard problems reasonably easily. That programming model is an implementation of the REST architectural style. The constraints of REST allow building highly scalable and reliable systems far more easily than any other approach available today.

In his post Jud speaks of HTTP as if it were a transport protocol. As a transport HTTP is not terribly compelling. It has fairly high overhead for individual requests. Its connection model usually ends up creating more TCP connections than absolutely necessary. It allows a lot of variability in the capabilities of clients. And so on.

However, HTTP is decidedly not a transport protocol. It is an application protocol. HTTP provides a sophisticated set of semantics specifically designed to facilitate the implementation, and optimization, of REST style applications.

XMPP, on the other hand, is a transport protocol (unlike HTTP). To be precise it is a (near) real-time message transport protocol. If you need that, XMPP is an excellent choice, particularly if the messages you are dealing with have a limited duration of meaningfulness. For example, if your application loses its connectivity to the message sender for any significant period of time it is likely that the application will not receive at least some of the messages sent via XMPP during that time. The server may spool some messages, but it is completely unreasonable to expect APIs to keep track of an arbitrary number of undelivered messages for an arbitrary number of clients. The costs of doing that are just too high for a high volume producer to bear.

I think XMPP will continue to get more penetration and mind share. It is a good protocol. It is not a competitor to HTTP, though. The two protocols serve very different purposes. I expect that many systems will utilize both. If you have a need for real-time messaging and you have relatively weak reliability requirements, or you are willing and able to invest significant effort implementing reliability in your application, use XMPP. But real-time messaging does not an application make.

HTTP is not a “dinosaur”, as Jud puts it; it is a shockingly advanced piece of alien technology that we (as an industry) have only recently discovered how to fully utilize. Actually, I am pretty sure we have not yet figured out how to fully utilize it. We will continue to see more and more applications and data service APIs implemented using the wicked cool semantics of REST/HTTP.