(Another) Rest Controller for Rails

Charlie has released his take on a RestController for Rails.

That is very sweet. It is great to see more work on RESTful Rails. It seems to me that each attempt gets closer to an approach I could believe in and be proud of. And, I get a warm fuzzy feeling any time I see a domain specific language developing. The resource handler Charlie has created is definitely part of a DSL there.

He brings up a few issues that result from his implementation, some of which are important and some of which, IMHO, are not.

Leaky Abstractions

Charlie points out that the abstractions start to leak a little when you get to creating the views for a RestController.

The main issue is the method renaming. You have to know about it since you need to create templates called get.rhtml, get_member.rhtml, etc. It also comes into play if you want to turn filters on or off.

That is very unfortunate. The easiest way to solve this problem might be to rethink what makes up a controller. The RestController design seems intent on combining the functionality of a cluster of related resources. In the example he provides, the ProductController supports interaction with the following resources:

* every known product
* the collection containing every known product
* an editor for product resources
* a creator for product resources

This set of resources is very cohesive and quite coupled, so combining them into a single controller is reasonable. But it causes this problem: you have to know that the RestController is going to take

resource :Member do
  def get
    # ...
  end
end

and turn it into an action named get_member.

Perhaps it would be better to conceptualize a controller as a bit of code that mediates interaction with exactly one type of resource. With this view of the world you would end up with more, smaller controllers. Charlie’s product example would look more like:

class ProductController < ApplicationController
  include YarController  # that's YetAnotherRestController

  verb :get do
    @product = Product.find(params[:id])
  end

  verb :put do
    @product = Product.find(params[:id])
    begin
      @product.update_attributes!(params[:product])  # the update that may raise on invalid data
      flash[:notice] = 'Product was successfully updated.'
      redirect_to :id => @product
    rescue => e
      # Send the current invalid values to the editor via the flash
      flash[:product] = @product
      redirect_to :resource => :editor, :id => @product
    end
  end

  verb :delete do
    Product.find(params[:id]).destroy
    redirect_to :id => nil, :resource => nil
  end
end

class ProductsController < ApplicationController
  include YarController

  verb :get do
    @product_pages, @products = paginate :products, :per_page => 10
  end

  verb :post do
    @product = Product.new(params[:product])
    begin
      @product.save!  # the save that may raise on invalid data
      flash[:notice] = 'Product was successfully created.'
      redirect_to :resource => :collection
    rescue => e
      flash[:product] = @product
      redirect_to :resource => :editor
    end
  end
end

And so on… The main benefit of this is that the template rendered for a GET of ‘http://mystore.example/product/243’ is ‘app/views/product/get.rhtml’ and, I think, the issues with filters go away, too. The downside is that you end up with four controllers for each basic type of resource you expose: one for the basic resource type, one for the collection of all of those basic resources, one for the creator resource, and one for the editor resource. I don’t know if the extra boilerplate code is worth the benefits, but it feels like it might be.


Charlie also points out

a pure REST solution does not work with HTML forms since browsers don’t support PUT and DELETE

He is absolutely correct. However, this tunneling-PUT-and-DELETE-over-POST kludge does not bother me very much. I will now take a moment to revel in being more pragmatic than Charlie, quite possibly for the first time since I met him seven years ago. Anyway, it is ugly that HTML does not support PUT and DELETE, but it is still very workable.
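The usual workaround is worth sketching: the form does a POST and carries the real verb in a hidden field (Rails eventually standardized on a hidden `_method` field), and the server rewrites the request method before dispatch. A minimal sketch of that rewriting step, with the function name invented for illustration:

```ruby
# Sketch of tunneling PUT/DELETE over POST via a hidden "_method"
# form field. The method name here is an assumption, not a real API.
def effective_method(request_method, params)
  tunneled = params['_method'].to_s.downcase
  if request_method == 'post' && %w[put delete].include?(tunneled)
    tunneled  # honor the tunneled verb
  else
    request_method  # anything else passes through untouched
  end
end
```

Only POSTs are eligible for rewriting, so a stray `_method` on a GET cannot change its semantics.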

Handling Bad Data

Finally, there is an issue with handling failed PUT/POST attempts. This is the one that bothers me the most. It is not really all that bad from a pragmatic standpoint; storing this info in state works fine. However, it implies a certain weakness in my world view because I did not see it coming.

If the post fails we have to store the ill-formed product into the flash and redirect back to the editor since its at a different URL.

The fundamental problem here is that the separate editor resource will PUT the modified resource when you click save/submit. But what if you messed it up and, say, violated the business rule that blue products must have a price divisible by three? In a normal Rails app that proposed change would fail validation and the update action would just re-render the edit page with the bad fields highlighted. But in a RESTful world the editor and the validation code are separate, and it is wrong from a REST standpoint to just render the editor resource in response to a product resource request. However, if you don’t do that, you need to get the form data, and which fields are bad, from the previous attempt so that you can re-render the editor with the information the user previously entered and what was wrong with it.

One way you could solve this problem is to allow the creation of “invalid” resources. For example, say you require a product to have a description, but you receive a POST to ‘http://mystore.example/products’ without one. You could issue the product an ID and store it in its invalid state (without a description) and then redirect the browser to the editor resource for that newly created, but invalid, product. That feels really clean from a design standpoint, but I am not sure how difficult it would be to implement. And you would certainly end up with permanently invalid resources, which might be hard to manage in the future.
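To make the idea concrete, here is a tiny sketch of a store that issues an ID to every POSTed product, valid or not; the controller could then redirect to the editor for that ID. Everything here (the class name, method names, and the description-required rule) is invented for illustration:

```ruby
# Hypothetical store that accepts invalid resources: every create
# gets an ID and is persisted as-is, so the editor has a real URL.
class ProductStore
  def initialize
    @next_id = 0
    @records = {}
  end

  # Always succeeds, even if the attributes fail validation.
  def create(attrs)
    id = (@next_id += 1)
    @records[id] = attrs
    id
  end

  # The business rule from the example: a product needs a description.
  def valid?(id)
    !@records[id][:description].to_s.empty?
  end
end
```

A POST without a description would then answer with a redirect to the editor for the new ID instead of an error, and the validity check runs later, when someone tries to promote the resource to its “real” state.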

Martin Fowler on LOP

Martin Fowler has done a nice write up about Language Oriented Programming and the breed of tools that have recently appeared to support it.

RE: Dynamicity and Throwing Money at Problems

Brian McCallister has an interesting post on how XML is used by the Java community. I think he is right that most of the XML dialects being used in the Java world go against the grain of Java. However, I think they are not being used for dynamicity, at least by his definition. Sure, the dynamicity argument is there, but it is a red herring because, as Brian points out, no one actually uses the dynamicity. What these Java developers are really doing is creating domain specific languages. They need to do this because, in many cases, the equivalent functionality written in Java would be an excessive amount of code.

And therein lies the rub: if using plain Java is too expensive and hard to maintain, then the logical thing to do is to switch to a language in which solving your problem is not too expensive and hard to maintain. Unfortunately, most Java developers either A) are not allowed to even consider using a language other than Java or B) have some emotional ties to Java. If A is true, the logical thing to do is to write the language you really need, and want, but to call it “configuration” so that it sounds like you are still writing the app in Java. If B is true, the logical thing to do is to write the language you really need but to call it “configuration” so that it sounds like you are still writing the app in Java. Notice that those two are pretty much exactly the same. Either way you end up with a custom programming language specific to your domain, and you call it “configuration” even though the contents are obviously source code.

Coding Is Not Construction

The other day I was talking to a colleague and I compared software development with building a building. I have heard this analogy often and there are a lot of similarities. (For example, most buildings and software systems are, at least partly, custom.) I think there is much to be learned from this analogy when it is correctly applied; however, it is more often than not misapplied, and when it is, it leads to all sorts of false conclusions.

The basics of this analogy are that building construction and software development have the following phases:

1. Someone has an idea about what to build.

2. An architect/engineer designs the building or software by drawing a set of pictures and writing some text about the thing to be built.

3. A bunch of laborers use the documents produced in the design step to produce the final product.

4. Sell the building or software.

People often equate construction with coding when applying this process to software development. The RUP process, for example, uses these phases. But coding is design, not construction. The construction phase of building a building is more equivalent to compiling in software development. This is a bit easier to see if you look at what the output of a project is. In a building project the output is the building. In a software development project the output is the executable, and its supporting data, not the code. In the software industry we have already completely automated the construction phase. I think that we already know, subconsciously at least, that coding is not construction, because we call systems like make, ant, etc. “build” tools, implying that they construct the final product.

When you write code you are not producing the final product; you are producing a set of instructions for the construction team — the compiler and build tool — in much the same way an architect of a building produces instructions in the form of a set of blueprints. This distinction may not seem particularly important at first, but the incorrect equation of coding with construction leads to some bad conclusions. Some examples are component-based software engineering and certain types of outsourcing.

What we call designs in software development are more like the artist’s rendering of a building than what architects produce as input to the building construction phase. While these nice pictures are useful, I think we have done ourselves a disservice by calling them designs. Calling them designs implies that all the necessary information for construction is present, and it never is. Design choices keep getting made until the day you freeze the code.

If we want to improve software development, what we need are better construction teams, not off-the-shelf walls or structural designs for each floor done in different lower-cost countries. Architects do not have to design a house down to the level of detail at which software developers write code, because physical construction teams can fill in a lot of detail by themselves. I think this is why DSLs are often a big win. The compiler (or interpreter) for the DSL can fill in a lot of detail based on its understanding of the domain.

{Update: Fixed a couple of spelling errors.}


Jon Udell on DSLs

I think he has it right: DSLs are the way of the future. You only need to look at the proliferation of XML flavors used in the Java world to see that basically everyone has decided that using a DSL is better than writing Java code. I think this is especially telling in that XML is a horrible programming language, and still using XML is easier/better than hand coding the equivalent in the generic language. Just imagine what could happen if these languages were designed to be easy to understand.

One of the reasons I really like Ruby is that it is easy to implement stuff that looks and feels like syntax but is really just normal code. This allows you to extend the language in ways that make your code more obvious. Just take the member access modifiers as an example. If you want to make a method private you do the following:

private

def my_method
  # do something
end

In most languages “private” is a keyword that the compiler understands, but in Ruby it is just a class method that says “methods defined on this class from here on are private until otherwise specified”. The power of this is pretty amazing when it is used correctly.
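A small self-contained illustration (class and method names invented) of `private` acting as an ordinary method call rather than a keyword:

```ruby
class Greeter
  def hello
    secret  # implicit receiver, so calling the private method is fine
  end

  private  # just a method call; methods defined below are private

  def secret
    'hi'
  end
end
```

`Greeter.new.hello` works, while `Greeter.new.secret` raises `NoMethodError`, because private methods cannot be called with an explicit receiver.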

Another example is Rake (I stole these examples from Jim Weirich’s Rake tutorial). Rake is yet another build system, like Make and Ant, but it is implemented as a set of extensions to Ruby so that your build script is straight Ruby, while the common operations, like task dependencies, are expressed succinctly and in a way that is easy to understand. For example:

file 'main.o' => ["main.c", "greet.h"] do
  sh "cc -c -o main.o main.c"
end

‘file’ defines a task that creates a file. File tasks know things like: if my file does not exist, or any of the files on which I depend are newer than my file, I need to execute; otherwise I am a no-op. Just think how much less obvious that would be written out in a general-purpose language — or in an XML dialect. And even better, you have a full-strength language at your disposal if you need to do something that the build system’s developers didn’t anticipate, which you will.
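The staleness rule that file tasks follow can be sketched in a few lines of plain Ruby. This is a simplification of what Rake actually does internally, and `stale?` is an invented name:

```ruby
# Simplified version of the file-task rule described above: rebuild
# when the target is missing or any prerequisite is newer than it.
def stale?(target, prerequisites)
  return true unless File.exist?(target)
  prerequisites.any? { |p| File.mtime(p) > File.mtime(target) }
end
```

A file task runs its block only when `stale?` would be true for its target and prerequisites; otherwise it is a no-op.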

Back to Jon’s article. I am not sure there will be a consolidation of environments, in the near future at least. It would be really nice if this were true, but there are problems with all the obvious contenders. Most of the open source community will not accept a VM that is not fork-able (by which I mean that they cannot fork the code base if they do not like the direction it is moving). The Sun JVM is proprietary and the only open source JVMs are incomplete. The .NET CLR is an option, but everyone is afraid of MS. The CLR is probably safe because of its status as an ECMA standard, but there does not seem to be much movement to port existing languages to the CLR, even though Mono claims to be in pretty good shape these days. Then there is Parrot — the Perl 6 VM — but it is not complete yet, and it is not clear when it will be ready or how well it will support non-Perl languages.

You may have noticed the above is mostly about what the open source community will accept. I think most of the innovation in programming languages and DSLs is coming out of the open source community right now. There has been some movement toward more inclusion in the commercial offerings: Sun has been adding support for dynamic languages with BSF and Coyote, and the .NET CLR has always supported multiple languages. However, I think that if there is an environment consolidation, it will be because the open source community comes to a consensus that one or two platforms are good enough for all their needs. I know both Parrot and Mono want to be this platform, but neither of them is there yet, nor are any of the commercial VMs. It will be interesting to see what happens.

{Update: Fixed the description of the private access modifier in Ruby. Thanks to obsolete rubyist for pointing out that I had gotten it wrong.}