The Perils of Duck Typing?

In The Perils of Duck Typing Cedric writes about some fears he has related to duck typing.

He says

Duck Typing is a big time saver when you write code, but is it worth it? Don’t you pay this ease of development much later in the development cycle? Isn’t there a risk that you might be shipping code that is broken?

The answer is obviously yes.

The proponents of Duck Typing are usually quick to point out that it should never happen if you write your tests correctly. This is a fair point, but we all know how hard it is to guarantee that your tests cover 100% of the functional aspects of your application.

There is certainly a risk that you will ship broken code. In fact, you will almost certainly ship broken code. But you will ship broken code regardless of the typing model you use. Static typing is no solution to the problem of defects. But the fact that you saved a lot of time by using duck typing during development means that a) you can spend a little more time on testing and thereby reduce the number of defects you ship, b) get to market earlier, or c) both. The fact of the matter is that type-related errors do not happen often enough in practice to make them worth worrying about (when is the last time you got a ClassCastException while working with Java collections?).

Cedric goes on to describe the use of interfaces as documentation (using interfaces to document what methods must exist for a piece of code to work) while implying that duck typing prevents this. Interfaces as documentation is a nice use of interfaces, but duck typing does not preclude it. Smalltalk has SmallInterfaces. In Ruby, MixIns are commonly used to define the set of methods that are required. But both of these environments are duck typed. In Ruby, for example, if I create a MixIn to define an interface, you can “implement” my interface merely by implementing the appropriate methods, regardless of whether you include my MixIn or not. Interfaces as documentation should be treated just like all other documentation — when it is helpful, use it; when it is not, ignore it.

To be fair, Cedric likes Ruby because you can use MixIns to define interfaces, but I think he has conflated two completely separate issues. Duck typing does not preclude well-documented interfaces. You can poorly design and document an interface in a statically typed language just as easily as you can in a duck typed one. You should take care to reasonably document the interfaces you use, regardless of the type system.
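A small sketch of that point (the module and class names here are mine, purely for illustration):

```ruby
# A MixIn documenting the "interface": anything walkable should
# respond to #walk.
module Walkable
  def walk
    raise NotImplementedError, "#{self.class} must implement #walk"
  end
end

# Duck declares its intent by including the MixIn...
class Duck
  include Walkable
  def walk
    "waddle"
  end
end

# ...while Robot "implements" the interface without ever mentioning it.
class Robot
  def walk
    "clank"
  end
end

# Code written against the interface works with both objects, because
# all duck typing cares about is whether the object responds to #walk.
[Duck.new, Robot.new].each { |w| puts w.walk }
```

The MixIn documents the contract for human readers, but nothing forces Robot to include it; the interface-as-documentation and the type check are entirely decoupled.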

I’m a Completionist Organic Piratic Elf, How About You?

I have noticed a lot of ways to categorize people lately. There is Clay Shirky’s One World, Two Maps, where he talks about the differences between incrementalists and completionists. (Those names come from this essay that covers the same thing: Incrementalists and Completionists[via].) Shirky’s essay pointed to From Pirate Dwarves To Ninja Elves, which provides another couple of dimensions on which to classify people. Finally, there is Organics and Mechanics[via].

There is value in simply naming a concept — if you doubt it, just look at all the buzz around Ajax since it got a name. I think this is because a name lowers the bandwidth needed to track the concept. Having these names makes classifying the people I know a lot easier. For me, that turns out to be fairly helpful because it allows me to more easily tailor my interactions with them. Even if you do not find it helpful, it is still an interesting set of articles to read, because it gives you a little insight into how others might be categorizing you.


Jon Udell on DSLs. I think he has it right. DSLs are the way of the future. You only need to look at the proliferation of XML flavors used in the Java world to see that basically everyone has decided that using a DSL is better than writing Java code. I think this is especially telling in that XML is a horrible programming language, and yet using XML is still easier/better than hand coding the same thing in the general-purpose language. Just imagine what could happen if these languages were designed to be easy to understand.

One of the reasons I really like Ruby is that it is easy to implement stuff that looks and feels like syntax but is really just normal code. This allows you to extend the language in ways that make your code more obvious. Just take the member access decorators as an example. If you want to make a method private you do the following:

private

def my_method
  # do something
end

In most languages “private” is a keyword that the compiler understands, but in Ruby it is just a class method that says “future methods defined on this class are private until otherwise specified”. The power of this is pretty amazing when it is used correctly.
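Here is a minimal sketch of that behavior in action (the Greeter class is hypothetical):

```ruby
class Greeter
  def hello
    "hi " + secret   # private methods are callable from inside the class
  end

  private   # an ordinary method call, not a keyword

  def secret   # defined after the call to private, so it is private
    "there"
  end
end

g = Greeter.new
puts g.hello   # fine: hello is public

begin
  g.secret     # calling a private method from outside fails
rescue NoMethodError => e
  puts "blocked: #{e.class}"
end
```

Because `private` is just a method, you could write your own decorators in the same style, which is exactly the kind of syntax-like extension the paragraph above describes.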

Another example is Rake (I stole these examples from Jim Weirich’s Rake tutorial). Rake is yet another build system, like Make and Ant, but it is implemented as a set of extensions to Ruby, so that your build script is straight Ruby while the common operations, like task dependencies, are expressed succinctly and in a way that is easy to understand. For example:

file 'main.o' => ["main.c", "greet.h"] do
  sh "cc -c -o main.o main.c"
end

‘file’ defines a task that creates a file. File tasks know things like: if my file does not exist, or any of the files on which I am dependent are newer than my file, I need to execute; otherwise I am a no-op. Just think how much less obvious it would be if you were to write that out in a general-purpose language — or in an XML dialect. And even better, you have a full-strength language at your disposal if you need to do something that the build system’s developers didn’t anticipate, which you will.
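That rebuild rule can be sketched in a few lines of plain Ruby (`needs_rebuild?` is my name for it; Rake’s real implementation is more involved):

```ruby
require 'tmpdir'

# Sketch of the file-task rule: rebuild when the target is missing
# or any prerequisite has a newer modification time than the target.
def needs_rebuild?(target, prereqs)
  return true unless File.exist?(target)
  prereqs.any? { |p| File.mtime(p) > File.mtime(target) }
end

Dir.mktmpdir do |dir|
  src = File.join(dir, "main.c")
  obj = File.join(dir, "main.o")
  File.write(src, "int main() { return 0; }")

  puts needs_rebuild?(obj, [src])  # true: main.o does not exist yet
  File.write(obj, "")              # pretend the compiler ran
  puts needs_rebuild?(obj, [src])  # false: main.o is newer than main.c
end
```

The point is not the dozen lines themselves but that Rake packages this logic behind the `file` declaration, so your Rakefile only states dependencies.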

Back to Jon’s article. I am not sure if there will be a consolidation of environments, in the near future at least. It would be really nice if this were true, but there are problems with all the obvious contenders. Most of the open source community will not accept a VM that is not fork-able (by which I mean that they cannot fork the code base if they do not like the direction it is moving). The Sun JVM is proprietary and the only open source JVMs are incomplete. The .NET CLR is an option but everyone is afraid of MS. The CLR is probably safe because of its status as an ECMA standard, but there does not seem to be much movement to port existing languages to the CLR, even though Mono claims to be in pretty good shape these days. Then there is Parrot — the Perl 6 VM — but it is not complete yet, and it is not clear when it will be ready or how well it will support non-Perl languages.

You may have noticed the above is mostly about what the open source community will accept. I think most of the innovation in programming languages and DSLs is coming out of the open source community right now. There has been some movement toward more inclusion in the commercial offerings. Sun has been adding support for dynamic languages with BSF and Coyote, and the .NET CLR has always supported multiple languages. However, I think that if there is an environment consolidation it will be because the open source community comes to a consensus that there are one or two platforms that are good enough for all their needs. I know both Parrot and Mono want to be this platform, but neither of them is there yet, nor are any of the commercial VMs. It will be interesting to see what happens.

{Update: Fixed the description of the private access modifier in Ruby. Thanks to obsolete rubyist for pointing out that I had gotten it wrong.}

Continuations (or How my Head Exploded)

Occasionally I find a new idea and wonder how I lived so long without encountering it before. Continuations are one of those ideas. I have been hearing about them for a few months, but only recently have I started to understand them. They have been around for a long time and can solve problems that are difficult or impossible to solve otherwise. Of course, continuations are not supported in the in-vogue languages, so that is probably how I missed them.

Anyway, here are some links to a couple of tutorials (thanks Charlie): a gentle introduction and a not-so-gentle description of external iterators, called generators, in Ruby. If those articles do not humble you a little, I am impressed.
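For a taste of why they made my head explode, here is about the smallest callcc demonstration I can come up with (in Rubies since 1.9 you have to `require 'continuation'` first; it was built in before that):

```ruby
require 'continuation'  # callcc moved to the standard library in Ruby 1.9

# callcc hands us a Continuation object: calling it later jumps
# execution back to the point where callcc returned.
count = 0
cont = callcc { |c| c }  # first time through, returns the continuation
count += 1               # this line re-runs every time cont is called
cont.call(cont) if count < 3
puts count               # prints 3: the lines after callcc ran three times
```

The loop here has no loop keyword at all; the continuation keeps throwing execution back to the middle of the program, which is the trick generators and coroutines are built on.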

XP as Over-Reaction (Redux)

In an earlier entry I said, “XP is an over-reaction to waterfall development methodologies.” I was wrong. I still think XP is an over-reaction, but I think it is reacting to heavy-weight methodologies rather than waterfall methodologies.

The primary difference between XP and other methodologies is that XP urges you not to do a lot of things you have been told to do in the past, because “you are not going to need it”, such as design documents and anticipated features. This mentality definitely has some benefit — it is very easy to over-engineer software — but I think that most implementations of XP take it too far. It is easy to just decide not to do anything you do not need at this very moment. I think this sets you up for trouble in the future.

For example, I have been told that it is uncommon for Java code implemented using XP to have javadoc comments. This follows from the basic principles of XP, if you apply them with little thought. You do not need the comments when you are writing the method — you just wrote the test and you know what the method is supposed to do — so writing a comment is a waste of time. However, I think that method and class comments are invaluable; they allow for much easier maintenance and refactoring in the future.

I suspect that most of my issues with XP come from poor implementations of the process. On the other hand, it almost does not matter. XP is intended to produce better software more reliably, and if it is difficult to use correctly then it is unlikely to achieve that goal.

April 1st

Brian (get a blog so I can link to you) pointed this out to me. I had to put it up since I’ve been bitching and moaning about XP for a while now. This is also cute.