Vertical Slicing

I am a fan of polylithic architectures. Such architectures have many advantages for evolvability and maintainability. But when you decide to create a system composed of small pieces, how do you decide what functionality goes into which component?

Principles

The goal is to sub-divide the application into multiple highly cohesive components that are only weakly connascent with each other. To achieve the desired cohesion it will be necessary to align the component boundaries with natural fissure points in the application.

The strategy should allow for the production of an arbitrary number of components. A component that was of a manageable size yesterday could easily become too large tomorrow. In that situation the over-sized component will need to be sub-divided. Applying the same strategy repeatedly will result in a system that is more easily understood.

We want to minimize redundancy in the components. Redundancy results in more code which must be understood and maintained. More importantly, redundancy usually introduces connascence of algorithm, making changes more error prone and expensive. In a perfect world, any particular behavior would be implemented in exactly one component.

We want to isolate changes to the system. When implementing a new feature it is desirable to change as few components as possible. Each additional component that must be changed raises the complexity of the change. The componentization strategy should minimize the number of components involved in the average change to the system.

With those metrics in mind, let's explore the two most common approaches and see how they compare with each other. Those two patterns of componentization are horizontal slicing and vertical slicing.

Horizontal slicing

In this approach the component boundaries are derived from the implementation domain. The implementation is divided into a set of stacked layers such that each layer initiates communication only with the layers below it. This results in a standard layered architecture. By implementing each layer in a separate component you achieve the horizontal slicing. This style of componentization results in the very common n-tier architecture pattern.

For example, an application with a business logic layer and a presentation layer would be divided into two components: a business logic component and a presentation component.
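As a minimal sketch of that division (all class names here are invented for illustration), each layer becomes its own component and only calls downward:

# Persistence layer: the lowest tier, owns the data.
class OrderRepository
  ORDERS = { 1 => { :id => 1, :total => 42.0 } }

  def load(order_id)
    ORDERS[order_id]
  end
end

# Business logic layer: only calls down into the persistence layer.
class OrderService
  def find_order(order_id)
    OrderRepository.new.load(order_id)
  end
end

# Presentation layer: only calls down into the business logic layer.
class OrdersController
  def show(order_id)
    order = OrderService.new.find_order(order_id)
    "Order ##{order[:id]}: $#{order[:total]}"
  end
end

puts OrdersController.new.show(1)   # => Order #1: $42.0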

Vertical slicing

In this approach the component boundaries are derived from the application domain. Related domain concepts are grouped together into components. Individual components communicate with any other components as needed.

This approach is also quite common, but it is usually thought of much less formally. It is more common for this type of segmentation to develop incidentally, for example because separate teams developed the parts independently and then integrated them later. Any time you integrate separate applications you have vertical componentization.
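A comparable sketch of vertical slicing, again with invented names, groups the code by domain concept instead; each component owns everything about its concept and talks to the others only through their public interfaces:

# Orders component: presentation, rules, and persistence for orders all live here.
module Orders
  def self.place(customer_id, items)
    { :customer => customer_id, :items => items, :status => :placed }
  end
end

# Billing component: everything about billing, talking to Orders only
# through its public interface.
module Billing
  def self.invoice(order)
    { :order => order[:customer], :amount => order[:items].length * 10 }
  end
end

order = Orders.place(7, [:book, :mug])
puts Billing.invoice(order).inspect   # => {:order=>7, :amount=>20}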

The Score

Against the metrics we laid out earlier, vertical slicing does much better than horizontal.

                   Horizontal slicing   Vertical slicing
Cohesion           high                 high
Repeatability      low                  high
DRYness            low                  high
Change isolation   low                  high

Cohesion

Horizontal slicing has high cohesion. Each of the components represents a logically cohesive part of the implementation.

Vertical slicing also has high cohesion. Each component represents a highly cohesive part of the application domain.

Repeatability

Vertical slicing provides a mechanism for reapplying the subdivision pattern an arbitrary number of times. If any component gets too large to manage it can be divided into multiple components based on the application domain concepts. This same process can be repeated, from the initial division of a monolithic application onwards, until components of the desired size have been achieved.

Horizontal slicing is less repeatable. The more tiers there are, the harder it is to maintain cohesiveness. In practice it is very rare to see a tiered architecture with more than 4 tiers, and 3 tiers is much more common.

DRYness

Horizontal slicing tends to result in some repetition. Certain behaviors will have to be repeated at each layer. For example, data validation rules: you will need those in the presentation layer to provide good error messages and in the business logic layer to prevent bad data from being persisted.
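A hedged sketch of that duplication, using a made-up email rule; the same check appears once per layer, and the two copies must now be kept in sync:

# Presentation layer: validate so we can show a friendly error message.
def email_error_message(email)
  "That does not look like an email address" unless email =~ /\A\S+@\S+\z/
end

# Business logic layer: the same rule again, so bad data is never persisted.
def save_subscriber(email)
  raise ArgumentError, "invalid email" unless email =~ /\A\S+@\S+\z/
  # ... persist the subscriber ...
end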

Vertical slicing allows you to reduce the connascence of algorithm because any single user activity is implemented in exactly one component. Components usually do end up communicating with each other; however, they do so in a way that does not require the same algorithms to be implemented in multiple components. For any one bit of data or behavior, one component will be its authoritative source.

Change isolation

Vertical slicing tends to allow new features to be implemented by changing only one component. The component changed is the one which already contains features cohesive with the new one.

Horizontal slicing, on the other hand, tends to require changes in every layer. The new feature will require additions to the presentation layer, the business logic layer and the persistence layer. Having to work in every layer increases the cognitive load required to achieve the desired result.

Conclusion

Vertical slicing provides significant advantages. The high cohesion, DRYness, and change isolation combine to drastically reduce the risks and cost of change. That in turn allows better and faster maintenance and evolution of the system. The repeatability allows you to retain these benefits even while adding functionality over time. Each time a component gets too large you can divide it until you reach a component size that is human scaled.

Having a large number of components operate as a system does result in a good deal of communication between the components. It is important to pay attention to the design of the APIs. Poor API design can introduce excessive coupling which will eat up most of the advantages described above. Hypermedia – or more precisely, following the REST architectural style – is the best way I know to reduce coupling between the components.
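As a rough illustration of the idea (the field names below are illustrative, not any particular media type), a hypermedia response carries the links a client needs next, so callers never hard-code another component's URL structure:

require 'json'

# A response that tells the client where to go next, so the calling
# component never hard-codes another component's URL structure.
order = {
  "id"     => 42,
  "status" => "placed",
  "_links" => {
    "self"    => { "href" => "/orders/42" },
    "cancel"  => { "href" => "/orders/42/cancellation" },
    "payment" => { "href" => "/payments?order=42" }
  }
}

puts JSON.pretty_generate(order)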

Sentence of the day

Anyhow, I’d just conclude by asserting that my new Emacs/Gnus/Org/ERC setup beats my old vim/mutt/nothing/irssi to the death with a baseball bat. :-)

Julien Danjou

Is ruby immature?

A friend of mine recently described why he feels ruby is immature. I, of course, disagree with him. There is much in ruby that could be improved, but the issues he raised are a) intentional design choices or b) weaknesses in specific applications built in ruby. Neither of those scenarios can be fairly described as immaturity in the language, or the community using the language.

Set

Mr. Jones’ main example is one regarding the Set class in ruby. In practice Set is a rarely used class in ruby. I suspect it exists primarily for historical and completeness reasons. It is rather rare to see idiomatic ruby that utilizes Set.1

This is possible because Array provides a rather complete implementation of the basic set operations. Rubyists are very accustomed to using arrays, so it is more common to just use the set operators on arrays than to convert an array into a Set.
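For reference, the set operators on Array cover the common cases directly:

a = [1, 2, 3, 4]
b = [3, 4, 5]

a & b   # => [3, 4]             intersection
a | b   # => [1, 2, 3, 4, 5]    union, duplicates removed
a - b   # => [1, 2]             difference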

The set operations on Array do not have the same performance characteristics Mr. Jones found with Set. For example,

$ time ruby -rpp -e 'pp (1..10_000_000).to_a & (1..10).to_a'
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

real	0m10.152s
user	0m6.592s
sys	0m3.515s

$ time ruby -rpp -e 'pp (1..10).to_a & (1..10_000_000).to_a'
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

real	0m12.410s
user	0m8.397s
sys	0m3.860s

Order still matters, but very much less. (That is on 1.8.6, the only version I have handy at the moment. I am sure that 1.9, or even 1.8.7, would be quite a bit faster.)

Libraries that are low traffic areas don’t get the effort that high use libraries do, in any language. Even though Set is part of the standard library, it definitely counts as a low traffic area. Hence, it has never been optimized for large numbers of items. This is appropriate because, as we learned from Rob Pike, “n is usually small”. The benefit of handling large sets performantly is not worth the additional complexity for a low traffic library.

nil

In his other example Mr. Jones implies that the fact that nil is a real object is disadvantageous. On this count he is simply incorrect. Having nil be an object allows significant reductions in the number of special cases that must exist. This reduction in special cases often results in less code, but it always results in less cognitive load.

Consider #try in ruby. While not my favorite implementation of this concept, it is still a powerful idiom for removing clutter from the code.

#try executes the specified method on the receiver, unless the receiver is nil. When the receiver is nil it does nothing. This allows code to use a best effort approach to performing non-critical operations. For example2,

def remove_email(email)
  emails.find_by_email(email).try(:destroy)
end

This is implemented as follows:

module Kernel
  # Forward the message to the receiver just like a normal call.
  def try(method, *args, &block)
    send(method, *args, &block)
  end
end

class NilClass
  # nil swallows the message instead of raising NoMethodError.
  def try(*args)
    # do nothing
  end
end

You could implement something like #try in a system that has a non-object “no value” mechanism. It would be less elegant and less clear, though. (It would probably be less performant too, because method calls tend to be optimized rather aggressively.) Having nil be an object like everything else is one less primitive concept that the code and the programmer must keep in mind.
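For contrast, here is roughly what the earlier example looks like without a nil-aware #try; the explicit guard is exactly the sort of special case that goes away when nil is a full object:

def remove_email(email)
  record = emails.find_by_email(email)
  record.destroy unless record.nil?
end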

Mr. Jones does bring up the issue of nil.id returning 4 and that value being used as a foreign key in the database. This is not a problem I see very often, but it can happen.

This is definitely not a problem with ruby. Rather, it results from an unfortunate choice of naming convention in rails. Rails uses id as the name of the primary key column for database tables. This results in an #id method being created, which overrides the #id provided by ruby itself for all objects. If rails had chosen to call the primary key column something that did not conflict with an existing ruby core method – say pk – we would not be having this discussion.
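To make the failure mode concrete, on the MRI 1.8 series used above (the exact value is an implementation detail, not something to rely on):

nil.object_id   # => 4 on MRI 1.8
nil.id          # => 4, after a warning that Object#id is deprecated in favor of #object_id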

In general

Mr. Jones asserts that “ruby is rife with happy path coding”. I disagree with his characterization. The ruby community has a strong bias towards producing working, if incomplete, code and iterating on that code to improve it. This “simplest thing that could work” approach does result in the occasional misstep and suboptimal implementation. In return you get to use a lot of new stuff more quickly, and when there are problems they are easier to fix because the code is simpler.

The ruby community has strongly embraced the small pieces, loosely joined approach. This is only accelerating the innovation in ruby. Gems have lowered the friction of distributing and installing components to previously unimaginable levels. This has allowed many libraries that would have been too small to be worth releasing in the past to come into existence.

Rack, with its middleware concept, is an example of the ruby community taking much of the Unix philosophy and turning it up to 11. While rails has much historic baggage, even it is moving to a much more modular architecture with the upcoming 3.0 release.
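As a hedged sketch of how small a Rack piece can be (the header name is just an example), a middleware simply wraps an app and passes the request along:

# A minimal Rack middleware: it wraps any Rack app and stamps the response
# with how long the downstream call took.
class Timer
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    status, headers, body = @app.call(env)
    headers["X-Elapsed"] = (Time.now - started).to_s
    [status, headers, body]
  end
end

# In config.ru it composes like any other middleware:
#   use Timer
#   run MyApp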

Following these principles does result in some rough edges occasionally, but the benefits are worth the trade. The 80% solution is how Unix succeeded. An 80% solution today is better than a 100% solution 3 months from now. (As long as you can improve it when needed.) We always have releases to get to, after all.


  1. I, on the other hand, do use Set rather more than the average rubyist. Set is a rather performant way of producing collections without duplicate entries.

  2. Shamelessly copied from Chris Wanstrath.

“life elevated”

That is Utah’s slogan, apparently, and Utah is where we are today. We spent the last few days at the Grand Canyon and Lake Powell. Both are awe-inspiringly beautiful. So much so that I will skip posting the completely inadequate pictures my phone captured.

Elliot and Audrey are keeping travel journals. So far Elliot has ended every entry with, “it was big.” The Grand Canyon definitely fits that description.

I recommend the fossil walk, guided by a ranger, at the Grand Canyon. It is really cool to find fossils for yourself. Audrey particularly enjoyed finding and keeping count of the fossils. Perhaps she really will grow up to be a paleontologist. (She is fond of claiming that as a future occupation.)

For all its grandeur, I am pretty sure the kids enjoyed swimming in Lake Powell far more. I understand that reaction. It is hard to beat cool water and a sandy beach in the heat of the desert.

Petrified forest

We visited Petrified Forest National Park today. We started in the Painted Desert area of the park. What a desolate, beautiful landscape.

After that we moved on to the petrified wood portion of the day. That stuff is just cool. It is amazing how wood-like the permineralized type is. The fully petrified type is really pretty.

The kids got their first Junior Ranger badges. At each national park kids can do some activities in the park and earn a badge and a patch for that park. It is a great way to keep the kids engaged.

I recommend “Here Comes Science” by They Might Be Giants for your next road trip. It is excellent driving music. Oh, and the kids like it too.
