mod_proxy_balancer Is A Two-Timing Hussy

The canonical way to deploy a Rails application is to use Apache with mod_proxy_balancer acting as a reverse proxy in front of a cluster of Mongrel processes running your application. It is easy to set up, debug, and monitor. As it turns out, the only problem with this setup is mod_proxy_balancer.

To see what I mean, let’s start with what Google suggests. The reverse proxying gets set up something like this:

<Proxy balancer://my-application>
  BalancerMember http://localhost:8000
  BalancerMember http://localhost:8001
  BalancerMember http://localhost:8002
</Proxy>

ProxyPass /my-application balancer://my-application

That works, particularly if the load is not very high and all the requests to the application take about the same amount of work. If, however, the application’s response times vary much, you will start seeing some odd artifacts. Specifically, you will start seeing requests that should be fast taking a long time to complete.

This is because mod_proxy_balancer seems to apply a rather simple-minded round-robin dispatch algorithm. That algorithm means a single Mongrel can end up with one or more low-cost requests queued up behind a very expensive request, even when there are other backend processes doing nothing at all. Because Rails is single threaded, each request sent to a particular Mongrel process has to wait until all previous requests to that process have completely finished before work can begin on it. Those small requests end up taking a very long time.

Imagine a series of requests arriving, one per second. The first request is expensive, taking five seconds to complete, while each subsequent request takes one second to complete.
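
To make the arithmetic concrete, here is a toy Ruby simulation of that scenario. The three backends and the round-robin dispatch come from the configuration above; the timing numbers are just the ones described.

backends = [0.0, 0.0, 0.0]                    # time at which each backend is next free
requests = [[0, 5], [1, 1], [2, 1], [3, 1]]   # [arrival time, seconds of work]

requests.each_with_index do |(arrival, cost), i|
  backend = i % backends.size                 # round-robin dispatch
  start   = [arrival, backends[backend]].max  # wait if that backend is still busy
  finish  = start + cost
  backends[backend] = finish
  puts "request #{i + 1}: arrived at #{arrival}s, finished at #{finish}s (#{finish - arrival}s for #{cost}s of work)"
end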

From that you can see that the final request in our scenario takes 3 seconds to complete even though it is only one second of work. Worse yet, there are two idle backends that could have completed it in one second had it been dispatched optimally.

Maybe Monogamy

A quick scan through the Apache docs reveals the max parameter for ProxyPass. Sweet, a nice simple solution. All that is needed is to tweak the Apache configuration to look like this:

<Proxy balancer://my-application>
  BalancerMember http://localhost:8000 max=1
  BalancerMember http://localhost:8001 max=1
  BalancerMember http://localhost:8002 max=1
</Proxy>

ProxyPass /my-application balancer://my-application

That says that Apache mod_proxy_balancer should make no more than 1 connection to each Rails backend at any given time. If a request comes in and all connections are busy, that request is queued in Apache until the next backend becomes available, which is exactly what we want. Any Rails process handling a long request will not have further requests dispatched to it until it has finished what it is currently working on.

(It is worth noting that this is not completely optimal resource usage, since the backends go idle for a moment between requests, but from a responsiveness point of view it is far better than the alternative.)

Broken Trust

Unfortunately, upon deploying the configuration above into a high-load environment it rapidly becomes obvious that it does not solve the problem. Nominally short requests continue to take excessively long to complete. Running netstat in such an environment will yield something like the following.

Proto Recv-Q Send-Q Local Address       Foreign Address         State
...      
tcp      741      0 127.0.0.1:8000      127.0.0.1:62322         ESTABLISHED 
tcp      741      0 127.0.0.1:8000      127.0.0.1:53214         ESTABLISHED 
tcp        0      0 127.0.0.1:8000      127.0.0.1:61024         ESTABLISHED 
...

There you can see that multiple connections are still being made to the same Rails backend (local port 8000).

Apache’s mod_proxy_balancer seems to have a race condition that allows it to establish more than the configured max number of connections to a single backend under high load. I suspect it goes something like this: multiple requests arrive at about the same time and are each dispatched to their own Apache worker. Those workers each look at their shared data and see that the next available backend is the same one, and then each of them, simultaneously, creates a connection to that backend. In low-load situations everything is fine, because multiple requests are unlikely to arrive at Apache at exactly the same time. In high-load situations, however, when simultaneous arrivals are practically guaranteed, you will end up with more than max connections to individual backends. (BTW, I have seen way more than three simultaneous connections to a single backend with max=1.)
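
As a toy illustration of that kind of check-then-act race, here are a few lines of Ruby. This is not Apache’s actual code, and the sleep is only there to make the window easy to hit; several workers consult a shared count before connecting.

max         = 1
connections = 0            # shared count of connections to one backend

workers = 5.times.map do
  Thread.new do
    if connections < max   # check: the backend looks free...
      sleep(rand * 0.01)   # ...another worker gets scheduled here...
      connections += 1     # ...act: everyone who passed the check connects
    end
  end
end
workers.each(&:join)

puts "connections to the backend: #{connections} (max was #{max})"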

Rebound

So now you know that you cannot trust Apache’s mod_proxy_balancer. You could use Pen instead. It is easy to configure, fast, and it works great. Oh, except for the fact that it does not queue requests if all the backends are busy. You could handle that by setting a max number of connections on the proxy in Apache, but we already know that we cannot trust mod_proxy_balancer.

It seems that HAProxy might be the best choice. I’ll let you know how it works out.

RSpec Emacs Mode

I just released a small Emacs minor mode, rspec-mode, that provides some convenience functions for dealing with RSpec.

So far this minor mode provides some enhancements to ruby-mode in the context of RSpec specifications. Namely, it provides the following capabilities:

  • toggle back and forth between a spec and its target (bound to \C-c so)

  • verify the spec file associated with the current buffer (bound to \C-c ,)

  • verify the spec defined in the current buffer if it is a spec file (bound to \C-c ,)

  • ability to disable the example at the point (bound to \C-c sd)

  • ability to reenable the disabled example at the point (bound to \C-c se)

Try it out (download and repo details are here) and let me know if you find any problems or make any improvements.

Fun With Public Keys

I just spent a long time diagnosing an RSA public key exchange problem. Google was of very little help, so hopefully this article will get picked up and save someone else the trouble in the future.

The problem is this: the RSA public key PEM or DER files generated by Ruby’s OpenSSL::PKey::RSA are unreadable by OpenSSL, Bouncy Castle, and probably other crypto tools. For example, if you try to load a public key PEM file generated by OpenSSL::PKey::RSA with the openssl command you get the following error:

$  openssl rsa -text -pubin < my_pub_key.pem
unable to load Public Key
16879:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:647:Expecting: PUBLIC KEY

and if you try to load a DER generated by OpenSSL::PKey::RSA you get this error:

$  openssl rsa -text -pubin -inform DER < my_pub_key.cer
unable to load Public Key
16880:error:0D0680A8:asn1 encoding routines:ASN1_CHECK_TLEN:wrong tag:tasn_dec.c:1294:
16880:error:0D07803A:asn1 encoding routines:ASN1_ITEM_EX_D2I:nested asn1 error:tasn_dec.c:380:Type=X509_ALGOR
16880:error:0D08303A:asn1 encoding routines:ASN1_TEMPLATE_NOEXP_D2I:nested asn1 error:tasn_dec.c:749:Field=algor, Type=X509_PUBKEY

The actual issue is that Ruby’s OpenSSL::PKey::RSA#to_pem and #to_der generate a PKCS#1 public key format, while the openssl rsa command only works with PKCS#8 formatted public keys. Both forms are grammars in the dreaded ASN.1 binary format, which completely obscures the differences between them. Both formats encode the exact same information, but the PKCS#8 style does it in a more complicated way.
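
You can see the difference without decoding any ASN.1: the PKCS#1 form is normally labelled RSA PUBLIC KEY in its PEM armour, while the form openssl rsa -pubin expects (the “Expecting: PUBLIC KEY” in the error above) is labelled just PUBLIC KEY. The base64 payloads are elided here.

-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----

-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----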

I have not found a good solution to this problem. There does not seem to be any way to make OpenSSL::PKey::RSA#to_pem generate anything other than a PKCS#1 style key. I suspect that the OpenSSL library is able to handle PKCS#1 public keys, but there does not seem to be any way to get the openssl command to do so. Similarly, it seems from Bouncy Castle’s API docs that you should be able to coerce it into accepting PKCS#1 public keys, but it does not do so by default.

Fortunately, OpenSSL::PKey::RSA#new works just fine if handed a PKCS#8 PEM. That has allowed me to work around this issue by using the openssl command to generate the PKCS#8 style PEM files (both private and public, using openssl genrsa and openssl rsa -pubout respectively) and then just reading/serving those files as needed. It is not great, but it does work.
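
For what it is worth, the workaround looks roughly like this in Ruby. The file names are placeholders, and the key files are assumed to have been generated ahead of time with the openssl commands mentioned above.

require 'openssl'

# Public key PEM produced beforehand with, e.g.:
#   openssl genrsa -out my_key.pem 1024
#   openssl rsa -in my_key.pem -pubout -out my_pub_key.pem
pub_key = OpenSSL::PKey::RSA.new(File.read('my_pub_key.pem'))

# Once loaded, the key object behaves as usual, e.g.
#   secret = pub_key.public_encrypt('some data')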

Colorado BrainJam Is This Friday

For any tech people in the Denver/Boulder area I wanted to point out that there is a BrainJam unconference happening this Friday (April 11th). It looks like a lot of very interesting people have signed up so far, and I will be attending, so I have high expectations. If you are able to join us, find me and say “hi”.

MountainWest RubyConf

I am going to be at MountainWest RubyConf tomorrow and Saturday. I am looking forward to meeting and hanging out with lots of interesting people in the Ruby community. Oh, and the schedule looks very interesting.