mod_proxy_balancer Is A Two Timing Hussy

The canonical way to deploy a Rails application is to use Apache with mod_proxy_balancer acting as a reverse proxy in front of a cluster of Mongrel processes running your application. It is easy to set up, debug, and monitor. As it turns out, the only problem with this setup is mod_proxy_balancer.

To see what I mean, let's start with what Google suggests. The reverse proxying gets set up something like this:

<Proxy balancer://my-application>
  BalancerMember http://localhost:8000
  BalancerMember http://localhost:8001
  BalancerMember http://localhost:8002
</Proxy>

ProxyPass /my-application balancer://my-application

That works, particularly if the load is not very high and all requests to the application take about the same amount of work. If, however, the application's response times vary much, you will start seeing odd artifacts. Specifically, requests that should be fast will take a long time to complete.

This is because mod_proxy_balancer seems to apply a rather simple-minded round-robin dispatch algorithm. That algorithm means a single Mongrel can end up with one or more low-cost requests queued behind a very expensive request, even when there are other backend processes doing nothing at all. Because Rails is single-threaded, each request sent to a particular Mongrel process has to wait until all previous requests to that process have completely finished before work can begin on it. Those small requests end up taking a very long time.

Imagine a series of requests arriving, one per second. The first request is expensive, taking five seconds to complete; each subsequent request takes one second. Under round-robin dispatch across three backends, the timeline looks like this:

time       0     1     2     3     4     5     6
backend 1  |------ request 1 ------|-- 4 -|
backend 2        |- 2 -|
backend 3              |- 3 -|

From that you can see that the final request will take 3 seconds to complete even though it is only one second of work: it is dispatched, round robin, to backend 1 at time 3 and must wait until time 5 before work on it can even begin. Worse yet, there are two idle backends that could have completed it in one second had it been dispatched optimally.
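To make the arithmetic concrete, here is a small Ruby sketch that replays the scenario under strict round-robin dispatch. This is my own illustration, not mod_proxy_balancer's actual code; the function name and numbers are assumptions for the example.

```ruby
# Simulate strict round-robin dispatch across single-threaded backends.
# Each backend finishes its queued work before starting the next request.
def round_robin_latencies(arrivals, costs, backends)
  free_at = Array.new(backends, 0.0)          # when each backend next goes idle
  arrivals.each_with_index.map do |arrival, i|
    b = i % backends                          # round robin ignores backend load
    start = [arrival, free_at[b]].max         # wait if that backend is busy
    free_at[b] = start + costs[i]
    free_at[b] - arrival                      # time from arrival to completion
  end
end

# Requests arrive one per second; the first costs 5 seconds, the rest 1 second.
p round_robin_latencies([0, 1, 2, 3], [5, 1, 1, 1], 3)  # => [5.0, 1.0, 1.0, 3.0]
```

The final latency of 3.0 seconds is mostly queueing: one second of work plus two seconds waiting behind the expensive request on the same backend.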

Maybe Monogamy

A quick scan through the Apache docs reveals the max parameter for ProxyPass. Sweet, a nice simple solution. All that is needed is to tweak the Apache configuration to look like this:

<Proxy balancer://my-application>
  BalancerMember http://localhost:8000 max=1
  BalancerMember http://localhost:8001 max=1
  BalancerMember http://localhost:8002 max=1
</Proxy>

ProxyPass /my-application balancer://my-application

That says that mod_proxy_balancer should make no more than one connection to each Rails backend at any given time. If a request comes in and all connections are busy, that request will be queued in Apache until the next backend becomes available, which is exactly what we want. Any Rails process handling a long request will not have further requests dispatched to it until it has finished what it is currently working on.

(It is worth noting that this is not completely optimal resource usage – the backends go idle for a moment between each request – but from a responsiveness point of view it is far better than the alternative.)
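For comparison, the intended max=1 behaviour – one in-flight request per backend, with excess requests waiting in a shared queue until some backend goes idle – can be sketched like this. Again, this is my own illustration; the earliest-idle policy is a stand-in for whatever Apache would actually do, not its real implementation.

```ruby
# Simulate one-request-per-backend dispatch with a shared wait queue:
# each request goes to whichever backend will be idle soonest.
def queued_latencies(arrivals, costs, backends)
  free_at = Array.new(backends, 0.0)          # when each backend next goes idle
  arrivals.each_with_index.map do |arrival, i|
    b = free_at.index(free_at.min)            # earliest-idle backend
    start = [arrival, free_at[b]].max         # queue until it is free
    free_at[b] = start + costs[i]
    free_at[b] - arrival                      # time from arrival to completion
  end
end

# Same scenario as before: one arrival per second, first request costs 5s.
p queued_latencies([0, 1, 2, 3], [5, 1, 1, 1], 3)  # => [5.0, 1.0, 1.0, 1.0]
```

Only the expensive request itself is slow; every one-second request completes in one second.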

Broken Trust

Unfortunately, upon deploying the configuration above into a high-load environment, it rapidly becomes obvious that it does not solve the problem. Nominally short requests continue to take excessive amounts of time. Running netstat in such an environment yields something like the following:

Proto Recv-Q Send-Q Local Address       Foreign Address         State
tcp      741      0                                             ESTABLISHED
tcp      741      0                                             ESTABLISHED
tcp        0      0                                             ESTABLISHED

There you can see that there are still multiple connections being made to the same Rails backend.

Apache’s mod_proxy_balancer seems to have a race condition that allows it to establish more than the configured max number of connections to a single backend under high load. I suspect it goes something like this: multiple requests arrive at about the same time and are each dispatched to their own Apache worker. Those workers each look at their shared data and see that the next available backend is the same one. Then each of those workers, simultaneously, creates a connection to that same backend. In low-load situations everything is fine, because multiple requests are unlikely to arrive at Apache at exactly the same time. In high-load situations, however, when simultaneous arrivals are practically guaranteed, you end up with more than max connections to individual backends. (For what it is worth, I have seen far more than three simultaneous connections to a single backend with max=1.)
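The suspected race is an ordinary check-then-act problem. This Ruby sketch is a toy model, not Apache's code: several workers each pass the max=1 check before any of them has recorded its connection, so they all connect.

```ruby
# Toy model of the suspected race: the check against max and the connect
# are two separate steps, with nothing making them atomic.
MAX_CONNECTIONS = 1
connections = []

workers = 3.times.map do
  Thread.new do
    if connections.size < MAX_CONNECTIONS   # every worker sees an idle backend
      sleep 0.05                            # the gap between check and connect
      connections << :connected             # ...so every worker connects
    end
  end
end
workers.each(&:join)

puts connections.size   # typically 3, despite max=1
```

The fix would be to make the check and the connect a single atomic step, for example by holding a lock across both – which is evidently not what happens here.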


So now you know that you cannot trust Apache’s mod_proxy_balancer. You could use Pen instead. It is easy to configure, fast, and it works great. Oh, except that it does not queue requests when all the backends are busy. You could handle that by setting a max connections on the proxy in Apache, but we already know that we cannot trust mod_proxy_balancer.

It seems that HAProxy might be the best choice. I’ll let you know how it works out.

7 thoughts on “mod_proxy_balancer Is A Two Timing Hussy”

  1. Glenn,

    That does not appear to be exactly what I want. I don’t want requests dispatched to the least busy backend. I want them dispatched to a not-busy backend. If there is no idle backend available, I want them queued until there is one.

    Consider the following scenario:

    time       1     2     3     4     5
      1        |-----------------| (a)
                         |........-----| (c)
      2              |-----| (b)

    A fair balancer without a connection limit might (correctly) send request c to backend 1. Because all the backends are equally “busy”, it is not wrong to send that request to any backend. However, as a human you can see that it ends up having quite suboptimal results in this scenario.

    Basically, any balancer algorithm with the ability to set a max connections limit on each backend (that actually worked) would solve my problem. Once there is a balancer for nginx that has that, I am there.

  2. Peter,
    I too have seen this same issue with mod_proxy_balancer and some long-running queries. It happens exactly how you described it, too … connection 1 starts the long-running query, connections 2 and 3 finish … and 4 is stuck waiting for 1 to finish. Unfortunately, at this point we have just increased the number of Mongrels and tuned the long-running queries. That seems to have helped for now – but it is definitely not an optimal solution. I would be interested to see how HAProxy works out for you.

    Keep me posted.

  3. Hi Peter,

    I fought with this a few months ago, and I concluded that… this “max” parameter…. how do you say… “I do not think it means what you think it means.” :)

    The mod_proxy documentation could certainly be clearer, but my reading is that the setting is always per Apache child process. So, in a typical prefork setup with, oh I don’t know, 10 or 15 apache processes, you really end up with a max of 10 or 15 connections per Mongrel. (And when you get busy and Apache spins up more children, you get even more connections per Mongrel.) Basically you still end up with the “dumb” round-robin allocation. Not what you or I intended.

    With the worker MPM, you can lock down ServerLimit so that everything is in lots of threads within one process, but that seems risky and probably not great at high load. But with worker, you at least have some control.

    Here’s a mailing list post touching on this:

    However, refusing to believe that Apache is completely incapable of being a proper Rails frontend (as you said, nothing “fancy” is needed, just a dispatcher that finds or waits for an idle backend), I persisted in my googling.

    I ended up finding a reference to the “acquire” parameter… probably this message:

    The Apache docs for “acquire” (a parameter to ProxyPass, like “max”) make it sound pretty drastic. True, if all backends are busy for the duration of the acquire timeout, it’ll return a 503 Server Busy, but if any backends are idle it appears to make Apache hunt for one (after the timeout) instead of just picking a possibly busy one and waiting.

    For us, “acquire=1” has worked out pretty well. We also have “max=1” but because of the above I don’t think it’s actually doing anything. Just make sure you have enough Mongrels. (Easy to say of course. :))

    HAproxy is still probably a better bet. Rather pathetic that Apache is too “dumb/smart” to front for Mongrel/Rails isn’t it? Maybe someday someone will hack on Apache to make it behave properly in front of Rails. Or maybe we’ll all end up using mod_rails or whatever they’re calling it these days. :)

  4. Ben,

    Acquire does seem rather drastic, but I can see how it might result in the desired behavior.

    As for mod_proxy’s documentation, it is not as clear as I would like, but I would not describe the verbiage around max as unclear. Specifically, it says, “Apache will never create more than the Hard Maximum connections to the backend server.” That could easily be wrong, but it is hardly unclear.

  5. Hello buddies,

    my problem is somewhat different from the discussion, but it is related to Apache behind a proxy.
    The app's virtual hosts are redirected to *.

    My application uses a subdomain for each account, designed pretty well through the Rails code.
    The problem is that one of the proxy instances hits 100% CPU usage, which results in proxy errors.

    I am hoping to get clues from the mod_proxy settings
    so that all the backend ports are happily fed from Apache.

    I seen the application browse with port [ ] if i click on tab having sub.productfamily [after click on it will open ] but its not at port level

    (on simple case any whole app is browsed on port we selected )

