Strong opinions, weakly held

Setting Apache MaxClients for small servers

Every once in a while, the server this blog runs on chews up all of its RAM and swap space and becomes unresponsive, forcing a hard reboot. The problem is always the same: too many Apache workers running at the same time. When it happened this morning, well over 50 Apache workers were running, each consuming about 15 megs of RAM. The server (a virtual machine provided by Linode) has 512 megs of RAM, so Apache alone was consuming all of the VM's memory.

At first I decided to attack the problem through monitoring. I had Monit running on the VM, but it wasn't actually monitoring anything. I figured I'd just have it watch Apache and restart it whenever it starts consuming too many resources. I did set that up, but it left me wondering how Apache was able to get itself into such a state in the first place.
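For reference, a Monit stanza along these lines will do the restart automatically. This is a sketch assuming Ubuntu's apache2 layout; the pidfile path, init script paths, and the 400 MB threshold are assumptions you'd adjust for your own box:

```
check process apache2 with pidfile /var/run/apache2.pid
  start program = "/etc/init.d/apache2 start"
  stop program  = "/etc/init.d/apache2 stop"
  # totalmem counts the parent process plus all of its worker children
  if totalmem > 400 MB for 2 cycles then restart
```

The "for 2 cycles" clause keeps a single momentary spike from triggering a restart.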

The problem was that Apache was configured very poorly for my VM. Because I run PHP apps via the PHP module, Apache uses the prefork MPM. For more information on Apache's Multi-Processing Modules, check out the docs. Basically, prefork doesn't use threads, so you don't have to make sure your applications and libraries are thread-safe.

Anyway, here are the default settings for Apache in Ubuntu when it comes to resource limits:

StartServers          5
MinSpareServers       5
MaxSpareServers      10
MaxClients          150
MaxRequestsPerChild   0

In prefork mode, Apache handles one incoming request per process. So in this case, when Apache starts, it launches five worker processes. It also tries to keep at least five spare servers idle for incoming demand, and if more than ten servers are idle, it shuts processes down until no more than ten remain idle. Finally, MaxClients is the hard limit on the number of workers Apache is allowed to start. So on my little VM, Apache feels free to start up to 150 workers at 15 megs of RAM apiece, using up to 2.25 gigabytes of RAM, which is more than enough to consume all of the machine's RAM and swap space.
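The arithmetic is easy to check. Here's a back-of-the-envelope sketch; the per-worker size comes from the post, but the RAM budget is a hypothetical number you'd replace after measuring your own workers (something like `ps -o rss= -C apache2` will show their resident sizes):

```shell
#!/bin/sh
# Illustrative numbers, not measurements from your machine.
PER_WORKER_MB=15        # approximate resident size of one prefork worker
DEFAULT_MAXCLIENTS=150  # Ubuntu's default MaxClients
RAM_BUDGET_MB=200       # hypothetical: RAM you can spare for Apache

# Worst case with the defaults: 150 workers x 15 MB each
echo "Worst case: $((DEFAULT_MAXCLIENTS * PER_WORKER_MB)) MB"

# A safer MaxClients: budget divided by per-worker size (integer division)
echo "MaxClients: $((RAM_BUDGET_MB / PER_WORKER_MB))"
```

With those numbers the worst case is 2250 MB, and a 200 MB budget supports a MaxClients of 13.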

This number is far, far, far too high for my machine. I had tuned this down once before, but when I migrated from Slicehost to Linode some time ago, I forgot to carry the Apache settings over. This time I set MaxClients to a relatively conservative 8. I'm still tweaking the other settings, but for a server that's dedicated to Web hosting, you may as well set StartServers equal to MaxClients so that Apache never has to bother spinning up new server processes to meet increasing demand.

Currently my configuration looks like this:

StartServers          8
MinSpareServers       1
MaxSpareServers       8
MaxClients            8
MaxRequestsPerChild   0

The only danger with this low setting is that if there are more than 8 simultaneous incoming requests, the additional requests will wait until a worker becomes available, which could make the site really slow for users. Right now I only have about 60 megs of free RAM, though, so to increase capacity I’d need to either get a larger VM, move my static resources to S3, or set up a reverse proxy like Varnish and serve static resources that way.
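One more knob worth a look: MaxRequestsPerChild of 0 means workers are never recycled, so any memory a PHP request leaks stays resident for the life of the process. A variant of my configuration above that retires each worker after a few thousand requests might look like this (the 1000 is an arbitrary illustrative value, not a recommendation):

```
StartServers          8
MinSpareServers       1
MaxSpareServers       8
MaxClients            8
MaxRequestsPerChild   1000
```

The cost is an occasional process respawn; the benefit is that slow leaks get cleaned up automatically instead of accumulating.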


  1. This may or may not have an impact on your MaxClients settings, but just out of curiosity, are you running a caching plugin with WordPress? (I’m using QuickCache these days.) I think it may make a big difference when you get a spike of traffic since Apache just has to load static HTML instead of processing a bunch of MySQL and PHP.

  2. Yep, I’m a happy user of WP Super Cache.

  3. Thanks for posting this. A handy reminder to turn down the numbers on my local machine, since I’m the only person who ever connects to it.

  4. I’d recommend the Varnish route: static media will be essentially free, key pages stay up even if the backend fails and – most importantly – it’ll ensure that you never need to generate more than one copy of any URI simultaneously, so you can trim MaxClients down.

  5. It’s definitely worth looking at nginx with php-fpm for low-memory servers. It’s an easy switch to make, and I’m pretty sure you won’t regret it – there are lots of examples out there for things like WordPress and other PHP apps on nginx.

  6. I would describe myself as nginx-curious. Understanding Apache better helps me professionally, but I do wonder if I should set up nginx for the same reason that it’s good to learn new programming languages.

  7. I can’t say enough good things about Varnish – it has allowed a site my company maintains (a 4-node server) to sustain 8k+ simultaneous connections and 750k+ uniques in a day.

  8. Varnish indeed rules, I’ve done some rush tests with Blitz.io and it’s looking really good. However I’m a big fan of the Nginx+PHP5-FPM setup.



© 2019 rc3.org
