There are many other factors, but my best guess is that you're quickly spawning 30-40 processes using 30M or so each, exhausting your machine's limited memory, then continuing to spawn new ones and thrashing to swap, slowing everything down.

With 2G of RAM, MaxClients at 150, and MaxRequestsPerChild at 0, the server's resources are probably getting swamped even if your DB isn't on the same physical server.

Basically, for web server performance you don't ever want to swap. Run your tests and then immediately check memory on the web server with:

```
free -m
```

This will give you memory and swap usage in MB. Ideally you should see swap at 0 or close to it. If swap usage isn't zilch or very low, the issue is simply memory running out: your server is thrashing and wasting CPU, resulting in slow response times. (There's a small monitoring sketch at the end of this answer for watching this live during a test run.)

You need to get some numbers to be certain, but first do a `top` and press Shift-M while top is running to sort by memory. The next time you run your tests, find a ballpark number for how much %MEM is being reported for each httpd process. It will vary, so it's best to use the higher values as your guide for a worst-case bound. I've got a WordPress, a Drupal, and a custom site on the same server whose httpd processes routinely allocate 20M each from the start and, if left unchecked, eventually grow past 100M each. (A `ps` one-liner for measuring this is also sketched below.)

Pulling some numbers out of my butt for example: if I had 2G, and Linux, core services, and MySQL were using 800M, I'd want to assume the memory available for Apache fun stays under 1G. With that, if my Apache processes were using an average of 20M on the high side, I could only have 50 MaxClients. That's a very non-conservative number; in real life I'd drop Max down to 40 or so to be safe. Don't try to pinch memory... if you're serving up enough traffic to have 40 simultaneous connections, pony up the $100 to go to 4G before inching up the max servers. It's one of those settings where, once you cross the line, everything goes down the toilet, so stay safely under your memory limits!

Also, with PHP I like to keep MaxRequestsPerChild at 100 or so... you're not CPU-bound serving web pages, so don't worry about saving a few milliseconds spawning new child processes. Setting it to 0 means unlimited requests, and children are only ever killed off when idle servers exceed MaxSpareServers. This is generally **A VERY BAD THING** with PHP under Apache workers, as they just keep growing until badness occurs (like having to hard-restart your server because you can't log in, since Apache used up all the memory and ssh times out).

Good Luck!
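A few sketches to make the checks above concrete. First, watching memory and swap live during a load test; both commands below are standard Linux tools, and the intervals are just reasonable defaults:

```sh
# Watch memory and swap (in MB) refresh every 2 seconds; the swap
# "used" column should stay at or near 0 for the whole test.
watch -n 2 free -m

# Alternatively, vmstat's si/so columns show pages swapped in/out per
# second; sustained non-zero values mean the box is thrashing.
vmstat 5
```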
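For the per-process numbers, a `ps` one-liner can complement eyeballing `top`. This sketch assumes the workers are named `httpd` (on Debian/Ubuntu they're `apache2`); note that RSS counts shared pages, so it overstates the true per-worker cost a bit, which errs on the safe side here:

```sh
# Report worker count plus average and peak resident memory per httpd
# process (ps reports RSS in KB, so divide by 1024 for MB).
ps -C httpd -o rss= | awk '
  { sum += $1; n++; if ($1 > max) max = $1 }
  END { if (n) printf "workers: %d  avg: %.1f MB  max: %.1f MB\n",
               n, sum / n / 1024, max / 1024 }'
```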
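The MaxClients arithmetic from the example above, spelled out with the same hypothetical numbers (measure all three inputs on your own box before trusting the result):

```sh
total_mb=2048       # 2G of RAM
reserved_mb=1024    # Linux, core services, MySQL, plus headroom
per_worker_mb=20    # high-side average from top or the ps one-liner
echo $(( (total_mb - reserved_mb) / per_worker_mb ))   # -> 51
# ...then shave a safety margin off that and configure 40 or so.
```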
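And the resulting directives might look like this in httpd.conf. This uses the Apache 2.2-era names the question implies; Apache 2.4 renamed MaxClients to MaxRequestWorkers and MaxRequestsPerChild to MaxConnectionsPerChild:

```apacheconf
<IfModule prefork.c>
    # Conservative cap derived from the memory math above.
    MaxClients          40
    # Recycle PHP children regularly so leaks can't accumulate.
    MaxRequestsPerChild 100
</IfModule>
```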
 
