Celery: limit memory usage (large number of Django installations)
- We have a setup with a large number of separate Django installations on a single box; each has its own code base and Linux user.
- We use Celery for some asynchronous tasks.
- Each installation has its own Celery setup, i.e. its own celeryd and worker.
- The number of asynchronous tasks per installation is limited, and they are not time-critical.
- When a worker starts it takes about 30 MB of memory. After it has run for a while this amount may grow (presumably due to fragmentation).

The last bullet point has already been (somewhat) solved by setting `--maxtasksperchild` to a low number (say 10). This forces a restart after 10 tasks, after which the memory at least goes back to 30 MB.

However, each celeryd still takes up a lot of memory, since the minimum number of worker processes appears to be 1 rather than 0. I also imagine that running `python manage.py celery worker` does not give the smallest possible footprint for the celeryd, since the full stack is loaded even if the only thing it does is check for tasks.

In an ideal setup I'd like the following: a process with a very small memory footprint (100 kB or so) watches the queue for new tasks; when a task arrives, it spins up the (heavy) full Django stack in a separate process, and when the worker is done the heavy process is spun down again.

Is such a setup configurable using (somewhat) standard Celery? If not, what points of extension are there?

We're currently using Celery 3.0.17 and the associated django-celery.
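For what it's worth, the on-demand pattern described above can be sketched outside of standard Celery: a small watcher process built on kombu (the messaging library Celery already uses) polls the queue depth and only launches the full `manage.py celery worker` while there is work, shutting it down again once the queue drains. Everything below is an illustrative assumption rather than a ready-made solution: the broker URL, the queue name, the polling intervals and the shutdown logic would all need adapting to the actual setup, and it assumes an AMQP broker such as RabbitMQ.

```python
# Sketch of an on-demand worker launcher. Assumes an AMQP broker
# (e.g. RabbitMQ) and the default "celery" queue name; all URLs,
# names and intervals below are illustrative assumptions.
import subprocess
import time

from kombu import Connection

BROKER_URL = "amqp://guest:guest@localhost//"  # assumed broker URL
QUEUE_NAME = "celery"                          # default Celery queue name
POLL_SECONDS = 30                              # how often to check for work


def pending_tasks(conn):
    """Return the number of messages waiting, without consuming any."""
    # A passive queue_declare only inspects the existing queue; it does
    # not create it and does not pull any messages off it.
    _, message_count, _ = conn.default_channel.queue_declare(
        queue=QUEUE_NAME, passive=True)
    return message_count


def run_worker_until_idle(conn):
    """Start the heavy Django/Celery worker, stop it once the queue is empty."""
    proc = subprocess.Popen([
        "python", "manage.py", "celery", "worker",
        "--concurrency=1",
        "--maxtasksperchild=10",   # the restart-after-N-tasks mitigation
    ])
    # Wait for the queue to drain. A task currently being executed no
    # longer shows up in the message count, so add a small grace period
    # before terminating the worker.
    while pending_tasks(conn):
        time.sleep(5)
    time.sleep(5)
    proc.terminate()
    proc.wait()


def main():
    # The watcher itself only needs kombu, not the full Django stack,
    # so its memory footprint stays small.
    with Connection(BROKER_URL) as conn:
        while True:
            if pending_tasks(conn):
                run_worker_until_idle(conn)
            time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    main()
```

Because the watcher only inspects the queue depth via a passive `queue_declare` and never consumes messages itself, every task is still picked up by the real worker. `terminate()` sends SIGTERM, which Celery treats as a warm shutdown, so a task in progress is allowed to finish before the process exits.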
 
