As you have noticed, your process blocks while it runs a callback. There are several ways to deal with this, depending on what your callback does.

If your callback is IO-bound (doing lots of networking or disk IO), you can use either threads or a greenlet-based solution such as [gevent](http://www.gevent.org/), [eventlet](http://eventlet.net/), or [greenhouse](http://teepark.github.io/greenhouse/master/). Keep in mind, though, that Python is limited by the GIL (Global Interpreter Lock), which means that only one piece of Python code is ever running at a time in a single Python process. So if you are doing lots of computation in Python code, these solutions will likely not be much faster than what you already have.

Another option is to implement your consumer as multiple processes using [multiprocessing](http://docs.python.org/2/library/multiprocessing.html). I have found multiprocessing very useful for parallel work. You could implement this either with a [Queue](http://docs.python.org/2/library/multiprocessing.html#multiprocessing.Queue), with the parent process acting as the consumer and farming out work to its children, or simply by starting multiple processes that each consume on their own. Unless your application is highly concurrent (thousands of workers), I would suggest simply starting multiple workers, each of which consumes from its own connection.
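Here is a minimal sketch of the Queue-based variant, using only the standard library. The hard-coded task list and the squaring function are stand-ins: in a real consumer, the parent would be feeding the queue from messages received on the AMQP channel, and the worker would run your actual callback.

```python
import multiprocessing


def worker(task_queue, result_queue):
    # Each child pulls tasks until it sees the None sentinel.
    while True:
        task = task_queue.get()
        if task is None:
            break
        # Stand-in for your real (CPU-heavy) callback work.
        result_queue.put(task * task)


def main():
    task_queue = multiprocessing.Queue()
    result_queue = multiprocessing.Queue()

    # Parent acts as the consumer and farms work out to its children.
    # In real code, tasks would come from the AMQP channel instead of range().
    workers = [
        multiprocessing.Process(target=worker, args=(task_queue, result_queue))
        for _ in range(4)
    ]
    for w in workers:
        w.start()

    for task in range(10):
        task_queue.put(task)
    for _ in workers:
        task_queue.put(None)  # one shutdown sentinel per worker

    results = sorted(result_queue.get() for _ in range(10))
    for w in workers:
        w.join()
    return results


if __name__ == "__main__":
    print(main())  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the children are separate processes rather than threads, each has its own interpreter and the GIL no longer limits how much computation you can run in parallel.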
With one connection per worker, you can also use the acknowledgement feature of AMQP: if a consumer dies while still processing a task, the message is automatically returned to the queue and picked up by another worker, rather than being lost.

A last option, if you control the producer and it is also written in Python, is to use a task library such as [celery](http://www.celeryproject.org/) to abstract the task/queue machinery for you. I have used celery for several large projects and found it to be very well written. With the appropriate configuration, it will also handle the multiple-consumer concerns for you.
 
