There are several aspects to consider in your question.

First, the interaction between Apache MPMs and mod_wsgi applications. If you run the mod_wsgi application in embedded mode (no `WSGIDaemonProcess` needed, `WSGIProcessGroup %{GLOBAL}`), you inherit multiprocessing/multithreading from the Apache MPM. This should be the fastest option, and you end up with multiple processes and multiple threads per process, depending on your MPM configuration. Conversely, if you run mod_wsgi in daemon mode, with `WSGIDaemonProcess <name> [options]` and `WSGIProcessGroup <name>`, you have fine-grained control over multiprocessing/multithreading at the cost of a small [overhead](http://code.google.com/p/modwsgi/wiki/PerformanceEstimates).

Within a single apache2 server you may define zero, one, or more named `WSGIDaemonProcess`es, and each application can be run in one of these processes (`WSGIProcessGroup <name>`) or run in embedded mode with `WSGIProcessGroup %{GLOBAL}`.

You can check multiprocessing/multithreading by inspecting the `wsgi.multithread` and `wsgi.multiprocess` variables (a small test app for this is sketched at the end of this answer).

With your configuration `WSGIDaemonProcess example processes=5 threads=1` you have 5 independent processes, each with a single thread of execution: no global data, no shared memory, since you are not in control of spawning subprocesses; mod_wsgi is doing it for you. To share global state you already listed some possible options: a DB to which your processes interface, some sort of file-system-based persistence, a daemon process (started outside Apache) and socket-based IPC.

As pointed out by Roland Smith, the latter could be implemented using the high-level API of [`multiprocessing.managers`](http://docs.python.org/library/multiprocessing.html#managers): outside Apache you create and start a `BaseManager` server process

```python
import multiprocessing.managers

m = multiprocessing.managers.BaseManager(address=('', 12345), authkey=b'secret')
m.get_server().serve_forever()
```

and inside your apps you `connect`:

```python
import multiprocessing.managers

m = multiprocessing.managers.BaseManager(address=('', 12345), authkey=b'secret')
m.connect()
```

The example above is a dummy one, since `m` has no useful method registered, but [here](http://docs.python.org/library/multiprocessing.html#using-a-remote-manager) (python docs) you will find how to create and *proxy* an object (like the `counter` in your example) among your processes; a sketch of that pattern is also given at the end of this answer.

A final comment on your example, with `processes=5 threads=1`. I understand that this is just an example, but in real-world applications I suspect that performance will be comparable to `processes=1 threads=5`: you should go into the intricacies of sharing data in multiprocessing only if the expected performance boost over the 'single process, many threads' model is significant.
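
To make the `wsgi.multithread`/`wsgi.multiprocess` check above concrete, here is a minimal WSGI test app you could mount temporarily next to your real application; the script name and mount point are up to you, this is only a sketch:

```python
# Minimal WSGI app reporting how mod_wsgi is running it.
# Request it a few times: in daemon mode with processes=5 threads=1 you
# should see multiprocess=True, multithread=False and several different pids.
import os

def application(environ, start_response):
    body = ("wsgi.multiprocess: %s\n"
            "wsgi.multithread:  %s\n"
            "pid:               %d\n" % (environ['wsgi.multiprocess'],
                                         environ['wsgi.multithread'],
                                         os.getpid())).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```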
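
The "DB to which your processes interface" option can be as small as an SQLite file, as long as every process can reach it. Below is a hedged sketch, not taken from your question; the path and table layout are invented for illustration:

```python
# Sketch of the "database as shared state" option using SQLite.
# DB_PATH and the schema are illustrative; any database reachable by all
# mod_wsgi processes (PostgreSQL, Redis, ...) follows the same idea.
import sqlite3

DB_PATH = '/var/tmp/shared_state.db'   # must be writable by the Apache user

def _connection():
    conn = sqlite3.connect(DB_PATH, timeout=10, isolation_level=None)  # autocommit
    conn.execute("CREATE TABLE IF NOT EXISTS counter "
                 "(id INTEGER PRIMARY KEY CHECK (id = 0), value INTEGER NOT NULL)")
    conn.execute("INSERT OR IGNORE INTO counter (id, value) VALUES (0, 0)")
    return conn

def increment():
    conn = _connection()
    try:
        conn.execute("BEGIN IMMEDIATE")  # write lock serializes concurrent processes
        conn.execute("UPDATE counter SET value = value + 1 WHERE id = 0")
        (value,) = conn.execute("SELECT value FROM counter WHERE id = 0").fetchone()
        conn.execute("COMMIT")
        return value
    finally:
        conn.close()
```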
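
Finally, a sketch of the proxied-object pattern from the multiprocessing docs linked above, applied to the dummy `BaseManager` snippets: `Counter`, `CounterManager`, the port and the authkey are assumed names/values, not anything prescribed by mod_wsgi. Run this part once, outside Apache, as a long-lived process:

```python
# Standalone manager process (started outside Apache, e.g. from a shell or
# an init script). It owns the one real Counter instance; the WSGI processes
# only ever see proxies to it.
import threading
from multiprocessing.managers import BaseManager

class Counter(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1
            return self._value

    def value(self):
        with self._lock:
            return self._value

class CounterManager(BaseManager):
    pass

if __name__ == '__main__':
    counter = Counter()
    # Every proxy created through 'get_counter' refers to this same instance.
    CounterManager.register('get_counter', callable=lambda: counter)
    m = CounterManager(address=('', 12345), authkey=b'secret')
    m.get_server().serve_forever()
```

Inside each of your WSGI processes you connect and work through the proxy; the method calls are executed in the manager process, so all five mod_wsgi processes see the same value:

```python
# Client side, inside the WSGI application processes.
from multiprocessing.managers import BaseManager

class CounterManager(BaseManager):
    pass

CounterManager.register('get_counter')

def increment_shared_counter():
    m = CounterManager(address=('', 12345), authkey=b'secret')
    m.connect()
    counter = m.get_counter()      # proxy object
    return counter.increment()     # runs in the manager process
```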
 
