The [`map()`](http://docs.python.org/library/multiprocessing.html#multiprocessing.pool.multiprocessing.Pool.map) function hands the items to the worker function in chunks. By default the chunk size is calculated like this ([link to source](http://svn.python.org/projects/python/branches/release26-maint/Lib/multiprocessing/pool.py)):

```python
chunksize, extra = divmod(len(iterable), len(self._pool) * 4)
```

In your case this probably produces a chunk size that is too large and lets the process run out of memory. Try setting the chunk size manually:

```python
my_pool.map(foo, db1.index.find(), 100)
```

**EDIT:** You should also reuse the db connections and close them after use. At the moment you create a new db connection for each item and never call `close()` on them.

**EDIT2:** Also check whether the `while` loop runs into an infinite loop (that would explain the symptoms).

**EDIT3:** Based on the traceback you added, the map function tries to convert the cursor to a list, causing all the items to be fetched at once. This happens because it wants to know how many items there are in the set. This is part of the `map()` code in [pool.py](http://svn.python.org/view/python/trunk/Lib/multiprocessing/pool.py?view=markup):

```python
if not hasattr(iterable, '__len__'):
    iterable = list(iterable)
```

You could try this to avoid the conversion to a list. Note that assigning to `cursor.__len__` only satisfies the `hasattr` check; `len()` itself looks the method up on the type, so keep passing the chunk size explicitly:

```python
cursor = db1.index.find()
cursor.__len__ = cursor.count()  # satisfies the hasattr check only
my_pool.map(foo, cursor, 100)
```
 
