
Python multiprocessing doesn't use all cores on RHEL6
I have been trying to use the Python `multiprocessing` package to speed up some physics simulations I'm doing by taking advantage of the multiple cores of my computer.

I noticed that when I run my simulation, at most 3 of the 12 cores are used. In fact, when I start the simulation it initially uses 3 of the cores, and then after a while it drops to 1 core. Sometimes only one or two cores are used from the start. I have not been able to figure out why; I basically change nothing, except closing a few terminal windows (without any active processes). (The OS is Red Hat Enterprise Linux 6.0; the Python version is 2.6.5.)

I experimented by varying the number of chunks (between 2 and 120) into which the work is split (i.e. the number of processes that are created), but this seems to have no effect.

I looked for information about this problem online and read through most of the related questions on this site (e.g. [one](https://stackoverflow.com/questions/1182315/python-multicore-processing), [two](https://stackoverflow.com/questions/5784389/using-100-of-all-cores-with-python-multiprocessing)) but could not find a solution.

(Edit: I just tried running the code under Windows 7 and it uses all available cores fine. I still want to fix this for RHEL, though.)

Here's my code (with the physics left out):

```python
from multiprocessing import Queue, Process, current_process

def f(q, start, end):
    # a dummy function to be passed as target to Process
    q.put(mc_sim(start, end))

def mc_sim(start, end):
    # this is where the 'physics' is
    p = current_process()
    print "starting", p.name, p.pid
    sum_ = 0
    for i in xrange(start, end):
        sum_ += i
    print "exiting", p.name, p.pid
    return sum_

def main():
    NP = 0                      # number of processes
    total_steps = 10**8
    chunk = total_steps / 10
    start = 0
    queue = Queue()
    subprocesses = []
    while start < total_steps:
        p = Process(target=f, args=(queue, start, start + chunk))
        NP += 1
        print 'delegated %s:%s to subprocess %s' % (start, start + chunk, NP)
        p.start()
        start += chunk
        subprocesses.append(p)
    total = 0
    for i in xrange(NP):
        total += queue.get()
    print "total is", total
    # two lines for a consistency check:
    # alt_total = mc_sim(0, total_steps)
    # print "alternative total is", alt_total
    while subprocesses:
        subprocesses.pop().join()

if __name__ == '__main__':
    main()
```

(In fact, the code is based on [Alex Martelli's](https://stackoverflow.com/users/95810/alex-martelli) answer [here](https://stackoverflow.com/questions/1182315/python-multicore-processing).)

Edit 2: Eventually the problem resolved itself without me understanding how. I did not change the code, nor am I aware of having changed anything related to the OS. In spite of that, all cores are now used when I run the code. Perhaps the problem will reappear later on, but for now I choose not to investigate further, since it works. Thanks to everyone for the help.
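One thing that might be worth checking next time the symptom appears (this is a guess consistent with the behavior described above, not an established cause): on Linux, child processes inherit the parent's CPU affinity mask, so if something narrowed the parent's mask to one or a few cores, every `Process` worker would be confined to those cores. A minimal diagnostic sketch, assuming the util-linux `taskset` tool is available (it ships with RHEL 6), since Python 2.6 has no `os.sched_getaffinity`:

```python
# A minimal diagnostic sketch (not part of the original post): print the set
# of CPUs the current process is allowed to run on, by shelling out to the
# util-linux `taskset` tool.
import os
import subprocess

def print_affinity(pid):
    # 'taskset -cp PID' prints a line such as:
    #   pid 1234's current affinity list: 0-11
    # A narrow list (e.g. just "0") would explain all workers sharing one core.
    subprocess.call(['taskset', '-cp', str(pid)])

if __name__ == '__main__':
    print_affinity(os.getpid())
```

If the list turned out to be restricted, launching the script with something like `taskset -c 0-11 python script.py` should widen the mask again; again, this is only one hypothetical explanation for the symptoms, not a confirmed diagnosis.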