<p>Here's my suggestion:</p>
<pre><code>import multiprocessing
import threading
import time


def good_worker():
    print("[GoodWorker] Starting")
    time.sleep(4)
    print("[GoodWorker] all good")


def bad_worker():
    print("[BadWorker] Starting")
    time.sleep(2)
    raise Exception("ups!")


class MyProcManager(object):
    def __init__(self):
        self.procs = []
        self.errors_flag = False
        self._threads = []
        self._lock = threading.Lock()

    def terminate_all(self):
        with self._lock:
            for p in self.procs:
                if p.is_alive():
                    print("Terminating %s" % p)
                    p.terminate()

    def launch_proc(self, func, args=(), kwargs=None):
        # avoid a mutable default argument for kwargs
        t = threading.Thread(target=self._proc_thread_runner,
                             args=(func, args, kwargs or {}))
        self._threads.append(t)
        t.start()

    def _proc_thread_runner(self, func, args, kwargs):
        p = multiprocessing.Process(target=func, args=args, kwargs=kwargs)
        with self._lock:
            # guard the append so terminate_all() never iterates a
            # list that is being modified by another wrapper thread
            self.procs.append(p)
        p.start()
        while p.exitcode is None:
            p.join()
        if p.exitcode &gt; 0:
            self.errors_flag = True
            self.terminate_all()

    def wait(self):
        for t in self._threads:
            t.join()


if __name__ == '__main__':
    proc_manager = MyProcManager()
    proc_manager.launch_proc(good_worker)
    proc_manager.launch_proc(good_worker)
    proc_manager.launch_proc(bad_worker)
    proc_manager.wait()
    if proc_manager.errors_flag:
        print("Errors flag is set: some process crashed")
    else:
        print("Everything closed cleanly")
</code></pre>
<p>You need a wrapper thread for each process you run, which waits for that process to finish. When a process ends, check its exitcode: if it is greater than 0, the process raised an unhandled exception. In that case, call terminate_all() to close all remaining active processes. The wrapper threads finish on their own, since each one just waits on its process.</p>
<p>Also, in your code you're completely free to call proc_manager.terminate_all() whenever you want; you could, for example, check some flag in a different thread.</p>
<p>Hope it's good for your case.</p>
<p>PS: by the way, your original code used something like a global exit_flag. You can never have a "global" exit_flag in multiprocessing, because it simply isn't global: you are using separate processes with separate memory spaces. That only works in threaded environments, where state can be shared. If you need it with multiprocessing, you must have explicit communication between processes (<a href="http://docs.python.org/2/library/multiprocessing.html#pipes-and-queues" rel="nofollow">Pipe and Queue accomplish that</a>) or something like <a href="http://docs.python.org/2/library/multiprocessing.html#shared-ctypes-objects" rel="nofollow">shared memory objects</a>.</p>
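To make the postscript concrete, here is a minimal sketch of a cross-process exit flag using <code>multiprocessing.Event</code>, which is backed by a shared semaphore and therefore visible in the child process, unlike a plain module-level boolean (the names <code>poll_worker</code> and <code>exit_flag</code> are illustrative, not from the original code):

```python
import multiprocessing
import time


def poll_worker(exit_flag):
    # multiprocessing.Event lives in shared state, so a set() in the
    # parent process is visible here, in a separate memory space.
    while not exit_flag.is_set():
        time.sleep(0.05)


if __name__ == '__main__':
    exit_flag = multiprocessing.Event()
    p = multiprocessing.Process(target=poll_worker, args=(exit_flag,))
    p.start()
    exit_flag.set()       # signal the child to stop
    p.join(timeout=5)
    print(p.exitcode)     # 0: the worker returned cleanly
```

A module-level `exit_flag = False` reassigned in the parent would never be seen by the child, since each process gets its own copy of the module's globals.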
 
