
Bypassing buffering of subprocess output with popen in C or Python
I have a general question about popen (and all related functions), applicable to all operating systems. When I write a Python script or some C code and run the resulting executable from the console (Windows or Linux), I can immediately see the output from the process. However, if I run the same executable as a forked process with its stdout redirected into a pipe, the output buffers somewhere, usually up to 4096 bytes, before it is written to the pipe where the parent process can read it.

The following Python script will generate output in chunks of 1024 bytes:

```python
import os, sys, time

if __name__ == "__main__":
    dye = '@'*1024
    for i in range(0, 8):
        print dye
        time.sleep(1)
```

The following Python script will execute the previous script and read the output as soon as it comes to the pipe, byte by byte:

```python
import os, sys, subprocess, time, thread

if __name__ == "__main__":
    execArgs = ["c:\\python25\\python.exe", "C:\\Scripts\\PythonScratch\\byte_stream.py"]
    p = subprocess.Popen(execArgs, bufsize=0, stdout=subprocess.PIPE)
    while p.returncode == None:
        data = p.stdout.read(1)
        sys.stdout.write(data)
        p.poll()
```

Adjust the path for your operating system. When run in this configuration, the output does not appear in chunks of 1024 bytes but in chunks of 4096 bytes, despite the buffer size of the Popen call being set to 0 (which is the default anyway). Can anyone tell me how to change this behaviour? Is there any way I can force the operating system to treat the output from the forked process the same way as when it is run from the console, i.e. just feed the data through without buffering?
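For context, here is a minimal sketch (in Python 3, though the same idea applies to the Python 2 scripts above) of one common workaround: launch the Python child with the `-u` flag so its standard streams are unbuffered. Note that `bufsize=0` in `Popen` only controls the parent's end of the pipe, not the child's own stdio buffering, which is why the 4096-byte blocks still appear. The `byte_stream.py` path below is a placeholder. For a C child, calling `setvbuf(stdout, NULL, _IONBF, 0)` at the start of `main`, or flushing explicitly after each write, has a similar effect.

```python
import subprocess
import sys

# Run the child interpreter with -u so its stdout is not block-buffered
# when connected to a pipe.  "byte_stream.py" is a placeholder path.
args = [sys.executable, "-u", "byte_stream.py"]
p = subprocess.Popen(args, bufsize=0, stdout=subprocess.PIPE)

while True:
    data = p.stdout.read(1)        # returns b'' once the child closes its end
    if not data:
        break
    sys.stdout.buffer.write(data)  # echo each byte as soon as it arrives
    sys.stdout.flush()

p.wait()
```

Alternatively, the child can call `sys.stdout.flush()` after each print, which pushes its output past the stdio buffer regardless of how it was launched.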