Reading Live Output from a Subprocess

Nobody nobody at nowhere.com
Sat Apr 7 02:08:57 EDT 2012


On Fri, 06 Apr 2012 12:21:51 -0700, Dubslow wrote:

> It's just a short test script written in python, so I have no idea how
> to even control the buffering

In Python, you can set the buffering when opening a file via the third
argument to the open() function, but you can't change a stream's buffering
once it has been created. Although Python's file objects are built on the
C stdio streams, they don't provide an equivalent to setvbuf().
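
For example (file name invented for illustration), passing 1 as the
third argument requests line buffering, so each completed line is
flushed as soon as the newline is written:

	log = open('run.log', 'w', 1)  # 1 = line-buffered
	log.write('started\n')         # flushed at the newline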

On Linux, you could use e.g.:

	sys.stdout = open('/dev/stdout', 'w', 1)

Other than that, if you want behaviour equivalent to line buffering, call
sys.stdout.flush() after each print statement.
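
For example, in the child script (the loop is just an illustration):

	import sys, time

	for i in range(5):
	    print 'tick', i
	    sys.stdout.flush()  # push the line out even if stdout is a pipe
	    time.sleep(1)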

> (and even if I did, I still can't modify the subprocess I need to use in
> my script). 

In which case, discussion of how to make Python scripts use line-buffered
output is beside the point.

> What confuses me then is why Perl is able to get around this just fine
> without faking a terminal or similar stuff.

It isn't. If a program sends its output to the OS in blocks, anything
which reads that output gets it in blocks. The language doesn't matter;
writing the parent program in assembler still wouldn't help.
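
To make that concrete, here is a rough sketch of a parent reading a
child a line at a time (./child.py stands in for the real subprocess).
The readline() calls still only return data once the child's own
buffer is actually flushed to the pipe:

	import subprocess

	p = subprocess.Popen(['./child.py'], stdout=subprocess.PIPE)
	for line in iter(p.stdout.readline, ''):
	    print 'got:', line.rstrip()
	p.wait()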

> I take it then that setting Shell=True will not be fake enough for
> catching output live? 

No. It just causes the command to be run via /bin/sh (or cmd.exe on
Windows); it doesn't affect how the process's standard descriptors are
set up.
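
What can work on Unix is giving the child a pseudo-terminal for its
stdout, so that the C library sees a tty and falls back to line
buffering. A rough sketch using the pty module (./child.py again
stands in for the real subprocess):

	import os, pty, subprocess

	master, slave = pty.openpty()
	p = subprocess.Popen(['./child.py'], stdout=slave, close_fds=True)
	os.close(slave)                # parent keeps only the master side
	while True:
	    try:
	        data = os.read(master, 1024)
	    except OSError:            # EIO: child closed its end of the pty
	        break
	    if not data:
	        break
	    print data,                # data already contains newlines
	p.wait()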

On Unix, the only real use for shell=True is when you have a "canned"
shell command, e.g. one read from a file, which you need to execute
as-is. In that situation, args should be a string rather than a list.
And you should never construct such a string dynamically in order to
pass arguments; that's a shell-injection attack waiting to happen.
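
For example (file name and argument invented for illustration):

	import subprocess

	# a canned shell command taken verbatim from a file: pass the string
	with open('job.sh') as f:
	    cmd = f.read().strip()
	subprocess.call(cmd, shell=True)

	# run-time arguments belong in a list with no shell at all, so a
	# hostile value is only ever argv data, never shell syntax
	filename = '; rm -rf ~'        # harmless: grep just fails to open it
	subprocess.call(['grep', 'error', filename])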



