Subprocess with a Python Session?

Paul Boddie paul at boddie.org.uk
Fri Dec 8 06:33:16 EST 2006


Hendrik van Rooyen wrote:
> "Giovanni Bajo" <noway at ask.me>
> >
> > Yeah, but WHY was the API designed like this? Why can't I read and write
> > freely from a pipe connected to a process as many times as I want?
>
> you can - all you have to do is to somehow separate the "records" - else how is
> the receiving side to know that there is not more data to follow?

This is one of the more reliable methods: upon receiving the record
"delimiter", the receiver knows that the data is complete, and it
shouldn't attempt to process anything which isn't yet complete.

> The simplest way is to use newline as separator, and to use readline() on the
> receiving side.

Agreed. Using the readline method on file objects created from sockets
is a tried and trusted approach.
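
For what it's worth, here is a minimal sketch of that pattern with the
subprocess module. The child program is just a made-up echo loop,
inlined with -c for the sake of the example; each newline-terminated
record gets one newline-terminated reply:

import subprocess
import sys

# Hypothetical child: reads newline-terminated records from stdin and
# writes one newline-terminated reply per record to stdout.
CHILD = """
import sys
while True:
    line = sys.stdin.readline()
    if not line:                       # empty string means EOF
        break
    sys.stdout.write(line.strip().upper() + '\\n')
    sys.stdout.flush()                 # push each reply out at once
"""

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,           # text-mode pipes
)

for record in ["first record", "second record"]:
    proc.stdin.write(record + "\n")    # the newline delimits the record
    proc.stdin.flush()                 # make sure it leaves our buffer
    sys.stdout.write("got: " + proc.stdout.readline())

proc.stdin.close()
proc.wait()

Because both sides agree that a newline ends a record, the parent can
keep writing and reading as many times as it likes before closing the
pipe.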

> Or you can use read(1) and roll your own...

Indeed. There are a few tricks in this department: use select or poll
to test the status of file descriptors (which is what the standard
library asyncore module, Medusa and Twisted do); attempt to examine the
buffer status of sockets using the MSG_PEEK flag (something which
didn't prove appropriate for some work I've done, since I was using
pipes, although more investigation may be required); or set timeouts on
the underlying sockets (reliable only in certain kinds of
communication, I would assert).
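
As an illustration of the first of those, here is a rough sketch using
select against a subprocess pipe. The child command and the timeout
value are arbitrary choices for the example, and note that select on
pipe file descriptors is a Unix thing (on Windows, select only accepts
sockets):

import select
import subprocess
import sys

# Arbitrary child for illustration: waits a little, then prints a line.
proc = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import time; time.sleep(1); print('hello from the child')"],
    stdout=subprocess.PIPE,
)

# Ask select whether the pipe has data instead of blocking on a read.
while True:
    readable, _, _ = select.select([proc.stdout], [], [], 0.1)
    if readable:
        sys.stdout.write("ready: " + proc.stdout.readline().decode())
        break
    # Nothing yet; we could do other useful work here instead.

proc.wait()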

> To make sure the stuff is written from memory on the transmitting side, use
> flush(), if you want to do many records, or close() as Fredrik said if only one
> thing is to be transferred.

Also good advice. However, as was mentioned elsewhere, if you're
invoking Python as a subprocess, it can help to run it in unbuffered
mode (the -u option); otherwise flush() and close() don't always seem
to do their magic.
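
A sketch of what that looks like (the inlined child script is made up;
with -u its one line of output arrives straight away, whereas a
buffered child that never flushes would typically only deliver it when
it exits):

import subprocess
import sys

# Run the child Python unbuffered (-u) so its stdout reaches the pipe
# immediately, even though the child itself never calls flush().
proc = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys, time; sys.stdout.write('ready\\n'); time.sleep(5)"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)

# With -u this returns at once; without it, the line would tend to sit
# in the child's buffer until the child exited five seconds later.
sys.stdout.write(proc.stdout.readline())
proc.wait()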

Paul



