printing to stdout

Cameron Simpson cs at cskk.id.au
Thu Aug 16 18:31:22 EDT 2018


On 16Aug2018 22:37, richard lucassen <mailinglists at lucassen.org> wrote:
>I can run a shell script from the commandline as root in which I start
>a python script as user "ha". The output to stdout and stderr
>generated by the python script is visible in an xterm:
>
>#!/bin/dash
>exec 2>&1
>chpst -u ha:ha:i2c -U ha /usr/local/ha/init.sh
>exec chpst -u ha:ha:i2c:gpio /usr/local/ha/wait4int.py
>
>So far so good. But when I run the script supervised by runit, I can
>see the output generated by the shell script "init.sh", but the output
>of the python script is not transferred to the supervised logging. The
>python script itself works, it reads out some I/O expanders on a
>Raspberry Pi. But the output of the "print" commands seems to disappear:
[...]

This isn't specific to Python; you'll find it with most programmes. (The 
shell's builtin "echo" command is an exception.)

Some discussion: most output streams are buffered. This means that when you 
issue some kind of "write" operation to them, the output data are copied to a 
buffer area in memory, and only actually written out to the operating system 
(which mediates the data between programmes) at certain times. This reduces the 
number of OS-level data transfers, which is generally a performance win overall.

When are the data sent on to the OS? That depends on the buffering 
arrangement. When the buffer fills, the data are always sent on, because 
otherwise there's no room for more data. But otherwise, the data are sent on 
at specific times.

An unbuffered stream sends the data on immediately. The standard error stream 
is usually unbuffered, so that error messages get out immediately.

A fully buffered stream is sent on only when the buffer fills.

A line buffered stream is sent on when a newline lands in the buffer.

You can of course devise whatever system you like, but these three are the 
common presupplied automatic ones.
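The unbuffered and line buffered cases can be seen directly in Python with a 
pipe, since a nonblocking read tells you exactly when data have reached the 
OS (a minimal sketch; the stream names are just illustrative):

```python
import os

# A pipe lets us observe exactly when written data reach the OS.
r, w = os.pipe()
os.set_blocking(r, False)  # reads fail instead of blocking when empty

# buffering=1 requests line buffering (text mode only).
stream = os.fdopen(w, "w", buffering=1)

stream.write("no newline yet")  # sits in the buffer...
try:
    os.read(r, 1024)
except BlockingIOError:
    print("nothing at the OS yet")

stream.write(", now flushed\n")  # ...until a newline lands in the buffer
print(os.read(r, 1024))
```

Opening the write end with buffering=0 instead (binary mode only) would make 
every write() go straight to the OS.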

Also, you can usually force any buffered data to be sent on by flushing the 
buffer.
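In Python that looks like:

```python
import sys

# Anything queued in stdout's buffer can be pushed to the OS on demand.
print("partial output", end="")  # no newline, so this may sit in the buffer
sys.stdout.flush()               # force it out to the OS now
```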

On UNIX systems, the _default_ behaviour of the standard output stream depends 
on what it is connected to. Stdout is line buffered when connected to a 
terminal and fully buffered otherwise. This generally makes for nice 
interactive behaviour (you see timely output when working interactively) and 
better overall performance when the output is going to a file or a pipe.
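You can see which case applies with isatty(), and on Python 3.7+ a 
TextIOWrapper stdout can be switched back to line buffering explicitly (a 
sketch, not a recommendation either way):

```python
import sys

# Under a terminal, isatty() is True and stdout is line buffered by
# default; under runit, a pipe or a file it is False and stdout is
# fully buffered by default.
if sys.stdout.isatty():
    print("terminal: line buffered by default")
else:
    # Python 3.7+: opt back in to line buffering on a real stdout.
    if hasattr(sys.stdout, "reconfigure"):
        sys.stdout.reconfigure(line_buffering=True)
    print("pipe/file: fully buffered by default")
```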

So let's look at your script:

>      print ("%x: %x" % (pcf, output))
[...]
>          print ('[ALERT] possible INT loop, disable 10 seconds')

Your programme will be writing into a buffer. Your messages only go out when 
enough have accrued to fill the buffer.

To force the messages to go out in a timely manner you need to flush the buffer.  
You have two choices here: call sys.stdout.flush() or pass "flush=True" to 
the print call, eg:

  print(...., flush=True)

Just looking at your loop I would be inclined to just call flush once at the 
bottom, _before_ the sleep() call:

  sys.stdout.flush()

Your call; the performance difference will be small, so it tends to come down 
to keeping your code readable and maintainable.
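As a rough sketch of that shape (poll_expanders() here is a hypothetical 
stand-in for the real I2C reads in wait4int.py):

```python
import sys
import time

def poll_expanders():
    # Hypothetical stand-in for reading the I/O expanders over I2C.
    return [(0x20, 0xFF)]

def run(iterations, delay=1.0, out=sys.stdout):
    for _ in range(iterations):
        for pcf, value in poll_expanders():
            print("%x: %x" % (pcf, value), file=out)
        # One flush per pass, before the sleep, so buffered lines reach
        # the supervised log promptly without flushing on every print.
        out.flush()
        time.sleep(delay)

run(iterations=1, delay=0)
```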

Cheers,
Cameron Simpson <cs at cskk.id.au>
