[issue13322] buffered read() and write() does not raise BlockingIOError

Antoine Pitrou report at bugs.python.org
Thu Nov 3 16:49:22 CET 2011


Antoine Pitrou <pitrou at free.fr> added the comment:

> Weirdly, it looks like BlockingIO is not raised anywhere in the code
> for the C implementation of io.

That would explain why it isn't raised :)

This is a hairy issue: read(n) is documented as returning either n bytes or nothing. But what if fewer than n bytes are available without blocking? Currently we return a partial read. readline() behaviour is especially problematic:

>>> import os, fcntl
>>> r, w = os.pipe()   # assumed setup: a pipe whose read end is made non-blocking below
>>> fcntl.fcntl(r, fcntl.F_SETFL, os.O_NDELAY)
0
>>> rf = open(r, mode='rb')
>>> os.write(w, b'xy')
2
>>> rf.read(3)
b'xy'
>>> os.write(w, b'xy')
2
>>> rf.readline()
b'xy'

Here readline() returns b'xy' with no trailing newline, so the caller has no way to tell a complete line from a partial one.

We should probably raise BlockingIOError in these cases, but that complicates the implementation quite a bit: where do we buffer the partial data? The internal (fixed size) buffer might not be large enough.
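For illustration only, here is a rough sketch of one possible answer: have the read path hand the partial data back on the exception itself (anticipating the "partial_read" idea below). The helper name and the attribute it sets are hypothetical, not existing io behaviour; "raw" is assumed to be a raw, non-blocking file object, and the exception is spelled with its Python 3.3+ builtin name:

import errno

def read_exactly(raw, n):
    # Hypothetical helper: return exactly n bytes from a non-blocking raw
    # stream, or raise BlockingIOError carrying whatever was read so far.
    chunks = []
    remaining = n
    while remaining:
        chunk = raw.read(remaining)
        if chunk:
            chunks.append(chunk)
            remaining -= len(chunk)
        elif chunk is None:
            # The raw stream would block: instead of silently returning a
            # short result, attach the partial data to the exception so the
            # caller decides where to keep it until more data arrives.
            exc = BlockingIOError(errno.EAGAIN, "read would block")
            exc.partial_read = b''.join(chunks)   # hypothetical attribute
            raise exc
        else:
            break       # EOF before n bytes were seen
    return b''.join(chunks)

The open question stays the same, of course: the partial data has to live somewhere (here, on the exception) until the caller asks for the rest.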

write() is a bit simpler, since BlockingIOError has a "characters_written" attribute which is meant to inform you of the partial success: we can just reuse that. That said, BlockingIOError could grow a "partial_read" attribute containing the read result...
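As an illustration of reusing it, a sketch of what a caller could do, assuming the buffered write path really raises BlockingIOError as documented (which is precisely what this issue questions for the C implementation); the pipe setup mirrors the session above:

import os, fcntl

r, w = os.pipe()
fcntl.fcntl(w, fcntl.F_SETFL, fcntl.fcntl(w, fcntl.F_GETFL) | os.O_NONBLOCK)
wf = open(w, mode='wb')              # BufferedWriter around the non-blocking fd

data = b'x' * (1024 * 1024)          # more than the pipe and the buffer can absorb
try:
    wf.write(data)
except BlockingIOError as e:
    # characters_written reports how much of `data` was accepted before the
    # raw stream would have blocked; the rest must be retried later, e.g.
    # once select()/poll() reports the fd as writable again.
    pending = data[e.characters_written:]
    print("accepted", e.characters_written, "bytes,", len(pending), "still pending")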

Of course, we may also question whether it's useful to use buffered I/O objects around non-blocking file descriptors; if you do non-blocking I/O, you generally want to be in control, which means not having any implicit buffer between you and the OS.
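For comparison, a minimal sketch of that unbuffered approach, again assuming a plain pipe: with buffering=0 the caller gets a raw FileIO whose read() returns None when it would block, and os.read() on the bare fd (Python 3.3+) raises the builtin BlockingIOError, so nothing is ever sitting in a hidden buffer:

import os, fcntl

r, w = os.pipe()
fcntl.fcntl(r, fcntl.F_SETFL, fcntl.fcntl(r, fcntl.F_GETFL) | os.O_NONBLOCK)

raw = open(r, mode='rb', buffering=0)    # raw FileIO: no implicit buffering layer

print(raw.read(3))       # None: the read would block, nothing was swallowed

os.write(w, b'xy')
print(raw.read(3))       # b'xy': exactly what the OS had available, no more

try:
    os.read(r, 3)        # the bare fd raises rather than hinting with None
except BlockingIOError:
    print("would block")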

(this may be a topic for python-dev)

----------
nosy: +benjamin.peterson, neologix, stutzbach
stage:  -> needs patch

_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue13322>
_______________________________________

