Bug 3.11.x behavioral, open file buffers not flushed until file closed.

Cameron Simpson cs at cskk.id.au
Sun Mar 5 18:48:09 EST 2023


On 05Mar2023 09:35, aapost <aapost at idontexist.club> wrote:
>I have run into this a few times and finally reproduced it. Whether 
>it is as expected I am not sure, since it is slightly on the user, but 
>I can think of scenarios where this would be undesirable behavior. 
>This occurs on 3.11.1 and 3.11.2 using Debian 12 (testing), in case 
>the reasoning lingers somewhere else.
>
>If a file is still open, even if all the operations on the file have 
>ceased for a time, the tail of the written data does not get flushed 
>to the file until a close is issued and the file closes cleanly.

Yes, because files are _buffered_ by default. See the `buffering` 
parameter to the open() function in the docs.
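
For example, a couple of common buffering choices (a sketch; "abc" is 
just a placeholder filename):

    # Line-buffered text mode: the buffer is flushed at each newline.
    f = open("abc", "w", buffering=1)

    # Unbuffered I/O: only permitted in binary mode.
    f = open("abc", "wb", buffering=0)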

>2 methods to recreate - 1st, run from the interpreter directly:
>
>f = open("abc", "w")
>for i in range(50000):
>  f.write(str(i) + "\n")
>
>you can cat the file and see it stops at 49626 until you issue an f.close()

Or until you issue an `f.flush()`, which is what flush is for.
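
For example, a variation on the recipe above which flushes after the 
loop (an untested sketch):

    f = open("abc", "w")
    for i in range(50000):
        f.write(str(i) + "\n")
    # Push the buffered tail to the OS without closing the file.
    f.flush()

Note that flush() hands the data to the OS; to force it onto the 
physical disc you would also call os.fsync(f.fileno()).
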
>cat out the file and same thing, stops at 49626. A ctrl-c exit closes 
>the file cleanly, but if the process exits uncleanly, i.e. via a kill 
>command or something else catastrophic, the remaining buffer is lost.

Yes, because of buffering. This is normal and IMO correct. You can turn 
it off, or catch-and-flush these circumstances (SIGKILL excepted, 
because SIGKILL's entire purpose is to be uncatchable).
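
A minimal sketch of the catch-and-flush approach for SIGTERM (the 
filename and handler name here are just illustrative):

    import signal
    import sys

    f = open("abc", "w")

    def flush_and_exit(signum, frame):
        # Flush what we have before exiting; SIGKILL can never get here.
        f.flush()
        f.close()
        sys.exit(1)

    signal.signal(signal.SIGTERM, flush_and_exit)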

>Of course one SHOULD manage the closing of their files and this is 
>partially on the user, but if by design something is hanging on to a 
>file while it is waiting for something, and then a crash occurs, they 
>lose a portion of what was assumed already complete...

f.flush()
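
Or, better, scope the file with a `with` statement, which closes (and 
therefore flushes) it even if an exception unwinds past the block:

    with open("abc", "w") as f:
        for i in range(50000):
            f.write(str(i) + "\n")
    # Leaving the "with" block closes f, flushing the buffer.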

Cheers,
Cameron Simpson <cs at cskk.id.au>

