[ python-Bugs-959379 ] Implicit close() should check for errors

SourceForge.net noreply at sourceforge.net
Sun Nov 7 15:16:36 CET 2004


Bugs item #959379, was opened at 2004-05-24 13:32
Message generated for change (Comment added) made by astrand
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470

Category: Python Interpreter Core
Group: Python 2.2.3
>Status: Closed
>Resolution: Fixed
Priority: 5
Submitted By: Peter Åstrand (astrand)
Assigned to: Nobody/Anonymous (nobody)
Summary: Implicit close() should check for errors

Initial Comment:
As we all know, the file object's destructor invokes the
close() method automatically. But most people are not
aware that errors from close() are silently ignored.
This can lead to silent data loss. Consider this example:

$ python -c 'open("foo", "w").write("aaa")'

No traceback or warning message is printed, but the
file ends up zero bytes long, because the close() system
call returned EDQUOT. 

Another similar example is:

$ python -c 'f=open("foo", "w"); f.write("aaa")'

When using an explicit close(), you get a traceback:

$ python -c 'f=open("foo", "w"); f.write("aaa"); f.close()'
Traceback (most recent call last):
  File "<string>", line 1, in ?
IOError: [Errno 122] Disk quota exceeded

I'm aware that exceptions cannot be raised in
destructors, but wouldn't it be possible to at least
print a warning message?
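
For reference, a minimal sketch of the explicit-close
workaround implied above (write_foo is just an illustrative
name; try/finally stands in for whatever cleanup style is used):

def write_foo(data):
    # Close explicitly so that an error from the underlying
    # close() system call (e.g. EDQUOT, ENOSPC) raises IOError
    # instead of being silently swallowed by the destructor.
    f = open("foo", "w")
    try:
        f.write(data)
    finally:
        f.close()

write_foo("aaa")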


----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2004-11-07 15:16

Message:
Logged In: YES 
user_id=344921

Fixed in revision 2.193 of fileobject.c. 

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2004-06-01 20:23

Message:
Logged In: YES 
user_id=31435

I think the issue here is mainly that an explicit file.close() 
maps to fileobject.c's file_close(), which checks the return 
value of the underlying C-level close call and raises an 
exception (or not) as appropriate; but file_dealloc(), which is 
called as part of recycling garbage file objects, does not look 
at the return value of the underlying C-level close call it 
makes (and, of course, doesn't then raise any exceptions based 
on that return value either).
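
A rough Python analogy of that asymmetry (SwallowingFile and
its forced IOError are invented for illustration; the real
logic lives in C, in fileobject.c):

class SwallowingFile:
    def _close(self):
        # Stand-in for the C-level close(); pretend it fails.
        raise IOError(122, "Disk quota exceeded")

    def close(self):
        # Like file_close(): the error propagates to the caller.
        self._close()

    def __del__(self):
        # Like file_dealloc(): the outcome of the close is ignored.
        try:
            self._close()
        except IOError:
            pass

f = SwallowingFile()
del f                        # silent; the close error is lost
SwallowingFile().close()     # raises IOError with a traceback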

----------------------------------------------------------------------

Comment By: Peter Åstrand (astrand)
Date: 2004-06-01 20:16

Message:
Logged In: YES 
user_id=344921

It has nothing to do with interpreter shutdown; the same
thing happens in long-lived processes, when the file object
goes out of scope at the end of a function. For example, the
code below fails silently:

import time

def foo():
    f = open("foo", "w")
    f.write("bar")

foo()
time.sleep(1000)

----------------------------------------------------------------------

Comment By: Terry J. Reedy (tjreedy)
Date: 2004-06-01 19:53

Message:
Logged In: YES 
user_id=593130

I think there are two separate behavior issues: implicit file 
close and interpreter shutdown.  What happens with
$ python -c 'f=open("foo", "w"); f.write("aaa"); del f'
which forces the implicit close *before* shutdown?

As I recall, the ref manual says little about the shutdown 
process, which I believe is necessarily implementation/system 
dependent.  There certainly is little that can be guaranteed 
once the interpreter is partly deconstructed itself.

> I'm aware that exceptions cannot be raised in
> destructors, but wouldn't it be possible to at least
> print a warning message?

Is there already a runtime warning mechanism, or are you 
proposing that one be added?
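
(A runtime warning mechanism does already exist in the warnings
module; as a minimal sketch of what such a non-fatal report
could look like, with report_lost_close_error and its wording
invented purely for illustration:)

import warnings

def report_lost_close_error(filename, err, msg):
    # Hypothetical helper: emit a warning instead of raising,
    # since exceptions cannot propagate out of a destructor.
    warnings.warn("close() failed for %r: [Errno %d] %s"
                  % (filename, err, msg), RuntimeWarning)

report_lost_close_error("foo", 122, "Disk quota exceeded")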




----------------------------------------------------------------------

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=959379&group_id=5470


More information about the Python-bugs-list mailing list