Python 2.0

Yukihiro Matsumoto matz at netlab.co.jp
Wed Jun 2 00:01:02 EDT 1999


Paul Prescod <paul at prescod.net> writes:

|> I said nothing about the destructor invocation predictability.
|
|I consider that a part of the definition of destructor.

Well... is it?  That's fine, anyway.  Now I understand you.

|> I don't understand, and am really interested in, why you guys need
|> to predict destructor invocation so much, but it's another story.
|
|What if I am holding GUI "device context" objects and the GUI has a
|limited number of them. I don't think that there is a way to tell the GC
|that there is a limited number of them. It doesn't know that device
|contexts are a limited resource, just as memory is.

Of course you can tell the GC about the limitation, if you can detect
exhaustion of the resource somehow.  And you can detect exhaustion for
most of the resources on the machine.

e.g.

  fd = open(path, mode);
  if (fd < 0 && (errno == EMFILE || errno == ENFILE)) {
    /* out of file descriptors: run the GC and retry once */
    call_gc();
    fd = open(path, mode);
  }
  if (fd < 0) {
    /* still failing after a collection: give up */
    raise("can't help it");
  }
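The same idea can be sketched at the Python level (just an
illustration; it assumes a gc.collect() call that can reclaim
unreachable objects still holding descriptors):

  import errno
  import gc

  def open_with_retry(path, mode):
      # First attempt; if we hit the per-process (EMFILE) or system-wide
      # (ENFILE) descriptor limit, run the collector and try once more.
      try:
          return open(path, mode)
      except (IOError, OSError) as e:
          if e.errno not in (errno.EMFILE, errno.ENFILE):
              raise
          gc.collect()
          return open(path, mode)   # if this fails too, "can't help it"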

Ruby uses this scheme for most resources, such as file descriptors and
sockets.  So you can write something like:

  txt = open('blahbalh.txt','r').read()

close() will be called sometime later (not immediately), but it will be
called for sure.  If it is not called, that's a bug in the interpreter.
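In Python terms, the effect is roughly what a finalizer gives you.  A
hypothetical sketch (ManagedFile is made up just for illustration):

  class ManagedFile:
      # Hypothetical wrapper: the descriptor is released whenever the
      # object is finally reclaimed.  Under CPython's ref counting that
      # happens right after the expression below; under a tracing GC it
      # happens at some later collection -- later, but "for sure".
      def __init__(self, path, mode='r'):
          self.f = open(path, mode)
      def read(self):
          return self.f.read()
      def __del__(self):
          self.f.close()

  txt = ManagedFile('blahbalh.txt').read()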

I don't mean that real GC does everything ref counting does.  For
example, with real GC:

  dialog = GUItoolkit.dialog(params)
  ... do something with the dialog ...
  dialog = None

will close the dialog at an unpredictable time, which is not
desirable for everyone.  But in this case I prefer explicit
termination of the dialog:

  dialog = GUItoolkit.dialog(params)
  ... do something with the dialog ...
  dialog.close()
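In Python, the same explicit termination can be made robust with
try/finally, so the dialog is released at a known point no matter
what the GC does (GUItoolkit is just the stand-in name from above):

  dialog = GUItoolkit.dialog(params)
  try:
      pass  # ... do something with the dialog ...
  finally:
      dialog.close()   # deterministic: the GUI resource is freed here,
                       # even if an exception was raised above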

|What GC algorithm does Ruby use and how much overhead does it add?

Ruby uses a conservative mark-and-sweep GC.  It never moves objects.
The overhead depends on the application, but memory-management
overhead is normally less than 5% of execution time (Ruby often runs
faster than Python).  And as I said before, the pause is not
noticeable in interactive processes.

|> Second, do you think that optimization for basic users compensates for
|> the risk of potential memory leaks, from cyclic structures or a missing
|> DECREF in extension modules?  Do you mean it's easy for experts to find
|> cyclic structures and cut their references so they get destroyed?  For
|> me, it's not.  I hate that.  Is this because I'm not a Python expert?
|
|It's pretty much a toss-up for me. In general I prefer optimizations that
|make the language easy for new users to those that help complex,
|long-running programs, but full garbage collection would make complex
|programs a fair bit easier.

So you think ref counting is nice for new users, perhaps because of
its predictability.  I don't think so, because even new users can
create cyclic references.  But it's a toss-up indeed.
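For example, two objects that refer to each other are already out of
reach of pure ref counting:

  class Node:
      pass

  a = Node()
  b = Node()
  a.partner = b
  b.partner = a    # a and b now form a reference cycle
  del a, b         # neither count ever drops to zero, so with ref
                   # counting alone both objects (and anything they
                   # hold) are never reclaimed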

						matz.



