Queue cleanup

Steven D'Aprano steve at REMOVE-THIS-cybersource.com.au
Sat Aug 28 06:40:37 EDT 2010


On Sat, 28 Aug 2010 00:33:10 -0700, Paul Rubin wrote:

> Dennis Lee Bieber <wlfraed at ix.netcom.com> writes:
>> 	The nice thing about it [reference counting] is that it is sort
>> of deterministic -- one can examine code and determine that an object
>> is collected at some point in the execution...
>> 	Heap marking, OTOH, tends to run at indeterminate times, which
>> could have an impact if one needs predictable response timings
> 
> Reference counting has the same problem.  

In theory, yes, but in practice ref counting tends to spread out the 
performance impact more smoothly. There are exceptions, such as the one 
you mention below, but as a general rule ref counting isn't prone to 
the "embarrassing pauses" that tracing garbage collectors tend to 
exhibit.
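
To make the determinism concrete, here's a minimal sketch (this assumes 
CPython and an object that isn't caught up in a reference cycle):

class Tracked:
    # In CPython, __del__ fires as soon as the refcount hits zero.
    def __del__(self):
        print("collected")

obj = Tracked()
del obj    # prints "collected" right here, at a point you can see in
           # the source, not at some later collector sweep

A pure tracing collector would only reclaim the object whenever its next 
run happened to come around.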


> If you drop the last reference
> to a complex structure, it could take quite a long time to free all the
> components.  By contrast there are provably real-time tracing gc
> schemes, including some parallelizeable ones.

I could be wrong, but how can they not be subject to the same performance 
issue? If you have twenty thousand components that all have to be freed, 
they all have to be freed whether you do it when the last reference is 
cleared, or six seconds later when the gc does a sweep.
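
As a rough illustration (the sizes and timing here are invented and will 
vary from machine to machine):

import time

big = [[i] for i in range(1000000)]    # a couple of million small objects

start = time.perf_counter()
del big    # the last reference disappears here...
print("freeing took %.3f s" % (time.perf_counter() - start))
# ...so the whole structure is torn down on this line, and the pause
# lands here rather than at a later sweep.

Either way the work gets done; ref counting just pins it to a known spot 
in the code.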


> One reason CPython still
> can't run threads on parallel cores is it would have to lock the
> reference counts every time they're updated, and the slowdown from that
> is terrible.

On the other hand, the reason that CPython still has reference counting 
is that the alternatives tried so far have been unacceptably slow for 
non-threaded code.
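
For a sense of how often those counts actually get touched (and hence 
how often they would need locking), sys.getrefcount makes it visible. 
Note that it reports one extra reference for its own argument:

import sys

x = object()
print(sys.getrefcount(x))    # 2: x itself plus getrefcount's argument

y = x                        # every new binding is an INCREF
print(sys.getrefcount(x))    # 3

del y                        # every unbinding is a DECREF
print(sys.getrefcount(x))    # back to 2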




-- 
Steven


