Question About When Objects Are Destroyed (continued)

Tim Daneliuk info at tundraware.com
Sat Aug 5 20:14:13 EDT 2017


On 08/05/2017 05:58 PM, Chris Angelico wrote:
> On Sun, Aug 6, 2017 at 7:32 AM, Tim Daneliuk <info at tundraware.com> wrote:
>> On 08/05/2017 03:21 PM, Chris Angelico wrote:
>>> After a 'with' block,
>>> the object *still exists*, but it has been "exited" in some way
>>> (usually by closing/releasing an underlying resource).
>>
>> The containing object exists, but the things that the closing
>> logic explicitly released do not.  In some sense, a context
>> manager acts like a destructor, just not on the object it's
>> associated with.
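
A minimal sketch of that distinction, using a plain file object (the
filename is just a placeholder):

    with open("example.txt", "w") as f:
        f.write("hello")

    # The name f is still bound and the file object still exists,
    # but __exit__ has released the underlying OS file handle.
    print(f.closed)      # True
    # f.write("more")    # would raise ValueError: I/O operation on closed file
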
>>
>>> If there's a resource you need to clean up, you clean that up
>>> explicitly,
>>
>> Such "resources" *are* objects themselves notionally.  You are exactly
>> killing those objects to free the underlying resources they consume.
> 
> Utterly irrelevant. The original post was about the string in memory.
> An "open file" is no more an object than the part of a floating point
> number after the decimal is.
> 
>>> so the object's lifetime shouldn't matter to you.
>>
>> I disagree with this most strongly.  That's only true when the machine
>> resources being consumed by your Python object are small in size.  But
>> when you're dynamically cranking out millions of objects of relatively
>> short lifetime, you can easily bump into the real world limits of
>> practical machinery.  "Wait until the reference count sweep gets rid of
>> it" only works when you have plenty of room to squander.
>>
>> Also, waiting for the reference count/gc to do its thing is
>> nondeterministic in time.  It's going to happen sooner or later, but not
>> at a predictable, repeatable interval.  If you want to write large,
>> performant code, you don't want this kind of variability.  While I
>> realize that we're not typically writing embedded realtime drivers in
>> Python, the principle remains - where possible make things as
>> predictable and repeatable as you can.
>>
>> For reasons I am not free to discuss here, I can say with some assurance
>> that there are real world applications where managing Python object
>> lifetimes is very much indicated.
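
A small, hypothetical illustration of explicit release versus leaving it
to the interpreter (the Resource class and its methods are made up for
the example):

    import gc

    class Resource:
        def __init__(self, name):
            self.name = name
        def close(self):
            # Deterministic: the caller decides exactly when this runs.
            print(self.name, "released explicitly")
        def __del__(self):
            # Runs whenever the object is finally reclaimed -- immediately
            # under CPython refcounting, later (or at exit) elsewhere.
            print(self.name, "reclaimed by the interpreter")

    r = Resource("a")
    r.close()      # explicit, predictable cleanup
    del r          # CPython frees the object here: the refcount hits zero

    s = Resource("b")
    s = None       # the last reference drops; CPython reclaims it immediately
    gc.collect()   # only matters for reference cycles, not for cases like this
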
> 
> Very VERY few. How often do you actually care about the lifetime of a
> specific Python object, and not (say) about the return of a block of
> memory to the OS? Memory in CPython is allocated in pages, and those
> pages are then suballocated into objects (or other uses). Sometimes
> you care about that block going back to the OS; other times, all you
> care about is that a subsequent allocation won't require more memory
> (which can be handled with free lists). But most of the time, you
> don't need to think about either, because the language *does the right
> thing*. The nondeterminism of the GC is irrelevant to most Python
> programs; in CPython, that GC sweep applies only to reference *cycles*
> (and to weak references, I think??), so unless you frequently create
> those, you shouldn't have to care.
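
A sketch of that point, assuming CPython (other implementations defer
finalization differently):

    import gc

    class Node:
        def __del__(self):
            print("Node finalized")

    # No cycle: CPython frees the object as soon as the last reference goes.
    n = Node()
    del n                    # "Node finalized" prints right here

    # A reference cycle: refcounts never reach zero on their own, so these
    # objects wait for the cyclic collector (Python 3 collects cycles even
    # when __del__ is defined; Python 2 did not).
    a, b = Node(), Node()
    a.other, b.other = b, a
    del a, b                 # nothing printed yet
    gc.collect()             # the cycle detector frees both Nodes now
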
> 
> I've written plenty of large programs in high level languages. Some of
> them in Python, some in Pike (which has the same refcount semantics),
> and some in REXX (which has very different technical semantics but
> comes to the same thing). I've had those programs running for months
> on end; in more than one instance, I've had a program running for over
> a year (over two years, even) without restarting the process or
> anything. Aside from taking care not to create cyclic references, I
> have not needed to care about when the garbage collector runs, with
> the sole exception of an instance where I built my own system on top
> of the base GC (using weak references and an autoloader to emulate a
> lookup table larger than memory). So yes, I maintain that most of the
> time, object lifetimes *should not matter* to a Python programmer.
> Python is not C, and you shouldn't treat it as C.
> 
> ChrisA
> 


OK, noted, and thanks for the clear explanation.


