[Python-ideas] Python-ideas Digest, Vol 90, Issue 30

Terry Reedy tjreedy at udel.edu
Fri May 23 04:07:58 CEST 2014


On 5/22/2014 11:40 AM, M.-A. Lemburg wrote:
> On 22.05.2014 17:32, Ned Batchelder wrote:
>>
>> The whole point of this proposal is to recognize that there are times (debugging, coverage
>> measurement) when optimizations are harmful, and to avoid them.
>
> +1
>
> It's regular practice in other languages to disable optimizations
> when debugging code. I don't see why Python should be different in this
> respect.
>
> Debuggers, testing, coverage and other such tools should be able to
> invoke a Python runtime mode that lets the compiler work strictly
> by the book, without applying any kind of optimization.
>
> This used to be the default in Python,

I believe that Python has always had an 'as if' rule that allows more or 
less 'hidden' optimizations, as long as the net effect of a statement is 
as defined.

1. By the book, "a,b = b,a" means: create a tuple from b,a, unpack its 
contents to a and b, and delete the reference to the tuple. An obvious 
optimization is to not create the tuple. As I remember, this was once 
tried out, before tuple unpacking was generalized to iterable unpacking. 
I don't know whether CPython was ever released with that optimization, 
or whether other implementations have used or still use it. By the 'as 
if' rule, it does not matter, even though an allocation tracer (such as 
tracemalloc, added in 3.4) might detect the non-allocation.
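For illustration (a sketch, not from the original discussion): on the 
recent CPythons I am aware of, disassembling the statement shows no 
tuple at all; the exact opcodes vary by version.

import dis
dis.dis("a, b = b, a")
# Typical output: LOAD_NAME b, LOAD_NAME a, a swap opcode (ROT_TWO on
# older 3.x), STORE_NAME a, STORE_NAME b -- and no BUILD_TUPLE or
# UNPACK_SEQUENCE anywhere.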

2. The manual says
'''
@f1(arg)
@f2
def func(): pass

is equivalent to

def func(): pass
func = f1(arg)(f2(func))
'''
The equivalent is 'as if', in net effect, not in the detailed process. 
CPython actually executes (or at least did at one time)

def <internal reference>(): pass
func = f1(arg)(f2(<internal reference>))

Ignore f1. The difference can be detected when f2 is called, by examining 
the appropriate namespace from within f2. When someone filed an issue about 
the 'bug' of 'func' never being bound to the unwrapped function object, 
Guido said that he wanted to change neither the doc nor the 
implementation. (Sorry, I cannot find the issue.)
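Here is a small, purely illustrative way to observe the difference from 
inside a decorator, assuming a fresh module in which 'func' has not been 
bound before:

def f2(func):
    # Under the documented 'equivalent' lines, 'func' would already be
    # bound to the undecorated function when f2 runs; with the actual
    # decorator statement it is not bound until the decorators return.
    print('func bound at decoration time?', 'func' in func.__globals__)
    return func

@f2
def func(): pass

# Prints False under the @ syntax; executing the documented equivalent
# literally ("def func(): pass" then "func = f2(func)") prints True.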

3. "a + b" is *usually* equivalent to "a.__class__.__add__(b)" or 
possibly "b.__class__.__radd__(a)". However, my understanding is that if 
a and b are ints, a 'fast path' optimization is applied that bypasses 
the int.__add slot wrapper. Is so, a call tracer could notice the 
difference and if unaware of such optimizations, falsely report a problem.
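The usual dispatch itself is easy to see with made-up classes (my 
example, not anything from the stdlib; built-in int addition is handled 
at the C level and never reaches these Python-level methods as traced 
calls):

class A:
    def __add__(self, other):
        print('A.__add__')
        return NotImplemented      # ask Python to try the reflected method

class B:
    def __radd__(self, other):
        print('B.__radd__')
        return 'handled by B'

print(A() + B())
# A.__add__ is tried first and returns NotImplemented, so B.__radd__ is
# called and supplies the result.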

4. Some Python implementations delay object destruction. I suspect that 
some (many?) do not really destroy objects (zero out the memory block).
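For example, on a current CPython (a sketch; other implementations such 
as PyPy may delay finalization even further):

import gc

class Noisy:
    def __del__(self):
        print('finalized')

x = Noisy()
del x              # reference counting finalizes this immediately on CPython

a, b = Noisy(), Noisy()
a.partner, b.partner = b, a   # create a reference cycle
del a, b           # nothing printed yet; the cycle keeps both alive
gc.collect()       # the cyclic collector finalizes both now (3.4+, PEP 442)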

> but there's definitely a need for being able to run Python in
> a debugger without having it skip perfectly valid code lines
> (even if they are no-ops).

This is a different issue from 'disable the peephole optimizer'.

-- 
Terry Jan Reedy


