[Python-ideas] Python-ideas Digest, Vol 90, Issue 30

Eric Snow ericsnowcurrently at gmail.com
Thu May 22 02:17:18 CEST 2014


On Wed, May 21, 2014 at 4:51 PM, Ned Batchelder <ned at nedbatchelder.com> wrote:
> On 5/21/14 3:44 PM, Raymond Hettinger wrote:
>> I think this opens a can of worms that is better left closed.
>>
>> * We will have to start running tests both with and without the switch
>> turned on for example (because you're exposing yet another way to
>> run Python with different code).
>
> Yes, this could mean an increased testing burden.  But that scales
> horizontally, and will not require a large amount of engineering work.
> Besides, what better way to test the optimizer?

I buy that to an extent.  It would definitely be helpful when adding
or changing optimizations, particularly for identifying the impact of
changes on both semantics and performance.  However, work on
optimizations isn't too common.  Aside from direct work on
optimizations, optimization-free testing could be useful for
identifying optimizer-related bugs (which I expect are quite rare).
However, that doesn't add much benefit over a normal buildbot run,
considering how few changes each run is testing.

Having said all that, I think it would still be worth testing with and
without optimizations.  Unless the optimizations are
platform-specific, would we need more than one buildbot running with
optimizations turned off?

>> * Over time, I expect that some of the functionality of the peepholer
>> is going to be moved upstream into AST transformations, and you will
>> have even less ability to switch something on and off.
>
> I'm perfectly happy to remove the word "peephole" from the feature.  If we
> expect the set of optimizations to grow in the future, then we can expect
> that more cases of code analysis will be misled by optimizations.  All the
> more reason to establish a way now that will disable all optimizations.

While the use-case is very specific, I think it's a valid motivator
for a means of disabling all optimizations, particularly if, as you've
indicated, the change can be isolated to a very focused location.
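To make the coverage-tool pain point concrete, here is a sketch of the
kind of mis-diagnosis being discussed.  The helper names
(`collect_lines`, `sample`) are mine, and the behavior noted in the
comment is version-dependent: on CPython releases of this era the
peephole optimizer's jump-to-jump elimination could remove a
`continue` from the traced lines entirely, so coverage tools reported
it as never executed.

```python
import sys

def collect_lines(func):
    """Run func under a trace and return the set of relative line
    numbers the interpreter reports as executed."""
    lines = set()
    code = func.__code__

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            lines.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func()
    finally:
        sys.settrace(None)
    return lines

def sample():              # relative line 0
    for i in range(3):     # 1
        if i == 1:         # 2
            continue       # 3: older peepholers folded this jump away,
                           #    so tracers never reported this line
    return i               # 5

print(sorted(collect_lines(sample)))
```

Whether relative line 3 shows up in the output is exactly the question:
a user reading a coverage report has no way to tell "never ran" apart
from "optimized out".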

The big question then is the impact on implementing optimizations (in
general) in the future.  There has been talk of AST-based
optimizations.  Raymond indicates that this makes it harder to
conditionally optimize.  So how much harder would it make this future
optimization work?  Is that a premature optimization? <wink>

>> * I sympathize with "there is an irritating dimple in coverage.py"
>> but that hasn't actually impaired its usability beyond creating a
>> curiosity.  Using that a reason to add a new CPython-only
>> command-line switch seems like having the tail wag the dog.
>
> I don't think you should dismiss real users' concerns as a curiosity.  We
> already have -X as a way to provide implementation-specific switches, so I'm
> not sure why the CPython-only nature of this is an issue.

If optimizations can break coverage tools when run on other Python
implementations, does that make a case for a more general command-line
option?  Or is it just a matter of CPython's optimizations behaving
badly, by breaking some perceived invariants that coverage tools rely
on, while other implementations behave correctly?

If it's the latter, then perhaps Python needs a few tests added to the
test suite that verify the optimizer doesn't break those invariants.
Such tests would benefit all implementations.  However, even if that's
the right approach, if the burden of fixing things is much greater
than the burden of adding a no-optimizations option, it may make more
sense to just add the option and move on.  It's all about who has the
time to do something about it.  (And of course "Now is better than
never.  Although never is often better than *right* now.")
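Such an invariant test could be sketched as follows.  The helper name
and the specific invariant chosen here ("the `continue` line survives
optimization and stays in the code object's line table, so a tracer
can ever report it") are my illustration, not an existing test; it is
precisely the sort of check that the CPython of this era would have
failed.

```python
import dis

def executable_lines(func):
    """Relative line numbers the compiler kept in the code object's
    line table -- the only lines a tracing tool can ever observe."""
    code = func.__code__
    return sorted({ln - code.co_firstlineno
                   for _, ln in dis.findlinestarts(code)
                   if ln is not None})

def loop():                # relative line 0
    total = 0              # 1
    for i in range(3):     # 2
        if i == 1:         # 3
            continue       # 4: the line the optimizer must preserve
        total += i         # 5
    return total           # 6

relative = executable_lines(loop)
print(relative)
```

The test would simply assert that relative line 4 is present,
regardless of which optimization passes ran.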

Of course, if the coverage tools rely on CPython implementation
details then an implementation-specific -X option makes even more
sense.

FWIW, regardless of the scenario, a -X option makes practical sense in
that it would fairly immediately relieve the (infrequent but onerous)
pain point encountered in coverage tools.  However, keep in mind that
such an option would not be backported and would not be released until
3.5 (in late 2015).  So I suppose it would be more about relieving
future pain than helping current coverage tool users.
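For reference, -X switches surface to Python code through the
CPython-specific sys._xoptions dict, so a hypothetical no-optimization
switch could be inspected like this (the name "nooptimize" is purely
illustrative -- no such flag exists):

```python
import sys

# -X flags land in this dict: "-X name" maps to {'name': True},
# and "-X name=value" maps to {'name': 'value'}.
xopts = getattr(sys, "_xoptions", {})

# "nooptimize" is a hypothetical switch used for illustration only.
if xopts.get("nooptimize"):
    print("optimizations would be disabled")
else:
    print("running with default optimizations")
```

Since sys._xoptions is an implementation detail, other Pythons are
free to ignore or reinterpret such a flag, which is the point of -X.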

>
>> * Ideally, the peepholer should be thought of as part of the code
>> generation.  As compilation improves over time, it should start
>> to generate the same code as we're getting now.  It probably
>> isn't wise to expose the implementation detail that the constant
>> folding and jump tweaks are done in a separate second pass.
>
> I'm happy to remove the word "peephole".  I think a way to disable
> optimization is useful.  I've heard the concern from a number of coverage.py
> users.  If, as we all think, optimizations will expand in CPython, then the
> number of mis-diagnosed code problems will grow.

The comparison made elsewhere with the -O0 option in other compilers
is also apt here.

-eric

