New PEP: The directive statement
Tim Peters
tim.one at home.com
Sat Mar 24 17:46:55 EST 2001
[Martin von Loewis]
> ...
> My point is that even though you can nuke all obsolete future
> statements, it might not be desirable to do so - since the module then
> becomes silently incorrect in an older release, instead of outright
> failing.
Using the tool isn't required. It's a tool I would write because it's a tool
*I* would use (I don't care about clinging to old releases in my own code,
and indeed am quite keen to get rid of then-useless cruft ASAP). YMMV.
> ...
> I was asking you to predict the future: Do you think that people
> will prefer to write
>
> if __future__.nested_scopes.getOptionalRelease() >= sys.version_info:
>
> instead of
>
> if sys.version_info < (2,1):
As explained before, hardcoded version numbers can't be made to work *if*
you're interested in writing code that depends on a change's MandatoryRelease
(which is just a best guess under any release in which sys.version_info is
less than that).
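A minimal sketch of the kind of check I mean, using the _Feature accessors
the PEP describes (this assumes a release where __future__.nested_scopes
exists, i.e. 2.1 or later):

```python
import sys
import __future__

# getMandatoryRelease() returns a sys.version_info-style 5-tuple, so it
# compares directly against sys.version_info -- no hard-coded guess about
# when the feature becomes the rule rather than the exception.
mandatory = __future__.nested_scopes.getMandatoryRelease()
nested_scopes_always_on = sys.version_info >= mandatory
```

The point is that the feature object carries its own release schedule, so
the code stays correct even if the best-guess MandatoryRelease changes
between releases.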
> ...
> I have really no problem with the format of the _Feature members; I
> very much like using tuples for versioning, since it is quite elegant.
> My question is whether you think that people will use those computed
> version numbers instead of hard-coded ones, when comparing to
> sys.version_info.
I *expect* that-- just like now --they'll use hard-coded ones, until they
figure out that they can't get that to work as intended. Hard-coded version
numbers are always a bad idea anyway -- even if "they worked", the *reader*
of
    if sys.version_info < (2,1):
hasn't a clue in hell what the true intent of that code is. If the true
intent is to key off the presence or absence of nested_scopes semantics,
writing a test that *names* nested_scopes is clearly much better.
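To make the readability point concrete, here's the same test written both
ways (again assuming a release that has __future__.nested_scopes):

```python
import sys
import __future__

# Hard-coded: happens to be right for this one feature, but the reader
# has no idea *why* 2.1 matters.
pre_nested_scopes = sys.version_info < (2, 1)

# Named: the intent -- "are nested_scopes semantics available yet?" --
# is right there in the code.
optional = __future__.nested_scopes.getOptionalRelease()
pre_nested_scopes_named = sys.version_info < optional
```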
I've never used a hard-coded version number in my entire Python life. Up
until now, I've been happy to do things like:
    import tokenize
    if hasattr(tokenize, "NL"):
        # all right! I can make my parsing much faster by using NL.
        ...
    else:
        # bummer! It's a crufty old version of tokenize.py w/o NL.
But, in the presence of *incompatible* language changes, it's not at all
clear that a "just try it and see whether it works" runtime test is always
possible. For example, if the incompatible change is the introduction of a
new keyword (like, say, "directive" <wink>), then code attempting to use the
new keyword will likely blow up with a SyntaxError under earlier releases,
and SyntaxError can't be caught at compile-time (at least not without a layer
of obfuscating indirection). So being able to name the changed feature
explicitly wins big on that count too.
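For completeness, here's a sketch of that layer of obfuscating
indirection: hand the suspect source to compile() at runtime and catch
the SyntaxError there (the "directive foo" probe is hypothetical -- it's
just something guaranteed to be a syntax error until such a keyword
exists):

```python
def syntax_supported(snippet):
    # compile() raises SyntaxError at runtime, where we *can* catch it,
    # unlike a SyntaxError triggered while compiling the module itself.
    try:
        compile(snippet, "<probe>", "exec")
    except SyntaxError:
        return False
    return True

syntax_supported("x = 1")          # valid under any release
syntax_supported("directive foo")  # fails unless the new keyword exists
```

It works, but compared to naming the feature explicitly it's exactly the
kind of indirection nobody should have to write.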