> Consistency has many paybacks. Reading is one. Writing is another.
> Learning is a third.
And adding stuff to a language has many costs -- almost all issues are
tradeoffs rather than absolutes. Most people who've tried Python and
spoken up here have given it high marks for readability, writability, and
a shallow learning curve. While those could be improved, there's been
little expressed demand for that (let alone agreement on what would
constitute an improvement).
> > naming "throw-away" intermediate temps is a very effective way to make
> > complex expressions more readily understandable ...
> and a way to make simple statement complex!
Touche! Good one. But I did say I _preferred_ the one-liner <wink>.
> > ... and the "inefficiency" is an illusion
> Measure before you speak: it is not an illusion.
I have, many times. In the program fragment we were discussing, it's an
illusion. But by "illusion" I mean not "NO difference" but "not enough
difference for most people to care or even notice". An extra line or two
+ local assignment in Python is dirt cheap _compared to_ the cost of what
typical keys() and sort() do, + the overhead for method lookup and
invocation. If it's under a percent or two, few people will (or should!)
give a rip.
> The following two functions have significantly different execution
> times with the current implementation of python:
> def inter():
>     a = 1     # assign to an intermediate value
>     b = a
>
> def direc():
>     b = 1
Sure! If we call the time for a no-argument function invocation F, and
the time for a local assignment A (not a perfect model, but a decent
first (& for me, last <wink>) stab), it's comparing F+2A to F+A. Unless
F/A is amazingly large on your system, a significant difference is
expected a priori.
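To make the F+2A vs. F+A comparison concrete, here's a sketch using
Python's timeit module (a modern convenience -- the thread itself used
os.times; the function bodies match the quoted fragment):

```python
import timeit

def inter():
    a = 1   # assign to an intermediate temp
    b = a

def direc():
    b = 1

N = 1_000_000
# Each total is roughly N*(F + k*A): F = call overhead, A = one local
# assignment, k = 2 for inter and 1 for direc.
t_inter = timeit.timeit(inter, number=N)
t_direc = timeit.timeit(direc, number=N)
print("inter: %.3fs  direc: %.3fs" % (t_inter, t_direc))
```

The absolute numbers are machine-dependent; only the ratio matters, and
even that is noisy unless the box is quiet.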
In the context of the original example, the interpreter overhead for
executing an additional trivial line or two just isn't likely to be
significant compared to the time it takes to do typical keys() and sort()
(much larger than typical F).
About that "on your system" a bit above: I've been in the compiler
optimization business for 15 years, and a depressing amount of wrestling
with severe timing headaches comes with that territory. Unless you're
running on a single-CPU, real-memory, dedicated box, proper timing is a
very slippery undertaking. Worse, if you time Python on a variety of
architectures, you'll discover that F/A varies significantly across them.
So nobody should take anybody's Python timing results as more than
_qualitative_ hints about what's _not unlikely_ to be true on their own
machines.
> [a nice timing skeleton, plus tricks of the timing trade, deleted]
If people want to try this (go ahead -- it's fun <smile>!), add
ostimes = os.times
at the start (else you'll get a NameError), and change

    delta = delta + t1 - t2

to

    delta = delta + t2 - t1
so that "delta" accumulates the difference with the sign bit having the
same meaning it had in the earlier "print" (i.e., positive means 'inter'
was faster, negative means 'direc' was faster).
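The skeleton itself was deleted from the quote, but with the two fixes
applied it would have looked roughly like this (a hypothetical
reconstruction -- the repetition count and loop shapes are guesses, only
the ostimes binding and the sign convention come from the text above):

```python
import os

ostimes = os.times   # bind the name up front, else NameError

def inter():
    a = 1   # named intermediate temp
    b = a

def direc():
    b = 1

def inter_test(n):
    # Time n calls of each function and return the signed difference:
    # positive means 'inter' was faster, negative means 'direc' was.
    delta = 0.0
    t0 = ostimes()[0]
    i = n
    while i > 0:
        inter()
        i = i - 1
    t1 = ostimes()[0] - t0
    t0 = ostimes()[0]
    i = n
    while i > 0:
        direc()
        i = i - 1
    t2 = ostimes()[0] - t0
    delta = delta + t2 - t1
    return delta

print(inter_test(100000))
```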
Also, e.g., try defining a global dict 'd' and replacing the timed
functions like so:
    def inter():
        keys = d.keys()       # named throw-away temp
        a = len(keys)

    def direc():
        a = len(d.keys())     # anonymous temp
Experiment to see how many elements need to be in d before you can no
longer detect any significant difference in timing. On an unloaded (but
not dedicated) multiprocessor 33 MHz SPARC running Sun OS/MP 4.1B here,
the difference dropped to the 1% range by the time d had 4 entries. At 5
entries, noise overwhelmed the difference ('twas a crap shoot whether
'inter' or 'direc' came out "faster" from one run to the next).
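In today's Python, d.keys() returns a view rather than a list, so a
faithful recasting of the experiment wraps it in list(); a sketch (the
dictionary sizes and repetition count here are arbitrary choices):

```python
import timeit

def pct_diff(size, number=100_000):
    # Build a dict of the given size and time the two variants.
    d = {i: i for i in range(size)}
    def inter():
        keys = list(d.keys())    # named throw-away temp
        a = len(keys)
    def direc():
        a = len(list(d.keys()))  # anonymous temp
    t1 = timeit.timeit(inter, number=number)
    t2 = timeit.timeit(direc, number=number)
    return 100.0 * (t1 - t2) / t2   # extra cost of the named temp, in %

for size in (0, 1, 2, 4, 8, 16):
    print(size, "%+.1f%%" % pct_diff(size))
```

On any given machine, watch for the size at which the printed percentage
sinks into the run-to-run noise.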
One other trick of the trade: pass a big enough number to inter_test so
that each loop takes "a long time" compared to the resolution of your
system's os.times() call. E.g., on this system os.times() is only good
to 1/60 second, so you want each loop to run at least a couple of
seconds (i.e., at least a few hundred times larger than 1/60 second).
The number you need to pass to inter_test to accomplish this of course
depends on your system.
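A quick way to estimate your clock's effective resolution, and from it a
sensible minimum loop time, is to look at the smallest nonzero gap
between consecutive readings (a sketch; time.perf_counter stands in for
the coarse os.times of the day):

```python
import time

def timer_resolution(samples=10_000):
    # The smallest nonzero gap between consecutive clock readings is a
    # rough estimate of the clock's effective resolution.
    gaps = []
    last = time.perf_counter()
    for _ in range(samples):
        now = time.perf_counter()
        if now != last:
            gaps.append(now - last)
            last = now
    return min(gaps)

res = timer_resolution()
# Per the rule of thumb above, each timed loop should run at least a few
# hundred times longer than the resolution.
print("resolution ~%g s; aim for loops of >= %g s" % (res, 300 * res))
```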
OTOH, if you want to "prove" that local assignments are horribly
expensive <wink>, throw the calls out of the timing loops entirely,
replacing the loops with
    while n > 0:
        temp = n - 1
        n = temp

and

    while n > 0:
        n = n - 1
> > (in your preferred "foo.keys().sort()", what do you think the
> > "foo.keys()" part returns if _not_ a throw-away temp -- albeit an
> > anonymous one? ditto the ".sort()" part? ).
> Alas, you don't recognize the distinction between a statically
> compiled program ...and an interpreted language such as Python
> [goes on to point out that it's theoretically possible to optimize
> away "his" temps but maybe not "tim's" temps]
If we're talking about what's "theoretically possible" (do we _have_ to?
-- it's so irrelevant <wink>), that Python is an interpreted language
today is also a mere implementation detail; and I can write an optimizer
that would deep-six "my" temps in my sleep. If we're talking about what
Python is and does today, neither flavor of temp is actually optimized
away. Somewhere between the actual and the possible, you probably have
a good point here <smile>.
> [the std wisdom on how to best use interpreted languages, deleted
> because tim heartily agrees]
> The bad news is that the cleanest workaround for the sort of problem
> that led to this topic causes the user (such as you) to fall into the
> trap of using throw-away temporaries. IF the deficiency we are talking
> about is corrected, then there would be no need to write slow code.
Sorry, but this chain lost me.
> I think this last paragraph shows we have much more agreement than
> disagreement, it is just perchance that you enjoy the position of a
> devil's advocate. ;-o
Sometimes, yes, but mostly I'm very happy with Python already, and don't
want to lose it to creeping featurism. I don't even care (really!) if
it never gets a nanosecond faster ('tho I won't object if it does ...).
Long & bitter experience has also taught me that most discussions about
language features are least harmful in the context of designing a new
language, rather than fiddling an existing one. There are exceedingly
few pure wins, and even net wins _in the end_ aren't common.
That doesn't mean I hate all new suggestions, but does mean I approach
them with an obnoxious "oh yeah? prove it!" attitude at first. Lucky for
everyone I'm not in charge <wink> ...
> ... Alas, built in types are not full blown classes, and you *cannot*
> derive from them. It takes a lot more work to get the object like
> metaphor to fly.
It certainly does! Still, that builtin types aren't allowed as base
classes was advertised as an intentional decision on Guido's part, and
I'm not sure that's a bad thing. It's never gotten in my way, but as
I've never made serious use of a language where that _is_ allowed, it's
not something I'd be likely to miss.
> As I said, the agreement may be greater than you'd let on ;-)
It's even worse than that, Jim: I've been known to agree 100% that a
feature is absolutely wonderful, and _still_ argue that it doesn't belong
in Python. E.g., multi-line lambdas come to mind ...
agreeably-disagreeable-ly y'rs - tim
Tim Peters email@example.com
not speaking for Kendall Square Research Corp