> > [Tim says:]
> > Sorry, but nobody's gonna convince me I care how long it takes to
> > catentate string literals. ...
> Same here.
It all depends on what you are writing. I specifically had to
hand-optimize away a bunch of constant string concatenation to get
acceptable performance recently. I was writing a "repr()"-like
function (in Python) that dealt properly with recursively specified
(i.e., self referential) data items. Almost all the recursive
functions do a variety of string concatenations of results of
sub-functions, with constant strings injected hither and yonder. I
*only* optimize things that a profiler tells me are significant. As
I've mentioned on the list, I've written a profiler for Python that is
actually able to reveal such hot spots. (For instance, it found the
"global" keyword optimization that led Tim to his front-end
optimization/patch of the parser). By the way, if you wonder how I
define "acceptable performance," I define it as "being sufficient to
cause my boss to stop complaining that my code was dominating the CPU
time of the system."
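To make the shape of the problem concrete, here is a minimal modern-Python sketch of a "repr()"-like function that tolerates self-referential lists; the names (`safe_repr`, `_seen`) are illustrative and not the original prepr() code:

```python
def safe_repr(obj, _seen=None):
    # Minimal sketch of a repr() that tolerates self-referential
    # containers by tracking the ids of objects already on the stack.
    # (Names here are illustrative, not the original prepr() code.)
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:
        return "<...>"
    if isinstance(obj, list):
        _seen.add(id(obj))
        body = ", ".join(safe_repr(x, _seen) for x in obj)
        _seen.discard(id(obj))
        return "[" + body + "]"
    return repr(obj)

cycle = [1, 2]
cycle.append(cycle)
print(safe_repr(cycle))  # prints [1, 2, <...>]
```

Note how even this toy version is riddled with small concatenations and joins of sub-results, which is exactly where the constant-string overhead piles up.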
I think it is an oversimplification to say that string concatenation
is something that no one cares about (though I'm willing to take Tim's
statement at face value, and believe that he'll never care about it
;-) ). I can tell you that the code that I have now is *less*
readable because I was forced (under 1.0.0) to do the string
concatenation optimizations by hand. I can also tell you that the
performance gains in these underlying routines, which were called
*many* times, are significant.
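The kind of hand optimization I mean looks roughly like the following (a sketch with made-up function names, not my actual code): the readable version rebuilds the string on every "+", while the tuned version accumulates pieces and joins once.

```python
def fmt_items_naive(items):
    # Readable but slow: each "+" allocates a brand-new string,
    # so the loop does O(n^2) total copying.
    out = "["
    for item in items:
        out = out + repr(item) + ", "
    return out + "]"

def fmt_items_tuned(items):
    # Hand-tuned equivalent: collect the pieces in a list and
    # concatenate exactly once at the end.
    parts = ["["]
    for item in items:
        parts.append(repr(item))
        parts.append(", ")
    parts.append("]")
    return "".join(parts)
```

Both produce the same output; the tuned one is simply harder to read, which is the readability cost I am complaining about.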
Eventually, I'll probably have to rewrite my whole prepr() (Persistent
repr()) functionality in C in order to get top-notch performance.
Until then, and while I'm still experimenting/prototyping, it is a
pleasure to get reasonable performance from hand-tuning Python. I'm
just happy that some of these hand-tunings will not be needed in the
future (re: "global" and string-concatenation), and hence my code will
be easier to read and modify.