New Python User Question about Python.

Paul Rubin phr-n2001 at nightsong.com
Thu Aug 23 03:57:45 EDT 2001


Peter Hansen <peter at engcorp.com> writes:
> This gets suggested frequently, and while there are more
> thorough answers than this, in a nutshell the answer seems
> to be that the dynamic nature of Python makes full compilation
> an intractable problem.

I don't see why this is so.  Python semantics are very similar to
Lisp, and Lisp is often compiled.  Numerical Lisp code can be as fast
as numerical Fortran or C code.  Python could be the same way, as far
as I can tell.  Of course the Python compiler would often have to make
inferences (or be advised by the programmer) about object types, but
the current bytecode compiler already does some of that (e.g. it
generates LOAD_FAST/STORE_FAST bytecodes for local variables to avoid
dictionary lookups when it can).
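
For instance, the dis module will show LOAD_FAST/STORE_FAST in the
disassembly of any function that uses local variables; a minimal
sketch, just to illustrate the point (not part of the timings below):

   import dis

   def scale(x):
       # x and y are locals, so the compiled code addresses them by
       # index (LOAD_FAST/STORE_FAST) instead of looking the names up
       # in a dictionary at run time.
       y = 4.0 * x
       return y

   dis.dis(scale)   # output includes LOAD_FAST x, STORE_FAST y, LOAD_FAST y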

> An answer from a different point of view is "why bother?"  Python is
> already "fast enough" for many applications -- one might even say
> "for most applications for which it is used", 

Well, yes, applications for which Python is too slow don't get written
in Python.  But if Python were faster, it would be usable for more
applications, always a good thing from a Python advocacy perspective,
I would have thought.

> and there is little interest in improving its speed compared to
> improvements in other areas.

There seems to be considerable interest in Python's speed in this
newsgroup, I've been happy to find :)

> Python programmers tend to adopt a philosophy of making code work
> rather than making code fast.  Given a choice of doing one or the
> other, any reasonable person would pick the former.

That is a platitude.  To paraphrase Jon Bentley: whatever computer
system you're currently using to surf the web has hundreds of known,
but minor, bugs.  If a fairy godmother appeared before each user and
gave them one wish, and they had to choose between eliminating the
bugs or making their surfing experience 100 times faster, do you
really think every reasonable person would decline the speedup?  SPEED
MATTERS.  From a user's point of view it sometimes matters even more
than correctness.

> Since so many software projects are failures even before they manage
> to release their products, if they ever do, Python contributes
greatly to improving that dismal record even with its current
> speed.  So, "why bother"?  There are many things to fix in the
> software world before we optimize...

A couple of days ago someone asked how to multiply a numeric vector
by a scalar in Python, and I timed a couple of the suggested answers
last night.  The timings are in a separate post but basically, for
10,000 loops over a 100 element list, on my P3-750,

   a = map(lambda x: 4.0 * x, b)

took about 1.5 seconds, 

   a = [4.0 * x for x in b]  # list comprehension

took around 3 seconds, 

   a = 4.0 * b     # NumPy scalar * array multiplication

took 0.31 seconds, and just for fun I coded a simple loop in C and it
took 0.6 seconds to run 1 million times.  So C was roughly 250 times
faster than a map call in Python and about 50 times faster than
Numerical Python.
This ratio is large enough to make Python impractical for a number of
things C is practical for.  I'd been hoping to write a telephony
application including signal processing in Python, but it doesn't look
feasible.  It's probably doable with C extensions for the number
crunching, but I don't have a C compiler for the target platform and
was hoping to avoid having to acquire one.
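
(For anyone who wants to rerun the comparison, here is a rough timing
harness in the same spirit; a sketch only, using the modern numpy
package as a stand-in for the Numerical Python of the time, and the
exact numbers will of course depend on the machine and versions.)

   import time
   import numpy   # stand-in for Numerical Python (Numeric)

   b = [float(i) for i in range(100)]   # 100-element list
   b_arr = numpy.array(b)               # same data as an array
   N = 10000                            # loop count, as in the timings above

   def bench(label, fn):
       # run fn() N times and report the total wall-clock time
       t0 = time.time()
       for _ in range(N):
           fn()
       print("%-22s %.3f s" % (label, time.time() - t0))

   bench("map + lambda", lambda: list(map(lambda x: 4.0 * x, b)))
   bench("list comprehension", lambda: [4.0 * x for x in b])
   bench("numpy scalar * array", lambda: 4.0 * b_arr)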

In the long run I hope there will be Python compilers and
high-performance applications written in Python, as there currently
are for Scheme or Lisp.

"The working scientist is trying to solve tomorrow's problem on
yesterday's computer.  The computer scientist, we find, often seems
to have it the other way around."  
  --Teukolsky et al, Numerical Recipes (misquote)


