The Cost of Dynamism (was Re: Python 2.x or 3.x, which is faster?)
BartC
bc at freeuk.com
Thu Mar 24 09:01:54 EDT 2016
On 24/03/2016 03:24, Chris Angelico wrote:
> On Thu, Mar 24, 2016 at 12:41 AM, BartC <bc at freeuk.com> wrote:
>> To extend this analogy better, executing byte-code to directly perform a
>> task itself might be equivalent to travelling on foot, while everyone is
>> suggesting taking the bus, tube or taxi.
> It's easy to see that carrying five boxes of books will slow down
> your walking *dramatically*. In fact, it's probably quicker to take
> just one of them, and then come back for another one, and so on. When
> you travel by car, it's much harder to measure the cost of the five
> boxes, but it made so much difference in walking time that you should
> probably take one box at a time, right?
>
> This is how you're currently evaluating Python. Instead of starting
> with the most simple and obvious code and refining from there, you're
> starting from a whole lot of preconceived ideas about what's "fast" or
> "slow", and assuming/expecting that they'll all still be valid. Many
> of them won't be, yet you still persist in doing things based on what
> you expect to be the case (because of what's fast/slow in C or some
> other language). We've explained this a number of times, and one by
> one, we're coming to the conclusion that you not only don't understand
> Python, you don't *want* to understand Python; and until you actually
> understand how the language works, timing stats are dubious.
>
> Do you understand why people aren't taking your results very seriously?
I've been using interpreted languages since the 80s, when they were much
cruder and slower (and when hardware was much slower too).
Yet I could still use them effectively. (I reckoned that when used
sensibly and in the right balance, a solution using a dynamic language
would only be between one and two times slower than using compiled,
native code. But it was many times more productive.)
So I understand perfectly that such languages have a huge range of
applications no matter what the speed of the underlying byte-code.
However... once you start looking at tasks where the speed /might/
matter, then you have to start measuring properly.
And forgetting Python for a minute and concentrating only on its
byte-code as a language in its own right, how would you go about the job
of streamlining it?
You might start by profiling it to see which codes are the most
expensive, which are called most often, and all the usual stuff.
But there are all sorts of micro-micro-benchmarks that can concentrate
on a single byte-code. For example, how long does it take to call an
empty function with no parameters? Just putting such a call into a
simple loop can be effective:
Python 3 (on Windows) might take 200ns. Clisp is 1300ns (interpreted,
presumably). Ruby 170ns. Lua 80ns. Mine 10-20ns. Unoptimised C is 4ns,
but this is not executing code indirectly as most of the rest have to.
[Timings include loop overheads that need to be factored out.]
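One way to factor out that loop overhead in Python itself is to time
the loop with and without the call, and subtract; a rough sketch using
the standard timeit module (the absolute numbers will vary with
machine and interpreter version):

```python
import timeit

def empty():
    pass

N = 1_000_000

# Time N calls to the empty function (includes loop overhead).
call_time = timeit.timeit("empty()", globals={"empty": empty}, number=N)

# Time the same loop with an empty body, to estimate the loop
# overhead on its own.
loop_time = timeit.timeit("pass", number=N)

# The difference, per iteration, approximates the cost of one call.
per_call_ns = (call_time - loop_time) / N * 1e9
print(f"~{per_call_ns:.0f} ns per empty function call")
```

This is crude (timer noise can swamp the subtraction on a loaded
machine), but it is the same idea as the figures above: isolate one
byte-code operation and measure it in bulk.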
So there might be room for improvement, but those faster languages are
also simpler. Is Python's richness or dynamism the main factor here? If
so there is probably little to be done; if not... This is where the fun
starts.
But I understand that most people aren't interested in this kind of sport.
--
Bartc