The Cost of Dynamism (was Re: Python 2.x or 3.x, which is faster?)

Chris Angelico rosuav at gmail.com
Sat Mar 12 07:52:13 EST 2016


On Sat, Mar 12, 2016 at 10:50 PM, BartC <bc at freeuk.com> wrote:
> On 12/03/2016 02:20, Chris Angelico wrote:
>>
>> On Sat, Mar 12, 2016 at 12:16 PM, BartC <bc at freeuk.com> wrote:
>
>
>>> 'Switch' testing benchmark. The little program shown below reads a
>>> text file (I used the entire CPython C sources, 6MB), and counts the
>>> number of characters in each of four categories: upper, lower, digit
>>> and other.
>>>
>>> (Note there are other ways to approach this task, but a proper 'lexer'
>>> usually does more than count. 'Switch' then becomes invaluable.)
>>
>>
>> Are you assuming that the files are entirely ASCII? (They're not.) Or
>> are you simply declaring that all non-ASCII characters count as
>> "other"?
>
>
>> Once again, you cannot ignore Unicode and pretend that everything's
>> ASCII, or eight-bit characters, or something. Asking if a character is
>> upper/lower/digit/other is best done with the unicodedata module.
>
>
> If you're looking at fast processing of language source code (in a thread
> partly about efficiency), then you cannot ignore the fact that the vast
> majority of characters being processed are going to have ASCII codes.
>
> Language syntax could anyway stipulate that certain tokens can only consist
> of characters within the ASCII range.
>
> So I'm not ignoring Unicode, but being realistic.
>
> (My benchmark was anyway just demonstrating a possible use for 'switch' that
> more or less matched your own example!)
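
For concreteness, a Unicode-aware version of that counting task, along
the lines suggested above with the unicodedata module, might look
something like this (a sketch, not BartC's original program; the input
file name is a placeholder, and the dict lookup stands in for the
'switch' dispatch):

    import unicodedata
    from collections import Counter

    # Map Unicode general categories onto the four buckets:
    # Lu = uppercase letter, Ll = lowercase letter, Nd = decimal digit;
    # everything else counts as 'other'.
    BUCKETS = {'Lu': 'upper', 'Ll': 'lower', 'Nd': 'digit'}

    counts = Counter()
    with open('input.c', encoding='utf-8', errors='replace') as f:
        for ch in f.read():
            counts[BUCKETS.get(unicodedata.category(ch), 'other')] += 1

    print(counts)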

Generally, languages these days are built using ASCII tokens, because
they can be dependably typed on all keyboards. But there's no
requirement for that, and I understand there's a Chinese Python that
has all the language keywords translated. And identifiers can - and
most definitely SHOULD - be defined in terms of Unicode characters and
their types. So ultimately, the lexer needs to be Unicode-aware.
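
To illustrate: Python 3 itself defines identifiers in terms of Unicode
character classes (PEP 3131's XID_Start / XID_Continue), and
str.isidentifier() exposes that test directly. A toy demonstration,
not the actual lexer:

    # Identifiers are accepted by Unicode character class, not by
    # ASCII range.
    for name in ('count', 'días', 'πr2', '2π'):
        print(name, name.isidentifier())
    # 'días' and 'πr2' are legal identifiers; '2π' is not, because an
    # identifier may not start with a digit.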

But in terms of efficiency, yes, you can't ignore that most files will
be all-ASCII. And since 3.3, Python has had an optimization for exactly
that case: PEP 393's flexible string representation stores all-ASCII
text using one byte per character. So the performance question isn't
ignored - but it's an
invisible optimization within a clearly-defined semantic, namely that
Python source code is a sequence of Unicode characters.
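
To make that concrete, the effect of the compact representation shows
up directly in string storage sizes (exact byte counts vary by build
and version):

    import sys

    # PEP 393 (Python 3.3+): all-ASCII text is stored one byte per
    # character; one wider character switches the whole string to a
    # wider layout.
    print(sys.getsizeof('a' * 1000))             # ~1000 bytes + header
    print(sys.getsizeof('a' * 999 + '\u20ac'))   # ~2000 bytes + header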

ChrisA
