[Python-Dev] issue2180 and using 'tokenize' with Python 3 'str's

"Martin v. Löwis" martin at v.loewis.de
Wed Sep 29 01:18:01 CEST 2010


On 28.09.2010 05:45, Steve Holden wrote:
> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
>> 2010/9/27 Meador Inge <meadori at gmail.com>:
>>> which, as seen in the trace, is because the 'detect_encoding' function in
>>> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in 'first'
>>> (a 'str' object), the first line of the source being tokenized.  It seems
>>> to me that it should still be possible to tokenize strings, but maybe I am
>>> missing something.  Is the implementation of 'detect_encoding' correct in
>>> how it attempts to determine an encoding, or should I open an issue for this?
>>
>> Tokenize only works on bytes. You can open a feature request if you desire.
>>
> Working only on bytes does seem rather perverse.

Yeah, source code really should stop being stored on disks, or else
disks should stop being byte-oriented.

Let's go the Smalltalk way: they store all source code in the image, so
there's no need to deal with perversities like files anymore.
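
(That said, a str can still be fed through tokenize by encoding it back
to bytes first. A minimal sketch, assuming UTF-8 source:

    import io
    import tokenize

    source = "x = 1\nprint(x)\n"

    # tokenize.tokenize() expects a readline callable that yields bytes,
    # so encode the str and wrap it in a BytesIO.
    readline = io.BytesIO(source.encode("utf-8")).readline
    for tok in tokenize.tokenize(readline):
        print(tok)

detect_encoding() then sees the bytes it expects and emits 'utf-8' as
the initial ENCODING token. The module also still has generate_tokens(),
which accepts a str-reading readline, though it is undocumented in
Python 3.)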

Regards,
Martin

