[issue11303] b'x'.decode('latin1') is much slower than b'x'.decode('latin-1')
Alexander Belopolsky
report at bugs.python.org
Thu Feb 24 17:30:03 CET 2011
Alexander Belopolsky <belopolsky at users.sourceforge.net> added the comment:
On Thu, Feb 24, 2011 at 11:01 AM, Marc-Andre Lemburg
<report at bugs.python.org> wrote:
..
> On this ticket, we're discussing just one application area: that
> of the builtin short cuts.
>
Fair enough. I was hoping to close this ticket by simply committing
the posted patch, but it looks like people want to do more. I don't
think we'll get measurable performance gains, but we may improve code
understandability.
> To have more encoding name variants benefit from the optimization,
> we might want to enhance that particular normalization function
> to avoid having to compare against "utf8" and "utf-8" in the
> encode/decode functions.
Which function are you talking about?
1. normalize_encoding() in unicodeobject.c
2. normalizestring() in codecs.c
The first is s.lower().replace('-', '_') and the second is
s.lower().replace(' ', '_'). (Note space vs. dash difference.)
Why do we need both? And why should they be different?
----------
_______________________________________
Python tracker <report at bugs.python.org>
<http://bugs.python.org/issue11303>
_______________________________________