unicode as valid naming symbols

Rustom Mody rustompmody at gmail.com
Tue Apr 1 09:58:47 EDT 2014


On Tuesday, April 1, 2014 7:14:15 PM UTC+5:30, Chris Angelico wrote:
> On Wed, Apr 2, 2014 at 12:33 AM, Ned Batchelder wrote:
> > Maybe I'm misunderstanding the discussion... It seems like we're talking
> > about a hypothetical definition of identifiers based on Unicode character
> > categories, but there's no need: Python 3 has defined precisely that.  From
> > the docs
> > (https://docs.python.org/3/reference/lexical_analysis.html#identifiers):

> "Python 3.0 introduces **additional characters** from outside the
> ASCII range" - emphasis mine.

> Python currently has - at least, per that documentation - a hybrid
> system with ASCII characters defined in the classic way, and non-ASCII
> characters defined by their Unicode character classes. I'm talking
> about a system that's _purely_ defined by Unicode character classes.
> It may turn out that the class list exactly encompasses the ASCII
> characters listed, though, in which case you'd be right: it's not
> hypothetical.

> In any case, Pc is included, which I should have checked beforehand.
> So that part is, as you say, not hypothetical. Go for it! Use 'em.
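
For the record, that part is easy enough to poke at with unicodedata and
str.isidentifier(). A rough sketch (untested, written from memory; the extra
Pc characters are just ones I picked off a chart):

    import unicodedata

    # '_' is the familiar Pc (connector punctuation) character; U+2040 CHARACTER TIE
    # and U+FF3F FULLWIDTH LOW LINE are other members of the same category, picked
    # here only as examples.
    for ch in "_", "\u2040", "\uff3f":
        name = unicodedata.name(ch)
        cat = unicodedata.category(ch)
        # Each should be usable in the continue position of an identifier.
        print(hex(ord(ch)), name, cat, ("a" + ch + "b").isidentifier())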

Dunno if you really mean that or are just saying so...

Steven gave the example the other day of the confusable identifiers
A (Latin capital A) and А (Cyrillic capital A): they render identically but
are distinct characters. There must easily be hundreds (thousands?) of other
such confusable pairs.
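
And these are not merely confusable to the human reader; the interpreter
happily accepts both as distinct, valid names. A quick sketch of the kind of
check I mean (untested):

    import unicodedata

    latin_a, cyrillic_a = "A", "\u0410"
    for ch in latin_a, cyrillic_a:
        print(hex(ord(ch)), unicodedata.name(ch), unicodedata.category(ch), ch.isidentifier())

    # Both are category Lu and both are valid identifiers, yet they compare unequal,
    # so a program can silently contain two look-alike variables.
    print(latin_a == cyrillic_a)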

So you think that's fine, and APL(-ese) or Scheme(-ish) identifiers are not...?

I'm confused by your stance...

Personally, I don't believe that Unicode was designed with
programming languages in mind.

Assuming that Unicode categories will map naturally and cleanly onto
programming-language lexical/syntactic categories is rather naive.
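
One small illustration of the mismatch, as I understand CPython 3's rules
(a rough sketch, untested; the characters are just ones I find telling):

    import unicodedata

    # U+2118 SCRIPT CAPITAL P and U+221A SQUARE ROOT are both general category Sm
    # (math symbol), yet as far as I can tell only the former is a valid identifier,
    # because it is special-cased via the Other_ID_Start property.
    for ch in "\u2118", "\u221a":
        print(hex(ord(ch)), unicodedata.name(ch), unicodedata.category(ch), ch.isidentifier())

    # MICRO SIGN and GREEK SMALL LETTER MU are distinct code points, but the NFKC
    # normalization Python applies to identifiers folds them into the same name.
    print(unicodedata.normalize("NFKC", "\u00b5") == "\u03bc")

Workable rules, perhaps, but hardly the natural fit that "just use the
Unicode categories" suggests.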


