Why prefer != over <> for Python 3.0?

Jorge Vargas jorge.vargas at gmail.com
Tue Apr 1 03:15:57 EDT 2008


On Tue, Apr 1, 2008 at 6:03 AM, Gabriel Genellina
<gagsl-py2 at yahoo.com.ar> wrote:
> On Mon, 31 Mar 2008 16:17:39 -0300, Terry Reedy <tjreedy at udel.edu>
> wrote:
>
> > "Bjoern Schliessmann" <usenet-mail-0306.20.chr0n0ss at spamgourmet.com>
> > wrote in message news:65c0bfF2ffipiU1 at mid.individual.net...
> > | > However, I'm quite sure that when Unicode has arrived almost
> > | > everywhere, some languages will start considering such characters
> > | > in their core syntax.
> > |
> > | This should be the time when there are widespread quasi-standardised
> > | input methods for those characters.
> >
> > C has trigraphs for keyboards missing some ASCII chars.  != and <=
> > could easily be treated as digraphs for the corresponding chars.  In a
> > sense they are already; it is just that the real things are not
> > allowed ;=).
>
> I think it should be easy to add support for ≠, ≤, ≥ and even λ; only
> the tokenizer has to be changed.

Show me a keyboard that has those symbols and I'm all for it.
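
Still, the digraph idea quoted above is easy to prototype at the source
level. Here is a toy sketch (desugar and TRANSLATION are invented names,
and this is nothing like how CPython's real tokenizer works):

    # Map each Unicode operator to its ASCII "digraph" spelling before
    # handing the source to the compiler.
    TRANSLATION = {
        ord("≠"): "!=",
        ord("≤"): "<=",
        ord("≥"): ">=",
        ord("λ"): "lambda ",
    }

    def desugar(source):
        # NOTE: a real preprocessor would have to skip string literals
        # and comments; this toy version blindly rewrites those too.
        return source.translate(TRANSLATION)

    src = "is_positive = λ x: x ≠ 0 and x ≥ 1"
    print(desugar(src))    # is_positive = lambda  x: x != 0 and x >= 1
    exec(desugar(src))
    print(is_positive(2))  # True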

As for the original question: the point of going Unicode is not to make
the code itself Unicode, but to make the code's output Unicode. Think of
print calls, templates, and comments, and of all the world's languages.
Sadly, many English-speaking people think Unicode is irrelevant because
ASCII has everything they need, but that narrow view is exactly the
problem.
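
To make that concrete, a tiny Python 3 illustration (the greetings dict
is made up): every str is Unicode, so output in any script is just a
print call away.

    # In Python 3, every str is Unicode, so non-ASCII output needs no
    # special handling: no codecs, no u"" prefixes, just print.
    greetings = {"Spanish": "¡Hola!", "Japanese": "こんにちは", "Russian": "Привет"}
    for language, text in greetings.items():
        print(language, "->", text)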

