[Python-3000] Type parameterization (was: Re: Type annotations: annotating generators)

Talin talin at acm.org
Sat May 20 23:38:06 CEST 2006


Guido van Rossum wrote:
> On 5/19/06, Talin <talin at acm.org> wrote:
> 
>> Side note: I'm actually in favor of the idea of Python adding
>> syntactical support for operators that have no "built-in" definition.
>> The use case would be for classes that define new operators that don't
>> correspond to the semantics of any existing operator. But that's another
>> thread, maybe one not worth starting :)
> 
> 
> That could be done for a fixed number of new operators with fixed
> priorities. (But you'd have to pick your set of operators somehow.)
> 
> It could not be done if you wanted to let users define their own
> combination of squiggles on the fly (the parser and lexer are too
> stupid).

That's a given. What I was thinking about was a fixed number of 
"placeholder" operators that would be available for application use.

As far as how to pick the set of operators, my notion was to come up 
with a list of general mathematical and logical concepts and see if any 
would be useful as operators. In other words, the placeholder operators 
would have general meanings assigned to them, but not specific 
implementations.

My train of thought was something like this: if you are planning on 
using the token '->' to indicate the return type of a function, then 
you're going to have to add that to the lexer; and once you've done 
that, why not go the extra step and make it an operator?
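
For reference, the annotation spelling under discussion looks something 
like this:

    def f(a) -> float:
        ...

so the '->' token has to be lexed in any case.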

In this case, the meaning of the '->' operator is "derives", as in 
"derives/produces/yields/maps-to/etc." So the expression "f(a) -> b" is 
a statement that the operation 'f(a)' yields the result 'b'.

(BTW, correct me if I am using the word 'derives' incorrectly. I'm 
specifically thinking of the meaning from the 'dragon book', but I may 
have it reversed.)

The operator's magic method would be something like __derives__, and its 
actual implementation would depend on the context. So for a type system, 
you could say things like:

    Function( int ) -> float

Meaning 'a function that takes an int and produces a float'.
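
To make the dispatch concrete, here is a rough sketch of how such a 
hook could behave. Since '->' isn't an overloadable operator today, 
'>>' (__rshift__) stands in for it, and the Function / FunctionType 
names are just made up for illustration:

    class FunctionType:
        """Describes 'a function taking arg_types, producing result_type'."""
        def __init__(self, arg_types, result_type):
            self.arg_types = arg_types
            self.result_type = result_type

        def __repr__(self):
            args = ', '.join(t.__name__ for t in self.arg_types)
            return 'Function( %s ) -> %s' % (args, self.result_type.__name__)

    class Function:
        """Collects argument types; waits for '->' to supply a result type."""
        def __init__(self, *arg_types):
            self.arg_types = arg_types

        # __rshift__ plays the role __derives__ would play for a real '->'
        def __rshift__(self, result_type):
            return FunctionType(self.arg_types, result_type)

    print(Function(int) >> float)    # prints: Function( int ) -> float

The point is just that the new operator needs only one magic method; 
everything else is ordinary overloading.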

Meanwhile, in an algebraic solver, you could say:

    Var('x') + 0 -> Var('x')

The 'Var' type overloads the '+' operator to return an AST-like 
structure representing the addition of a variable and a constant zero; 
that structure in turn overloads the '->' operator to generate a rule 
saying that adding 0 to a variable produces the same variable.

(Now all I need is a friendly way to spell "Var('x')" and I'm all set.)
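
Here is the same kind of sketch for the solver case, again with '>>' 
standing in for '->' and with Add and Rule as invented helper classes:

    class Rule:
        """A rewrite rule: 'lhs produces rhs'."""
        def __init__(self, lhs, rhs):
            self.lhs, self.rhs = lhs, rhs
        def __repr__(self):
            return '%r -> %r' % (self.lhs, self.rhs)

    class Add:
        """AST node representing 'left + right'."""
        def __init__(self, left, right):
            self.left, self.right = left, right
        def __repr__(self):
            return '%r + %r' % (self.left, self.right)
        # '->' applied to an expression yields a rewrite rule
        def __rshift__(self, result):
            return Rule(self, result)

    class Var:
        def __init__(self, name):
            self.name = name
        def __repr__(self):
            return "Var(%r)" % self.name
        def __add__(self, other):
            return Add(self, other)

    print(Var('x') + 0 >> Var('x'))    # prints: Var('x') + 0 -> Var('x')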

One strange consequence of defining the expression 'a -> b' as 'a 
produces b' is how this applies to type declarations of dictionaries and 
lists. For dicts, there are two possible options:

     dict[ str -> int ]
     dict[ str ] -> int

The first is saying that 'within the context of a dict, a str produces 
an int'. The second is saying 'the operation dict[ str ] produces an 
int'. (The latter is more Haskell-ish.)
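
As a sketch (once more with '>>' for '->', and an invented Dict / 
DictType pair), the second spelling is the easier one to model with 
ordinary overloading, because the 'str -> int' inside the brackets of 
the first spelling would need str itself to know about the operator:

    class DictType:
        """Describes 'a dict in which a key_type produces a value_type'."""
        def __init__(self, key_type, value_type):
            self.key_type, self.value_type = key_type, value_type
        def __repr__(self):
            return 'dict[ %s ] -> %s' % (self.key_type.__name__,
                                         self.value_type.__name__)

    class PartialDictType:
        """The value of Dict[str] before the trailing '-> int' arrives."""
        def __init__(self, key_type):
            self.key_type = key_type
        def __rshift__(self, value_type):   # stand-in for __derives__
            return DictType(self.key_type, value_type)

    class DictFactory:
        def __getitem__(self, key_type):
            return PartialDictType(key_type)

    Dict = DictFactory()
    print(Dict[str] >> int)    # prints: dict[ str ] -> int

(Modeling the first spelling would require 'str -> int' to evaluate to 
something on its own, which circles back to the question of what 
default behavior, if any, the placeholder operators get for built-in 
types.)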

Both of those spellings are fairly reasonable, but what happens if we 
apply the same reasoning to lists:

     list[ int -> int ]
     list[ int ] -> int

This is unnecessarily verbose, because the first argument is always 
going to be an int (or some scalar number type). I suppose one way 
around this is to simply declare that, for type-description purposes, 
lists are not treated as mappings.

Another question that springs to mind is: what about the converse 
operator, '<-', and what does it represent? (Besides meaning "less than 
the negative of", which is a troubling ambiguity.) One possible meaning 
that could be assigned is "substitutes", which could be useful in 
grammar files, where the non-terminal is usually on the left. (Or you 
could simply make the BNF operator ::= a Python operator, but that's 
getting ridiculous, assuming that I haven't made myself ridiculous 
enough already.)

-- Talin

