[Types-sig] New syntax?

Tim Peters tim_one@email.msn.com
Sun, 19 Dec 1999 20:51:11 -0500


[Tim]
> ...
> If I had a lot of binary integer functions to declare, I
> would probably use a typedef, a la
>
>     decl typedef BinaryFunc(_T) = def(_T, _T) -> _T
>     decl typedef BinaryIntFunc = BinaryFunc(Int)
>     ...
>     decl var intHandlerMap: {string: BinaryIntFunc}
>     decl var floatHandlerMap: {string: BinaryFunc(Float)}

[GregS]
> Okay, Tim. I'm going to stop you right here :-)

Good -- the speed was killing me <wink>.

> The problem with using "decl" to do typedefs is that it does
> weird voodoo to associate the typedecl with the name (e.g.
> BinaryFunc).

Perhaps an earlier msg made this clearer:  I've viewed "decl"s as (purely!)
compile-time expressions.  IOW, BinaryFunc is a compile-time name in the
above; there's no implication that a name introduced by a "decl typedef"
will appear in any runtime namespace (this doesn't preclude that in some
modes the implementation may *want* to make a Python object of the same name
available at runtime).

> I believe my unary operator is much clearer about what is happening:
>
>   BinaryIntFunc = typedef BinaryFunc(Int)

This looks like a runtime stmt to me; if so, it's of no use to static
(compile-time) type declaration.  If it's not a runtime stmt, better to
stick a "decl" (or something) in front of it to make that crucial
distinction obvious.

> In this case, it is (IMO) very clear that you are storing a typedecl
> object into BinaryIntFunc, for later use. For example, we might see the
> following code:
>
>   import types
>   Int = types.IntType
>   List = types.ListType
>   IntList = typedef [Int]
>   ...

This all looks like runtime code to me -- if so, how is a *compiler*
supposed to get any benefit out of it?  Or if not, how is a compiler
supposed to recognize that it's not runtime code?
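
To make the worry concrete, here's a minimal sketch (reusing the types-module
names from Greg's snippet, plus a made-up command-line test) of the kind of
rebinding an ordinary runtime assignment allows -- a compiler that wants to
trust these names has to prove none of this can happen:

    import sys, types

    Int = types.IntType           # reads like a typedef...
    if len(sys.argv) > 1:         # ...but any runtime condition can rebind it
        Int = types.FloatType
    IntList = [Int]               # just a list object, built whenever this runs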

> Hrm. I don't have a ready answer for your first typedef, though. That
> is a new construct that we haven't seen yet. We've been talking about
> parameterizing *classes*, rather than typedecls.
>
> *ponder*

In my twisted little universe, I'm using a declarative language for
compile-time type expressions, and BinaryFunc(_T) can be thought of as a
compile-time macro -- same as the BinaryIntFunc typedef (except the latter
doesn't take any arguments -- or does take no arguments <wink>).
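
If it helps, here's a rough illustration (hypothetical helper names, nothing
in either proposal) of the kind of mechanical, macro-style expansion I mean:
substitute the argument for _T and you're left with a concrete type
expression, all before any code runs:

    def expand_binary_func(t):
        # BinaryFunc(_T) = def(_T, _T) -> _T, with _T replaced by t
        return "def(%s, %s) -> %s" % (t, t, t)

    BinaryIntFunc = expand_binary_func("Int")   # "def(Int, Int) -> Int"

The analogy only goes as far as the expansion being mechanical; the real
thing would live in the checker, not in any runtime namespace.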

>> ("|") should suffice.

> "or" is more Pythonic.

Agreed.  I'm not sure what's in vogue among category theorists, though
<wink>.

> Bite me. :-)

Yummy!

> You do raise a good point in another post, however:
>
>   def foo(*args: (Int)):
>
> Looks awfully funny. For a Python programmer, that looks like
> grouping rather than a tuple. If it had a comma in there, then
> it would look like a tuple.

Worse, it would look like a tuple of length one, which *args is not.

> But of course: there will never be more than one typedecl inside
> there, so whythehell is there a comma?

I think it should be legal to do, e.g.,

    def foo(*args: (Int, Float, String)) -> whatever:

This says the function takes exactly three arguments, of the given types,
but gets them as the * tuple.  Some people do that (typically if they're
just going to pass the arglist on via apply(somefunc, args)).
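
For the record, the calling pattern in question looks like this (annotations
left off, since they're exactly what's under discussion, and somefunc is just
a stand-in):

    def somefunc(n, x, s):
        return (n, x, s)

    def foo(*args):
        # under the proposal, args would be declared (Int, Float, String):
        # exactly three arguments, seen here as one tuple
        return apply(somefunc, args)

    result = foo(1, 2.5, "three")   # same as somefunc(1, 2.5, "three")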

> *grumble*  .... I don't have a handy resolution for this one.

So let's make one up.  The problem is spelling "tuple of unknown length"
(and Paul's complaint notwithstanding, that *is* Python so we gotta deal
with it).  Python has no notation for this.  OK:

    ...
    Tuple(T1, T2, T3) equivalent_to (T1, T2, T3)
    Tuple(T1, T2) equivalent_to (T1, T2)
    Tuple(T1,) equivalent_to (T1,)
    Tuple(T1) means tuple-of-T1 of unknown length

So it's always *legal* to stick "Tuple" in front of a tuple specifier, and
it's *required* in the last case.

Actually, tuples show up in type specifiers rarely enough-- and look so much
like grouping now --that I'd be happy requiring "Tuple" all the time.  Again
one of those things that could be relaxed later if it proved too irksome.
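
To spell out what the distinction amounts to at runtime, here's a crude
sketch (hypothetical helpers, nothing the type system would actually
generate) of the two meanings the "Tuple" spelling keeps apart:

    def matches_fixed(value, types_):
        # Tuple(T1, T2, T3) etc.: a tuple of exactly len(types_) items,
        # each item of its own declared type
        if type(value) is not type(()) or len(value) != len(types_):
            return 0
        for i in range(len(types_)):
            if type(value[i]) is not types_[i]:
                return 0
        return 1

    def matches_homogeneous(value, t):
        # Tuple(T1): a tuple of unknown length whose items are all of type t
        if type(value) is not type(()):
            return 0
        for item in value:
            if type(item) is not t:
                return 0
        return 1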

if-only-you-could-relax-me-too<wink>-ly y'rs  - tim