[Types-sig] type declaration syntax

Tim Peters tim_one@email.msn.com
Sat, 18 Dec 1999 15:56:44 -0500


[John Skaller]
> ...
> But the _return_ type doesn't need to be annotated as much.
> Why? Because the inferencer can usually deduce it:
> it's an output, the argument types are inputs.
>
> If the inferencer _cannot_ deduce the return type,
> it _also_ cannot check that the function is returning
> the correct type.

"The" correct type (as opposed to "a type consistent with the operations")
is impossible for an inferencer to determine, but this is addressed more to
the SIG than to John <wink>:

My bet is that the vast majority of Python people asking for "static typing"
have in mind a conventional explicit system of the Algol/Pascal/C (APC) ilk,
and that decisions based on what *inference* schemes can do are going to
leave them very unhappy.

Inference schemes commit two kinds of gross errors that the APC camp won't
abide:

1) Inferring types that aren't general enough.
2) Inferring types that are too general.

Both mistakes occur because inference can only look at the code that's
written, knowing nothing about the user's *intent*.  In APC, explicit type
declarations often serve precisely that purpose, supplying (& enforcing)
design and semantic constraints that can't be deduced from the code:

1) Not general enough.  This is usually due to an implementation in
progress, where looking at the code that currently exists can't possibly
guess what will get implemented tomorrow; e.g., a function that returns an
int if it can and is spec'ed to return None if it can't, but whose author
hasn't yet gotten around to coding up the latter cases.  The clients of this
routine must nevertheless accept an IntOrNone result, and explicit
declarations can force that on them long before the routine is actually
capable of producing a None.  The alternative is a large class of all too
familiar last-second "integration crises".
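
A sketch of the idea in Haskell terms, with Maybe Int standing in for
IntOrNone (the routine here is invented purely for illustration):

    -- Spec'ed to return Nothing when there's no answer, though that path
    -- isn't coded yet.  Left to inference, the natural first cut
    -- "count key = length key" would get an Int-only result type; the
    -- explicit declaration puts the Maybe in the interface today.
    count :: String -> Maybe Int
    count key = Just (length key)   -- the Nothing-producing cases come later

    -- And so a client can't forget the failure case:
    report :: String -> String
    report key = case count key of
                     Just n  -> show n
                     Nothing -> "no answer yet"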

2) Too general.  This is very common in numeric programming.  An inferencer
sees a routine with nothing but +, *, / and infers "ah, any Number will do".
But it's *unusual* for any such routine to work *correctly* for all Numbers
(algorithms appropriate for complex numbers are often wildly different from
those appropriate for floats, and likewise for ints).  For example, I tell
Haskell

    intgamma 1 = 1
    intgamma n = x * intgamma x
                 where x = n-1

and it deduces the type

    intgamma :: Num a => a -> a

Arghghghgh <wink>.  Yes, every flavor of Num supports all the operations I
used, but no, call it with anything other than an Int and the *algorithm* is
plain wrong.  APC folk (Haskell folk too) routinely use explicit
declarations to enforce such constraints.
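
Here the cure is one added line; with the explicit signature in place,
handing intgamma anything but an Int is rejected at compile time instead of
recursing merrily past the base case:

    intgamma :: Int -> Int          -- pin down the *intended* type
    intgamma 1 = 1
    intgamma n = x * intgamma x
                 where x = n-1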

Explicit typing goes beyond what type inference can do "even in theory";
while types must be *consistent* with the code, only the author can know
*the* correct type (which may be more-- or less! --general than what an
inferencer determines is merely consistent).

in-the-apc-tradition-types-communicate-design-ly y'rs  - tim