[Types-sig] Sample declarations

Daniel Wang danwang@CS.Princeton.EDU
13 Mar 2001 16:42:58 -0500


qrczak@knm.org.pl (Marcin 'Qrczak' Kowalczyk) writes:

> 13 Mar 2001 14:40:03 -0500, Daniel Wang <danwang@CS.Princeton.EDU> pisze:
> 
> > given a method or function body
> >  def foo (....):
> >    random code...
> >
> > automatically compute a "type assertion" function that basically
> > simulates the behavior of the random code with respect to type errors
> 
> This will never work well in Python. Python is too dynamic and the
> behavior of libraries wrt. types is too irregular to be inferred
> statically from usage (perhaps except most simple cases, which is
> impractical).

Ahh... but you missed the point: in the worst case, the system does
absolutely nothing. I.e. if it can't figure out how to move a type error
earlier, it just leaves the code alone.
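
To make that concrete, here is a hand-waved sketch (the function names are
made up; this is not a real tool): given

  def foo(x, y):
      do_expensive_setup()          # lots of work that never touches x
      return x.upper() + str(y)     # first real use of x

the pass would, in the best case, emit something like

  def foo(x, y):
      getattr(x, 'upper')           # hoisted check: raise the
                                    # AttributeError up front, before
                                    # the expensive work runs
      do_expensive_setup()
      return x.upper() + str(y)

and in the worst case emit foo completely unchanged.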

> 
> Sorry, a simple statement of
>     spam.spam(spam, spam)
> generates a complex assertion that
>     the object to which spam is bound, whatever it is at this point,
>     has the attribute 'spam', or at least is able to produce it,
>     which is a callable object, whatever type it is, which can accept
>     two arguments, not necessarily only two, whose types are required
>     to be some unknown supertypes of the type of the object to which
>     spam is bound, assuming that their requirements can be described
>     in terms of supertypes at all.
> 
> You can't even combine assertions about the 'spam' attribute derived
> from two calls, because nothing prevents rebinding the attribute at
> any point.


> Type inference works very well in SML, OCaml, Haskell, Clean and
> Miranda. These languages have a simple model of behavior of basic
> objects and simple type rules. Libraries are designed according to
> these rules. Polymorphism is present only in restricted ways, quite
> different from the Python way.

I think the real problem is the possibility that identifiers can get
rebound at almost any point in time. Other than that, Python's type rules
aren't that much more complex than those of the languages above. Also, I'm
not suggesting we do type inference, just "type check hoisting".
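
To make the rebinding problem concrete, here is a toy example (full_moon()
is made up):

  class Spam:
      def spam(self, a, b):
          return a + b

  s = Spam()
  if full_moon():
      s.spam = 7        # the attribute is no longer even callable
  s.spam(1, 2)          # nothing useful can be assumed about this call

In a case like this the hoisting pass just gives up and emits the code
unchanged, which costs nothing.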

> The result is that these languages allow to program with almost no
> explicit type annotations, yet fully protected from type errors and
> not too limited, if you agree to think according to their way. But
> it's far from Python, and not compatible with existing Python's style
> at all.

Again, I think you misunderstood the idea... in the limit it leaves your
Python program alone. If it can move a type error earlier, it does. I don't
promise much in the way of ruling out type errors; I'm just suggesting a way
to make sure type errors are reported earlier, if they occur at all.

> At most there can be (I hope) a soft type system, with optional
> assertion-like type annotations. It would even allow applying
> optimizations by a smart interpreter: fortunately rebinding of local
> variables is restricted to the body of the given function, so the
> interpreter can sometimes be sure that a local variable is indeed
> always an integer.

Soft typing has its limitations with respect to separate compilation. What
I'm suggesting is more modular and will work with any Python program. I'm
not sure whether it will actually be useful in general, though... someone
needs to bite the bullet and do the experiment by hand.

If someone could point me to a reference for the Python bytecode language,
that would be a plus. Doing this at the bytecode level will be much easier
than doing it at the source level.
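
For what it's worth, the standard 'dis' module is already enough to start
poking at the bytecode by hand (foo below is just a throwaway example):

  import dis

  def foo(x):
      return x.upper() + "!"

  dis.dis(foo)    # prints the bytecode; the attribute access shows up as a
                  # LOAD_ATTR of 'upper', which is exactly the kind of
                  # operation a hoisting pass would look for

The library reference section on the dis module is also the closest thing to
a written description of the bytecodes that I know of.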