python improvements (Was: Re: New Language)

Neel Krishnaswami neelk at brick.cswv.com
Tue May 16 16:35:06 EDT 2000


Martijn Faassen <m.faassen at vet.uu.nl> wrote:
> Neel Krishnaswami <neelk at brick.cswv.com> wrote:
> > Martijn Faassen <m.faassen at vet.uu.nl> wrote:
> >> 
> >> We have OPT, ERR and DOC reasons for doing type checking:
> >> 
> >> OPT -- the interpreter/compiler uses the type declarations to make our
> >>        code run faster. The more static the type checking, the better.
> 
> > I think this is a bit of a myth. 
> 
> Perhaps so, but..
> 
> > If you can declare that a variable is a machine integer or float, and
> > arrays of the same, you have enough information to get about 80% of
> > the speed benefit that static typing can win you.
> 
> What about classes and methods though? If you want to optimize some of
> the dispatching it may be pretty helpful to have static typing.

Static typing doesn't help with dispatch elimination except in
combination with something like Java's 'final' classes or Dylan's
'sealed' classes, or if you do whole-program analysis (like Cecil's
Vortex compiler does).

This is because in order to eliminate a dispatch, you need to know the
direct class of an object, rather than just "this is an instance of
some subclass of Foo". For example, you couldn't do dispatch
elimination in this function:

    def square(n: Number):
        return n * n

because the hypothetical Number class isn't a leaf class of the class
hierarchy. (Well, you could if you did interprocedural dataflow
analysis and could prove that a particular call to square() would
always receive a machine integer or float, but that's a *lot* of work.)
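To make the dispatch cost concrete in Python terms: n * n has to go
through __mul__ on whatever class n happens to be, and only knowledge
of the exact (leaf) class lets a compiler replace that lookup with a
direct call. Here's a rough sketch in plain Python of what that
transformation amounts to -- the MachineInt class and the
hand-"compiled" square_direct are purely illustrative, not anything an
implementation actually emits:

    class MachineInt:
        """A stand-in leaf class; everything here is illustrative."""
        def __init__(self, value):
            self.value = value
        def __mul__(self, other):
            return MachineInt(self.value * other.value)

    def square_dynamic(n):
        # Without type information the interpreter must look up
        # __mul__ on n's class at every call -- this is the dispatch
        # we'd like to eliminate.
        return n * n

    # If analysis proved that n is always exactly a MachineInt, a
    # compiler could bind the method once and call it directly:
    _machineint_mul = MachineInt.__mul__

    def square_direct(n):
        return _machineint_mul(n, n)

    print(square_dynamic(MachineInt(7)).value)   # 49
    print(square_direct(MachineInt(7)).value)    # 49, minus the lookup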

> Similarly, in a dynamic system you'd *lose* your nice declarations
> as soon as you pass them into some function that doesn't have them
> (barring type inferencing, which is about as non-easy as static typing,
> if not more :).

Lisp compilers are pretty smart about this: when dynamically typed
code calls a function with type declarations, the compiler inserts
run-time checks on the arguments and signals an error if they don't
match. When it *can* work out all the types statically, it omits the
checks.
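You can approximate that boundary check by hand in Python. The sketch
below wraps a "typed" function so that calls coming from untyped code
get their arguments checked at run time -- roughly what a Lisp
compiler inserts when it can't prove the types. The declare_types
helper is made up for this example:

    def declare_types(*types):
        # Wrap a function so its positional arguments are checked
        # against the declared types at call time.
        def wrap(fn):
            def checked(*args):
                for arg, t in zip(args, types):
                    if not isinstance(arg, t):
                        raise TypeError("%s: expected %s, got %s"
                                        % (fn.__name__, t.__name__,
                                           type(arg).__name__))
                return fn(*args)
            return checked
        return wrap

    def add(x, y):
        return x + y
    add = declare_types(int, int)(add)

    add(1, 2)        # fine
    # add(1, "two")  # raises TypeError at the call boundary

A compiler that could see all the call sites would drop the wrapper
entirely, which is the "when it can compute all the types" case above.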

I don't know if anyone has written an OO language that supports type
inferencing -- I was under the impression that adding subtyping rules
to Hindley-Milner type systems made them undecidable.

> > Before static typing for speed is tried (with its complicating effects
> > on implementation), I think it's better to go for the factor of 2
> > improvement that can be won via easy tricks like method caches,
> > variable lookup optimization, and reducing function call
> > overhead. (Look at Squeak Smalltalk for an example.)
> 
> Definitely agreed. (method caching may not always be that easy, though,
> given __getattr__ and friends..)

True. I wouldn't mind if there were a smaller hammer than __getattr__,
though -- usually I want to catch only a few attributes dynamically, 
rather than all of them. 
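One way to keep the hammer small, for what it's worth, is to have
__getattr__ handle only a short, explicit list of names and raise
AttributeError for everything else; since __getattr__ only fires when
normal lookup fails, ordinary attributes stay on the fast path. The
Circle class below is just an illustration:

    class Circle:
        def __init__(self, radius):
            self.radius = radius

        def __getattr__(self, name):
            # Only these two names are computed dynamically; anything
            # else fails the normal way, so ordinary attribute lookups
            # (and any method cache) are unaffected.
            if name == 'diameter':
                return 2 * self.radius
            if name == 'circumference':
                return 2 * 3.14159265 * self.radius
            raise AttributeError(name)

    c = Circle(3)
    print(c.diameter)   # 6, computed on demand
    print(c.radius)     # 3, never goes through __getattr__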

> >> ERR -- we get more robust code as the types are checked. It's nice when
> >>        this happens during compile-time, though run-time can also be
> >>        helpful.
> 
> > With an interactive read-eval-print loop, run-time is just as good as
> > compile-time, IMO, because you can test your system incrementally, as
> > you build it.
> 
> Not all people agree on this. I think some interface conformance
> checking at compile-time would for instance be nice. But I
> definitely agree that the ERR component of static type checking is
> overrated.

That's why I put in the IMO. :) Seriously, types are most helpful to
me as documentation of intent, and as it happens I try to put the
types of arguments into the docstrings of code I write.

I have experimented with automatically checking the docstrings (see
http://www.sff.net/people/neelk/open-source/TypeChecker.py), but it
was never useful enough for me to pursue very far.
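For the curious, here's a minimal sketch of the general idea -- not
the actual TypeChecker.py above. It assumes a made-up docstring
convention of lines like "type width: int" and checks arguments
against the named builtin types at call time:

    import builtins

    def check_doc_types(fn):
        # Parse 'type <name>: <builtin type>' lines out of the
        # docstring and wrap fn so arguments are checked at call time.
        declared = {}
        for line in (fn.__doc__ or "").splitlines():
            line = line.strip()
            if line.startswith("type "):
                name, typename = line[5:].split(":")
                declared[name.strip()] = getattr(builtins,
                                                 typename.strip())
        argnames = fn.__code__.co_varnames[:fn.__code__.co_argcount]
        def checked(*args):
            for name, arg in zip(argnames, args):
                if name in declared and not isinstance(arg,
                                                       declared[name]):
                    raise TypeError("%s should be %s, got %s"
                                    % (name, declared[name].__name__,
                                       type(arg).__name__))
            return fn(*args)
        return checked

    def area(width, height):
        """Return the area of a rectangle.

        type width: int
        type height: int
        """
        return width * height

    area = check_doc_types(area)
    area(3, 4)         # fine
    # area(3, "4")     # would raise TypeError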


Neel


