> [marc w]
> I think that need not be as horrible as in Cecil when one doesn't use
> multi-methods for everything. ... not sure whether it's so difficult to
> come up with a reasonable scheme for method placement etc. in the
> common cases.
It's just another unknown. For the proverbial "general numeric classes"
that few people write <wink>, it's clear that, e.g., if the rational
class came first, then when adding a complex class the mixed complex/
rational cases "should go" with the latter, then when adding a matrix
class the mixed matrix/complex and matrix/rational cases should go
with the matrix code, etc. In the absence of practical experience,
though, I'm inclined to take Chambers at his word that idiomatic MM usage
isn't so easy to lay out on a flat piece of paper.
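To make the "placement" question concrete, here's a minimal sketch of one way mixed-type methods could be laid out: a dispatch table keyed on both argument types, where each later module registers the mixed cases involving the earlier classes. All the names (defadd, add, the stub classes) are hypothetical illustration, not a proposal.

```python
# Stub numeric classes, in the order they were "written":
class Rational: pass
class Complex: pass
class Matrix: pass

_add_table = {}

def defadd(t1, t2, func):
    # Register an implementation of add() for the type pair (t1, t2).
    _add_table[(t1, t2)] = func

def add(x, y):
    # Multimethod-style dispatch on the exact types of both arguments.
    return _add_table[(type(x), type(y))](x, y)

# The rational module, written first, only knows about rationals:
defadd(Rational, Rational, lambda x, y: "rat+rat")

# The complex module, added later, supplies the mixed
# complex/rational cases:
defadd(Complex, Complex, lambda x, y: "cpx+cpx")
defadd(Complex, Rational, lambda x, y: "cpx+rat")
defadd(Rational, Complex, lambda x, y: "rat+cpx")

# The matrix module, added last, picks up the mixed matrix/complex
# and matrix/rational cases:
defadd(Matrix, Matrix, lambda x, y: "mat+mat")
defadd(Matrix, Complex, lambda x, y: "mat+cpx")
defadd(Complex, Matrix, lambda x, y: "cpx+mat")
defadd(Matrix, Rational, lambda x, y: "mat+rat")
defadd(Rational, Matrix, lambda x, y: "rat+mat")

print(add(Complex(), Rational()))  # cpx+rat
```

Even in this toy, note that the placement rule ("mixed cases go with the later class") is a convention the programmer must know and follow, not something the machinery enforces -- which is exactly the kind of thing that may not survive contact with real programs.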
In Cecil the "argument specializers" are also annotations on methods'
formal argument lists, so that the machinery for setting up MM dispatch
is easy to spell and attached directly to method definitions; Python
would need at least three other extensions to make it that clear
(i.e., a type system suitable for MM dispatch; a way to spell type
literals; and arglist declaration annotations).
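For flavor only, here's a hypothetical sketch of what "argument specializers" might look like if Python grew arglist annotations: a decorator reads type annotations off the formal arguments and registers the function in a multimethod table. Every name here (multimethod, the registry, the stub classes) is invented for illustration; this is not a real proposal or real Cecil syntax.

```python
_registry = {}

def multimethod(func):
    # Read the annotated argument types off the function's formal
    # argument list and register it under (name, type-tuple).
    argnames = func.__code__.co_varnames[:func.__code__.co_argcount]
    types = tuple(func.__annotations__[name] for name in argnames)
    _registry.setdefault(func.__name__, {})[types] = func

    def dispatch(*args):
        # Look up the implementation matching the exact runtime types.
        return _registry[func.__name__][tuple(type(a) for a in args)](*args)
    return dispatch

class Rational: pass
class Complex: pass

@multimethod
def add(x: Rational, y: Rational):
    return "rat+rat"

@multimethod
def add(x: Complex, y: Rational):
    return "cpx+rat"

print(add(Complex(), Rational()))  # cpx+rat
```

The point is that Cecil gets all three pieces (a dispatch-friendly type system, spellable type literals, and argument annotations) directly from the language, while the sketch above fakes them with classes, names, and a decorator -- and fakes exact-type matching only, with none of the inheritance-aware ambiguity resolution real MM dispatch needs.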
> > [in cecil] decent speed requires sophisticated profile-based feedback
> > to the compiler, specifically aimed at speeding MM-style dispatch
> > searches.
> This is again with a very different expectation about speed than any-
> one using Python today might even dream of. ...
I believe you're thinking of Self. The profiles.ps.Z paper points out
that Cecil's implementation is very simple compared to Self's. While
they didn't say what kind of box they were running on, the times reported
for unoptimized Cecil programs appeared to be comparable to Python times
(around an hour and a half for Cecil to compile itself; about 5 seconds
to run a Queens program; etc).
Cecil's static optimizer (advertised as doing things like inlining, &
resolving as many MM dispatches as it can with compile-time info) gave
speedups of 2-4 times. The separate profile-feedback-based MM dispatch
optimizations gave speedups of _another_ factor of 2-4. So at
least as Chambers uses MM, MM dispatch is clearly a major time sink for
Cecil in the absence of aggressive optimizations.
I don't draw any conclusion from that other than that it's not obvious
how introducing MMs would affect Python. It is an expense that Python
doesn't pay today, & at least in one context it's a major expense.
Guido has so far mostly stayed away from features whose explanation and
implementation aren't clear, and we _already_ can't guess how changing a
line of Python will affect the speed of a program <0.9 grin> -- and the
other half of this thread has been trying to find a way to replace the
one feature he put in whose explanation and implementation weren't clear
<0.6 grin>. Suspect the only way to find out what MM's would do to the
equation is to prototype them and see. But that's cool!
Another consideration is that even if some way to make simple/common uses
work nicely and fast can be found (my intuition matches yours on that,
i.e. yes), will people let it rest there? Guido already regrets
implementing the simple lambda. Could be the only way to _really_ sell
MM's is to prove that they're indispensable for interfacing to Tk <grin>.
fan-of-model-33-teletypes-ly y'rs - tim
Tim Peters email@example.com
not speaking for Kendall Square Research Corp