Isn't in the demo2 distributed with 1.0, but I'll keep my eyes open!
> Speed does matter sometimes
No argument, and I think Steve hit the nail on the head later when he
noted how many of the proposals for new builtin this-or-that aren't
driven by functionality but by "go fast" reasons. I've got nothing
against speeding it up.
> > [tim]
> > ... If the question really is one of _fast_ symbol table
> > functionality (faster than dicts, which aren't too shabby given their
> > flexibility) ...
> Granted; as I suggested, all we really need is a standard 'intern()'.
> Right, it's already here, but it helps a lot if a feature is supported
> 'out-of-the-box', so new users can use it without knowing about classes,
> dictionaries, name spaces, etc.
I doubt that a builtin C-speed 'intern()' will be enough for you. That
is, it would let you say "x is y" with the hoped-for semantics (if x and
y are intern'ed), but by itself does nothing to speed attribute lookup.
Not even clear that, without other changes, indexing a dict _by_
intern'ed objects would be noticeably faster than indexing a dict by a
string today. I.e., I believe you want a significantly faster mapping
method, and I doubt intern alone is near enough to get us there (but is a
step in a promising direction).
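To make the distinction concrete, here's a small sketch of what interning buys and what it doesn't. Modern Python happens to expose this as sys.intern(); the strings and names here are purely illustrative of the point above, not anything from the proposal.

```python
import sys

# Interning guarantees one shared object per distinct string value,
# so equality between interned strings is a single pointer check.
a = sys.intern("some_longish_attribute_name")
b = sys.intern("some_longish_" + "attribute_name")
print(a is b)        # True

# But a plain dict lookup still hashes the key and probes the table;
# interning alone doesn't change the mapping algorithm, which is why
# a faster mapping method is a separate problem.
table = {a: 1}
print(table[b])      # 1
```

So "x is y" gets the hoped-for semantics, but attribute lookup through a dict is still doing dict work.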
> Well, I've at times been a part of that community (I used to dabble
> in Prolog implementations),
No no no no no. _Real_ AI people wouldn't touch Prolog with a New Year's
> and I would suggest that Python can be an excellent symbol-
> processing language. ...
> Python's already got most of what you need. For example,
> we've got Lisp and Prolog lists/cons-cells as tuple-trees:
> (car, cdr) <<- cons(car, cdr), [car|cdr]
But, Mark! An object of email@example.com's C-coded CELL type takes 40 bytes
on the KSR system (exclusive of malloc overhead; this is a 64-bit
machine, BTW), well over twice as large as an implementation of a native
Lisp cons cell _should_ take here. And a 2-tuple takes 48 bytes (again
exclusive of malloc overhead). Python will never compete with
Lisp/Prolog on this _style_ of coding for that reason alone; Python is
much happier with "big" operations on "big" data structures. Lisp and
Prolog are already real good at what they do; Python needs its own niche.
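For anyone who hasn't seen the tuple-tree style in the flesh, here's a minimal sketch of cons cells as 2-tuples, with a peek at per-cell overhead on whatever machine this runs on (the 40/48-byte figures above were for a 64-bit KSR box; your numbers will differ).

```python
import sys

def cons(car_val, cdr_val):
    # A cons cell is just a 2-tuple; nil is None.
    return (car_val, cdr_val)

def car(cell):
    return cell[0]

def cdr(cell):
    return cell[1]

# Build the Lisp list (1 2 3) as nested cons cells.
lst = cons(1, cons(2, cons(3, None)))
print(car(lst))            # 1
print(car(cdr(lst)))       # 2

# Each cell is a full tuple object, not a bare pair of pointers,
# which is where the size penalty relative to a native cons comes in.
print(sys.getsizeof(lst))  # e.g. 56 bytes on 64-bit CPython
```

The style works fine; it's the per-cell object overhead that keeps it from competing with a real Lisp on memory.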
> [other lisp-like things python kinda supports]
> IMHO, Python's a good balance between procedural and more exotic
Same here, but balance comes at the price of not going overboard in any
one direction.
> [various agreements deleted]
> ... Guido's got to walk a fine line between flexibility and efficiency.
> It's probably too late to move towards the latter, and I'm not sure
> doing so would be in the spirit of the language.
Well, I don't see anything in the legendary Python Manifesto that
elevates needless inefficiency to a goal <snort>. If someone had time to
work on it, there's much that could be done to speed the implementation
without loss of functionality, and a world more that could be done if people
were willing to give up some of the truly bizarre dynamic flexibility
(e.g., like having an instance "see" changes to a deep base class's
set of attributes long after the instance was created). I think most of
the possible really effective schemes would have this property (where the
"if"iest part is set off by added  brackets):
> speed up "normal" class use, but slow down programs that create
> attributes dynamically [but probably not very much] (at least they'd
> still work).
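For the record, the "bizarre dynamic flexibility" in question looks like this; the class names are made up for illustration. Attribute lookup walks the class chain at access time, which is exactly what the effective schemes would have to restrict:

```python
class DeepBase:
    pass

class Middle(DeepBase):
    pass

class Leaf(Middle):
    pass

obj = Leaf()

# Add an attribute to the deepest base *after* obj was created...
DeepBase.answer = 42

# ...and the existing instance sees it immediately, because the
# lookup happens at access time, not at instance-creation time.
print(obj.answer)  # 42
```

Caching or flattening that chain per class is where the speedup would come from, and where the dynamic-update guarantee would have to give.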
At this point I suggest we wait for Guido to come back, take 6 months to
catch up on his mail <0.9 grin>, and see whether he has any interest. He
already hinted he's been thinking along relevant lines.
beats-reading-goat-entrails-ly y'rs - tim
Tim Peters firstname.lastname@example.org
not speaking for Kendall Square Research Corp