Language design

Mark Janssen dreamingforward at gmail.com
Sat Sep 14 22:37:44 EDT 2013


>>>> Really?  Are you saying you (and the community at-large) always derive
>>>> from Object as your base class?
>>>
>>> Not directly, that would be silly.
>>
>> Silly?  "Explicit is better than implicit"... right?
>
> If I'm inheriting from str, I inherit from str explicitly:
>
> class MyStr(str): ...
>
> and then str in turn inherits from object explicitly. I certainly do not
> inherit from object and then re-implement all the string methods from
> scratch:

I know that.  Str already inherits from object (due to the language
definition).  Your inheritance from object is implied by your
inheritance from a child class (str), but note there is an implied
directionality:  you don't say str is the parent of object.  But tell
me this:  is str the superclass of object or is it the other way
around?
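
For the record, here is what the interpreter itself reports (Python 3):

    >>> issubclass(str, object)
    True
    >>> issubclass(object, str)
    False
    >>> str.__mro__
    (<class 'str'>, <class 'object'>)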

> class MyStr(object):
>     def __new__(cls, value): ...
>     def upper(self): ...
>     def lower(self): ...
>     # and so on...
>
> That would be ridiculous, and goes against the very idea of inheritance.
> But nor do I feel the need to explicitly list the entire superclass
> hierarchy:
>
> class MyStr(str, object):
>     ...

Now you've lost your marbles.  You are arguing points that a Python
programmer would not argue.  Since I know you to be a decent Python
programmer, I can only conclude that your sanity is in question.

> which would be silly. Only somebody who doesn't understand how
> inheritance works in Python would do that. There's simply no need for it,
> and in fact it would be actively harmful for larger hierarchies.

Explicitly inheriting from object ("class myBase(object):" rather than
"class myBase():") would not be "actively harmful" in any way.

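In fact, under Python 2 the explicit spelling has a real effect, since
leaving the bases empty gives an old-style class while naming object
gives a new-style class; under Python 3 the two spellings are
equivalent:

    >>> # Python 2 behaviour
    >>> class Implicit(): pass
    ...
    >>> class Explicit(object): pass
    ...
    >>> type(Implicit)
    <type 'classobj'>
    >>> type(Explicit)
    <type 'type'>
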
>>>> But wait is it the "base" (at the bottom of the hierarchy) or is it
>>>> the "parent" at the top?  You see, you, like everyone else has been
>>>> using these terms loosely, confusing yourself.
>>>
>>> Depends on whether I'm standing on my head or not.
>>>
>>> Or more importantly, it depends on whether I visualise my hierarchy
>>> going top->down or bottom->up. Both are relevant, and both end up with
>>> the *exact same hierarchy* with only the direction reversed.
>>
>> Ha,  "only the direction reversed".  That little directionality that
>> you're passing by so blithely is the difference between whether you're
>> talking about galaxies or atoms.
>
> It makes no difference whether I write:
>
>     atoms -> stars -> galaxies
>
> or
>
>     galaxies <- stars <- atoms
>
> nor does it make any difference if I write the chain starting at the top
> and pointing down, or at the bottom and pointing up.

Here again, your sanity is questioned.  You are simply wrong.  Atoms
lie within galaxies, but galaxies do not lie within atoms (poetic
license excluded); i.e. there is a difference, whether you're talking
syntactically (for the parser) or conceptually (as a human being).
Somewhere you have to put yourself in the middle.  And that point
defines how you relate to the machine -- towards abstraction (upwards)
or towards the concrete (down to the machine itself).

>> The simplicity of Python has seduced you into making an "equivocation"
>> of sorts.  It's subtle and no one in the field has noticed it.  It crept
>> in slowly and imperceptively.
>
> Ah, and now we come to the heart of the matter -- people have been
> drawing tree-structures with the root at the top of the page for
> centuries, and Mark Janssen is the first person to have realised that
> they've got it all backwards.

I'll be waiting for your apology once you grasp the simple (however
inconvenient and unbelievable) truth. ;*)

>>>> By inheriting from sets you get a lot of useful functionality for
>>>> free.  That you don't know how you could use that functionality is a
>>>> failure of your imagination, not of the general idea.
>>>
>>> No you don't. You get a bunch of ill-defined methods that don't make
>>> sense on dicts.
>>
>> They are not necessarily ill-defined.  Keep in mind Python already chose
>> (way back in 1.x) to arbitrarily overwrite the values in a key collision.
>> So this problem isn't new.  You've simply adapted to this limitation
>> without knowing what you were missing.
>
> No, Python didn't "arbitrarily" choose this behaviour.

Perhaps you don't recall the discussion.

> It is standard,
> normal behaviour for a key-value mapping, and it is the standard
> behaviour because it is the only behaviour that makes sense for a general
> purpose mapping.

No.  Please don't propagate your limited sense of things as if it were
"the only way to do it".

> Python did not invent dicts (although it may have invented the choice of
> name "dict").
>
> If you think of inheritance in the Liskov Substitution sense, then you
> might *consider* building dicts on top of sets. But it doesn't really
> work, because there is no way to sensibly keep set-behaviour for dicts.

There's no need to preserve LSP -- it's just one way to think about
class relations.  In fact, I'll argue that one should not, because the
field has not perfected the object model adequately, so holding to LSP
now would lead to a suboptimal situation akin to premature
optimization.  The conceptual abstraction is what matters most:  a
subtype (child class) should do everything that its parent does.

(That's far from a complete definition, but I'm just laying the
groundwork to start to understand the breadth of ideas in this
landscape of OOP and how they relate to each other.)

> For example, take intersection of two sets s and t. It is a basic
> principle of set intersection that s&t == t&s.

That's called "commutativity", grasshopper.  That's the word you want to use.

> But now, take two dicts with the same keys but different values, d and e.
> What values should be used when calculating d&e compared to e&d? Since
> they are different values, we can either:
>
> * prefer the values from the left argument over that from the right;
> * prefer the values from the right argument over that from the left;

...the way Python effectively does it:  d1.update(d2) -- the values in
d2 overwrite the values in d1.  This is a semi-arbitrary decision.

But here's how it should be done:  any operation d1 op d2 should first
apply op to the keys of (d1, d2), and then apply it recursively to the
values.  One will still have to define what should happen for things
like __sub__ and such, but at least we have a solid framework for
starting to think about it in a more sophisticated way.  (See the
sketch below.)
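
A rough sketch of that idea for "&", assuming Python 3, keeping only
the common keys and recursing into values that are themselves dicts;
the helper name dict_and is mine, purely illustrative:

    def dict_and(d1, d2):
        """Hypothetical '&' for dicts: intersect the keys, then combine values."""
        result = {}
        for key in d1.keys() & d2.keys():   # set intersection of the key views
            v1, v2 = d1[key], d2[key]
            if isinstance(v1, dict) and isinstance(v2, dict):
                # Both values are dicts: apply the operation recursively.
                result[key] = dict_and(v1, v2)
            else:
                # Otherwise the right-hand value wins, mirroring dict.update().
                result[key] = v2
        return result

    >>> dict_and({'a': 1, 'b': {'x': 1, 'y': 2}}, {'b': {'y': 3, 'z': 4}, 'c': 5})
    {'b': {'y': 3}}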

> * refuse to choose and raise an exception;
> * consider the intersection empty
>
> The first three choices will break the Liskov Substitution Principle,
> since now dicts *cannot* be substituted for sets. The fourth also breaks
> Liskov, but for a different reason:

I'm not going to evaluate your claim because I don't care about LSP.
You obsess over it simply because it's how you've come to make sense
of the confusion that is in the field.

> # sets
> (key in s) and (key in t) implies (key in s&t);
>
> but
>
> # dicts
> (key in d) and (key in e) *does not* imply (key in d&e)
>

Eh?  How so?  There is a straightforward and obvious mapping from a
dict to a set, so your last line could be re-written as "key in
(set(d) & set(e))", where it is clear that the implication *does*
hold.
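
For instance, taking set() of a dict gives you its keys, so over the
keys alone the implication clearly holds (Python 3 shown):

    >>> d = {'a': 1, 'b': 2}
    >>> e = {'a': 10, 'c': 3}
    >>> set(d) & set(e)              # intersection of the key sets
    {'a'}
    >>> 'a' in d and 'a' in e
    True
    >>> 'a' in (set(d) & set(e))
    True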

> So either way, whether you are an OOP purist who designs your classes
> with type-theoretic purity and the Liskov Substitution Principle in mind,
> or a pragmatist who designs your classes with delegation of
> implementation in mind, you can't sensibly derive dicts from sets.

You *can* sensibly derive dicts from sets, because there is a clear
function that can map a dict to a set.  The dict, then, is just a
specialization of set.

As an attempt to nail this terminology down:  there's "extending" a
type, and there's "specializing" a type.  One adds more methods and
the other refines what the existing methods do.  They are different.
(A rough illustration follows.)
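
An illustration of that distinction, with made-up class names:

    class ExtendedSet(set):
        """Extending: add a new method, leave the inherited ones alone."""
        def pick(self):
            # New behaviour: return an arbitrary element without removing it.
            return next(iter(self))

    class SpecializedSet(set):
        """Specializing: refine what an existing method does."""
        def add(self, item):
            # Refined behaviour: only accept strings.
            if not isinstance(item, str):
                raise TypeError("only strings allowed")
            super().add(item)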

>>>> Right.  The dict literal should be {:} -- the one obvious way to do
>>>> it.
>>>
>>> I don't agree it is obvious. It is as obvious as (,) being the empty
>>> tuple or [,] being the empty list.
>>
>> You're just being argumentative.  If there are sets as built-ins, then
>> {:} is the obvious dict literal, because {} is the obvious one for set.
>> You don't need [,] to be the list literal because there is no simpler
>> list-type.
>
> The point is that the obvious way to write an empty collection is using a
> pair of delimiters, not to shove an arbitrary separator separating
> nothing at all in there:

It's not an arbitrary separator, it is the obvious one, given that set
is already using the empty curly braces and dict uses the colon as a
separator.
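
For context, here is how the interpreter reads these spellings today
(Python 3):

    >>> type({})
    <class 'dict'>
    >>> type({1, 2})
    <class 'set'>
    >>> type({1: 2})
    <class 'dict'>
    >>> set()          # currently the only way to write an empty set
    set()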

>>>>> And the obvious way to form an empty set is by calling set(), the
>>>>> same as str(), int(), list(), float(), tuple(), dict(), ...
>>>>
>>>> Blah, blah.  Let me know when you got everyone migrated over to
>>>> Python.v3.
>>>
>>> What does this have to do with Python 3? It works fine in Python 2.
>>
>> I mean, your suggestions are coming from a "believer", not someone
>> wanting to understand the limitations of python or whether v3 has
>> succeeded at achieving its potential.
>
> "not someone wanting to understand the limitations of python..." -- are
> you aware that I started this thread?

Hence, the irony.

-- 
MarkJ
Tacoma, Washington


