[Python-3000] why interfaces shouldn't be trusted

tomer filiba tomerfiliba at gmail.com
Wed May 10 22:03:52 CEST 2006


i think it's only Bill who said that, but i wanted to show why interfaces
(and inheritance) shouldn't be the basis for type-checking.

here's a nice interface:

class IFile(object):
    def write(self, data):
        pass
    def read(self, count):
        pass

and here's a class that "implements/derives" from this interface, but
completely ignores/breaks it:

class MyFile(IFile):
    def write(self, x, y, z):
        ...

m = MyFile()

now m is an instance of IFile, but calling m.write with the signature of
IFile.write won't work...
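for instance (the exact wording of the TypeError varies between
python versions):

>>> isinstance(m, IFile)
True
>>> m.write("some data")
Traceback (most recent call last):
  ...
TypeError: write() missing 2 required positional arguments: 'y' and 'z'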

so, you could say, just enforce method signatures:
* the metaclass would check method signatures prior to creating the
class (a rough sketch of this follows below)
* add casting, i.e., looking at the object in more than one way,
according to its bases:
    cast(m, IFile).write()
so that MyFile could define an incompatible method signature, but
up-casting it would give you the original IFile method you'd expect.
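
here's roughly what such a signature-checking metaclass could look
like. this is a minimal sketch, not a real proposal: it uses the
modern inspect.signature api (in 2006-era pythons this would have
been getargspec), and the name SignatureChecked is made up. note
that comparing Signature objects also compares parameter names,
which is stricter than strictly necessary.

import inspect

class SignatureChecked(type):
    # reject subclasses whose methods don't match the signatures
    # declared by their base classes
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        for attr, obj in ns.items():
            if not callable(obj):
                continue
            for base in bases:
                original = getattr(base, attr, None)
                if original is None or not callable(original):
                    continue
                if inspect.signature(obj) != inspect.signature(original):
                    raise TypeError("%s.%s does not match the "
                        "signature declared by %s" % (name, attr,
                        base.__name__))
        return cls

class IFile(metaclass=SignatureChecked):
    def write(self, data):
        pass
    def read(self, count):
        pass

class MyFile(IFile):            # now fails at class-creation time,
    def write(self, x, y, z):   # not at call time
        pass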

and then you get java with a python syntax. so it's pointless.

in python, the type of x is effectively the method-resolution order
(mro) of x, i.e., it defines what happens when you do x.y. (by the
way, it would be nice if __mro__ were mutable)
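
for example, attribute lookup just walks the mro in order:

>>> class A:
...     x = "from A"
>>> class B(A):
...     pass
>>> B.__mro__
(<class '__main__.B'>, <class '__main__.A'>, <class 'object'>)
>>> B().x        # found on A, by walking B's mro
'from A'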

doing isinstance(a, b) only says that "a" has "b" in its bases, not
that "a" complies with "b"'s APIs or method signatures. there's no way
to tell whether a method's signature is what you expect before calling
it. therefore, i see no reason why people should use
type()/isinstance() on objects.

call it protocol, call it capabilities; in the end, it's "something the
object has" (hasattr), rather than "something about the object" (type).
so checking "something about the object", i.e. whether it inherits from
some interface or base class, is not helpful.

of course, ridding python of types is quite a drastic move. i would
suggest leaving the current mechanism, perhaps making it more lax,
but as the standard mechanism for early-checking type conformance,
use hasattr over isinstance.
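
an early check along these lines could look like this (a hypothetical
helper, not an actual api):

def check_filelike(obj):
    # check for the capabilities we need, not for a base class
    for name in ("read", "write"):
        if not hasattr(obj, name):
            raise TypeError("object has no %r method" % name)

check_filelike(m)    # passes: MyFile has read and write attributes,
                     # even though the signatures may still surprise us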

that leaves intact pyprotocols and the adapt PEP, which, imho, are
a very good example of a dynamic type system: casting is enforced
by the language, whereas adaptation can be performed by the objects
themselves, which is of course much more powerful.
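
for reference, the heart of the adapt PEP (PEP 246) is roughly this
(a simplified sketch, omitting the registry and the subtler error
handling of the real proposal):

def adapt(obj, protocol):
    # 1. already conforming? pass it through
    if isinstance(obj, protocol):
        return obj
    # 2. let the object adapt itself to the protocol
    conform = getattr(type(obj), "__conform__", None)
    if conform is not None:
        result = conform(obj, protocol)
        if result is not None:
            return result
    # 3. let the protocol adapt the object
    adapt_hook = getattr(protocol, "__adapt__", None)
    if adapt_hook is not None:
        result = adapt_hook(obj)
        if result is not None:
            return result
    raise TypeError("could not adapt %r to %r" % (obj, protocol))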

----

Oleg Broytmann wrote:
>   Even this is meaningless: '1'+1. So duck typing is:

yes, you *can* syntactically add "1" and 1, i.e., the code would
compile. evaluating the code would result in an exception,
"unsupported operand type(s) for +: 'int' and 'str'". that's the
situation today, and i don't see why it should change.

builtin types, like ints and strings, have an internal real "underlying type",
but python-level types shouldn't: a pythonic type is just a collection of
attributes.

----

one thing that worries me about generic methods is that they would
dispatch based on types... which means my proxies would all break.
please, think of the proxies! ;)

or at least add a __type__ special method that can affect type(x), i.e.
type(x) --> x.__type__(), so proxies could return the type of the
proxied object, rather than the type of the proxy itself.
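
concretely, the idea is something like this (__type__ is the proposed
hook, which does not exist today, and Proxy is just an illustration):

class Proxy:
    def __init__(self, target):
        object.__setattr__(self, "_target", target)
    def __getattr__(self, name):
        # delegate all attribute access to the proxied object
        return getattr(object.__getattribute__(self, "_target"), name)
    def __type__(self):
        # proposed hook: let type(p) see through the proxy
        return type(object.__getattribute__(self, "_target"))

p = Proxy([1, 2, 3])
# today:    type(p) is Proxy, so type-based dispatch picks the wrong
#           implementation
# proposed: type(p) would call p.__type__() and return <class 'list'>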

that way, generic methods could work with proxies.



-tomer

