The Cost of Dynamism (was Re: Python 2.x or 3.x, which is faster?)

Steven D'Aprano steve at pearwood.info
Sat Mar 12 11:56:45 EST 2016


On Sun, 13 Mar 2016 12:42 am, BartC wrote:

> Ad-hoc attributes I don't have as much of a problem with, as they can be
> handy. But predefined ones also have their points. (For one thing, I
> know how to implement those efficiently.)
> 
> However, when you have a function call like this: M.F(), where M is an
> imported module, then it is very unlikely that the functions in M are
> going to be created, modified, deleted or replaced while the program
> runs. [I mean, after the usual process of executing each 'def' statement.]

What do you consider "very unlikely"? And how do you know what people will
choose to do?


> Why then should it have to suffer the same overheads as looking up
> arbitrary attributes? And on every single call?

Because they *are* arbitrary attributes of the module. There's only one sort
of attribute in Python. Python doesn't invent multiple lookup rules for
attributes-that-are-functions, attributes-that-are-classes,
attributes-that-are-ints, attributes-that-are-strings, and so on. They are
all the same.

You gain a simpler implementation, a simpler execution model, simpler rules
for users to learn, and the ability to perform some pretty useful dynamic
tricks on those occasions when you need them. For example, monkey-patching
a module for testing or debugging purposes.
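
For instance, here's a minimal sketch of monkey-patching the standard
random module for a test (fake_random is invented for illustration):

    import random

    def fake_random():
        """Deterministic stand-in, so the test is repeatable."""
        return 0.5

    random.random = fake_random   # rebind the module attribute at runtime
    assert random.random() == 0.5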

In languages where functions are different from other values, you have to
recognise ahead of time "some day, I may need to dynamically replace this
function with another" and write your code specially to take that into
account, probably using some sort of "Design Pattern". In Python, you just
write your code in the normal fashion.
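
For example, a toy sketch: the same name can be bound to one function now
and a different one later, with no special machinery:

    def greet_formally(name):
        return "Good day, %s." % name

    def greet_casually(name):
        return "Hi, %s!" % name

    greet = greet_formally
    print(greet("Barbara"))    # Good day, Barbara.
    greet = greet_casually     # swap the implementation at runtime
    print(greet("Barbara"))    # Hi, Barbara!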


>> I also have a high level of method/attribute transparency. It doesn't
>> matter if I declare:
>>
>>         def this_or_that(self):
>>             if self.that:
>>                 self.that()
>>             else:
>>                 self.this()
>>
>>         [...]
>>
>>             self.that = True
>>
>> or:
>>
>>             self.this_or_that = self.that
> 
> This example, I don't understand. Do you mean that when your write X.Y,
> that Y can be an attribute of X one minute, and a method the next? (In
> which case I wouldn't want to have to maintain your code!)

Again, methods *are* attributes. Notice that Python doesn't have two
complete sets of functions:

getattr, setattr, delattr
getmethod, setmethod, delmethod

Again, as above, there is *one* look-up rule, which applies equally to
attributes-that-are-methods and attributes-that-are-floats (say). What
distinguishes the two cases is whether the attribute is a method object or
a float object, not how you access it or define it.
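
A hypothetical class shows the single rule in action:

    class Point(object):
        def __init__(self):
            self.x = 1.5     # an attribute that happens to be a float

        def move(self):
            pass             # an attribute that happens to be a method

    p = Point()
    print(getattr(p, "x"))      # 1.5
    print(getattr(p, "move"))   # <bound method Point.move ...>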

In practice, people rarely do something like:

    x.y = method
    # later
    x.y = 999
    # later still
    x.y = method # again


because that would be a strange thing to do, not because it is impossible.
People don't do it because there's no reason to, not because they can't:
for the same reason we don't walk around with pots of honey carefully
nestled in our hair.


>> Somewhat related, every method is an automatic delegate. Defining
>> callbacks is a breeze:
>>
>>         def clickety_click(self, x, y):
>>             [...]
>>
>>         [...]
>>             window.register_mouse_click_callback(self.clickety_click)
> 
> I don't follow this either. What's the advantage of dynamism here?

I think that Marko's point is that because obj.clickety_click is just a
regular attribute that returns a method object, you don't have to invent
special syntax for referring to methods *without* calling them. You just
use the same attribute access syntax, but don't call the result.
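
In other words, something like this just works (Handler and its print are
invented stand-ins for real callback code):

    class Handler(object):
        def clickety_click(self, x, y):
            print("clicked at (%d, %d)" % (x, y))

    h = Handler()
    callback = h.clickety_click   # a bound method: self comes along for free
    callback(10, 20)              # the toolkit can call it like any function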


>> That optimization wouldn't have any effect on any of my code.
>>
>> More generally, every method call in Python is such an elaborate
>> exercise that dabbling with character constants is going to be a drop in
>> the ocean.
> 
> When you dabble with lots of little things, then they can add up. To the
> point where an insignificant optimisation can become significant.

Of course. Reduced runtime efficiency is the cost you pay for the
flexibility gained by significant dynamism. It's a trade-off between
efficiency, convenience, simplicity, etc. It's quite legitimate for
language designers to choose to put that trade-off in different places, or
indeed for the trade-off to change over time.

For example, Java's strictness was found to be too limiting and static, and
so reflection was added to the language to add some dynamism. Here's the
description from a Stackoverflow answer:

http://stackoverflow.com/questions/37628/what-is-reflection-and-why-is-it-useful

    For example, say you have an object of an unknown type in Java, 
    and you would like to call a 'doSomething' method on it if one 
    exists. Java's static typing system isn't really designed to 
    support this unless the object conforms to a known interface, 
    but using reflection, your code can look at the object and find 
    out if it has a method called 'doSomething' and then call it if 
    you want to.



In Java, you have to write something like:

    import java.lang.reflect.Method;

    // look the method up by name at runtime, then invoke it on foo
    Method method = foo.getClass().getMethod("doSomething");
    method.invoke(foo);


In Python, you never talk about reflection, because you don't need any
special syntax to make it work. You just write:

    foo.doSomething()

exactly the same as the "ordinary" case of calling foo.doSomething(). From
the perspective of a Python programmer, the idea that Java programmers need
to talk about "reflection" to get it to work is quite weird.

(Python does have something like getMethod, namely getattr, but you wouldn't
use that with a string literal. You would only use it when the name of the
method wasn't known until runtime.)
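
A sketch of that situation, where the method name arrives as data (Foo and
doSomething are made up here):

    class Foo(object):
        def doSomething(self):
            print("did something")

    foo = Foo()
    name = "doSomething"                # imagine this arrived at runtime
    method = getattr(foo, name, None)
    if callable(method):
        method()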


-- 
Steven