[Python-Dev] 'hasattr' is broken by design

Michael Foord fuzzyman at voidspace.org.uk
Wed Aug 25 19:58:20 CEST 2010


  On 25/08/2010 19:27, P.J. Eby wrote:
> At 12:10 PM 8/25/2010 +1200, Greg Ewing wrote:
>> Consider an object that is trying to be a transparent
>> proxy for another object, and behave as much as possible
>> as though it really were the other object. Should an
>> attribute statically defined on the proxied object be
>> considered dynamically defined on the proxy? If so, then
>> the proxy isn't as transparent as some people may want.
>
> Yep.  That's why the proposed addition to inspect is a bad idea.  If 
> we encourage that sort of static thinking, it will lead to people 
> creating all sorts of breakage with respect to more dynamic code.
>
> AFAICT, the whole "avoid running code" thing only makes sense for a 
> debugging tool 

I mentioned another use case - pulling out docstrings. IDEs or other 
tools that work with live objects may have many such use cases.
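For instance (a toy sketch - the `Lazy` class here is purely illustrative), a tool can pull a docstring off the *type* without ever touching the instance's dynamic attribute machinery:

```python
import inspect

class Lazy(object):
    """Lazily computed results."""
    def __getattr__(self, name):
        # Simulate a dynamic attribute whose computation has side effects.
        raise RuntimeError("expensive computation triggered")

obj = Lazy()
# Reading the docstring from the type runs no code on the instance.
print(inspect.getdoc(type(obj)))   # -> Lazily computed results.
```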

For proxying objects you can't use the __getattr__ approach for the 
"magic methods" which aren't looked up through the dynamic attribute 
'faking' process. Several proxy libraries I've seen get round this by 
providing *every* magic method on the proxy class and delegating. This 
still breaks certain types of duck typing. For example with Python 2.6:

>>> class x(object):
...     @property
...     def __call__(self):
...         raise AttributeError
...
>>> a = x()
>>> callable(a)
True


If your proxy class defines __call__ then callable(proxy) returns True, 
even though delegating the call to the proxied object would raise an 
AttributeError. A *better* approach (IMO), for both the magic methods 
and the ordinary attributes / methods, is to dynamically generate a 
class that mimics the proxied object, defining only the magic methods 
the proxied object actually supports (and caching the generated classes 
per proxied type if you are worried about the overhead). This would 
also work with the suggested "passive introspection" function.

All the best,

Michael Foord


> -- at which point, you can always use the trace facility and throw an 
> error when any Python code runs that's not part of your debugging 
> tool.  Something like:
>
> def exists(ob, attr):
>    __running__ = True
>    # ... set trace function here
>    try:
>        try:
>            getattr(ob, attr)
>            return True
>        except AttributeError:
>            return False
>        except CodeRanError:
>             return True   # or False if you prefer
>    finally:
>        __running__ = False
>        # restore old tracing here
>
> Where the trace function is just something that throws CodeRanError if 
> it detects a "call" event and the __running__ flag is True.  This 
> would stop any Python code from actually executing.  (It'd need to 
> keep the same trace function for c_call events, since that might lead 
> to nested non-C calls.)
>
> Of course, a debugger's object inspection tool would probably actually 
> want to return either the attribute value, or a special value to mean 
> "dyanmic calculation needed".
>


-- 
http://www.ironpythoninaction.com/
