Best practice for caching hash

2QdxY4RzWzUUiLuE at potatochowder.com
Sat Mar 12 16:16:03 EST 2022


On 2022-03-12 at 21:45:56 +0100,
Marco Sulla <Marco.Sulla.Python at gmail.com> wrote:

[ ... ]

> So if I do not cache when the object is unhashable, I save a little
> memory per object (1 int) and I get a better error message every time.

> On the other hand, if I leave things as they are, testing the
> unhashability of the object multiple times is faster. The code:
> 
> try:
>     hash(o)
> except TypeError:
>     pass
> 
> executes in nanoseconds when called more than once, even if o is not
> hashable. Not sure if this is a big advantage.
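
To make the trade-off above concrete, a cache-successes-only variant
might look roughly like this (a minimal, hypothetical FrozenDict
sketch; the name and layout are assumptions, not the actual code under
discussion):

# Hypothetical sketch: cache only a successful hash; a failure is
# recomputed (and re-raised, with its original message) on every call.
class FrozenDict:
    def __init__(self, *args, **kwargs):
        self._items = dict(*args, **kwargs)
        self._hash = None  # the one cached int the quoted text mentions

    def __hash__(self):
        if self._hash is None:
            # Raises TypeError if any value is unhashable; the failure
            # is not cached, so the specific error recurs every time.
            self._hash = hash(frozenset(self._items.items()))
        return self._hash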

Once hashing an object fails, why would an application try again?  I can
see an application using a hashable value in a hashing situation again
and again and again (i.e., taking advantage of the cache), but what's
the use case for *repeatedly* trying to use an unhashable value again
and again and again (i.e., taking advantage of a cached failure)?

So I think that caching the failure is a lot of extra work for no
benefit.
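
For contrast, caching the failure too means roughly this (again a
hypothetical sketch, not the actual implementation): an extra sentinel
state whose only payoff is that a *repeated* hash() of an unhashable
object fails a bit faster, with a less specific message.

_UNHASHABLE = object()  # sentinel meaning "hashing already failed once"

class FrozenDict:
    def __init__(self, *args, **kwargs):
        self._items = dict(*args, **kwargs)
        self._hash = None

    def __hash__(self):
        if self._hash is _UNHASHABLE:
            # Fast path on repeat failures, but the original, more
            # specific TypeError from the offending value is lost.
            raise TypeError("unhashable FrozenDict")
        if self._hash is None:
            try:
                self._hash = hash(frozenset(self._items.items()))
            except TypeError:
                self._hash = _UNHASHABLE
                raise
        return self._hash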

