@lru_cache on functions with no arguments

Ian Kelly ian.g.kelly at gmail.com
Thu Aug 3 12:02:54 EDT 2017


On Thu, Aug 3, 2017 at 9:55 AM, Serhiy Storchaka <storchaka at gmail.com> wrote:
> 03.08.17 18:36, Ian Kelly wrote:
>>
>> The single variable is only a dict lookup if it's a global. Locals and
>> closures are faster.
>>
>> import functools
>>
>> def simple_cache(function):
>>     sentinel = object()
>>     cached = sentinel
>>
>>     @functools.wraps(function)
>>     def wrapper(*args, **kwargs):
>>         nonlocal cached
>>         if args or kwargs:
>>             return function(*args, **kwargs)  # No caching with args
>>         if cached is sentinel:
>>             cached = function()
>>         return cached
>>     return wrapper
>>
>> *Zero* dict lookups at call-time. If that's not (marginally) faster
>> than lru_cache with maxsize=None I'll eat my socks.
>
>
> With salt?
>
> $ ./python -m timeit -s 'from simple_cache import simple_cache; f = simple_cache(int)' -- 'f()'
> 500000 loops, best of 5: 885 nsec per loop
> $ ./python -m timeit -s 'from functools import lru_cache; f = lru_cache(maxsize=None)(int)' -- 'f()'
> 1000000 loops, best of 5: 220 nsec per loop

Fixed (forcing the pure-Python lru_cache so the comparison is like-for-like):

$ python3 -m timeit -s 'from simple_cache import simple_cache; f = simple_cache(int)' -- 'f()'
1000000 loops, best of 3: 0.167 usec per loop
$ python3 -m timeit -s 'import sys; sys.modules["_functools"] = None; from functools import lru_cache; f = lru_cache(maxsize=None)(int)' -- 'f()'
1000000 loops, best of 3: 0.783 usec per loop
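
The sys.modules["_functools"] = None line in the setup is what makes the second
timing a fair comparison: a None entry makes "import _functools" fail, so
functools falls back to its pure-Python _lru_cache_wrapper and both caches are
plain Python code. A minimal standalone sketch of the same trick, assuming
functools has not already been imported when the script starts and an arbitrary
iteration count:

import sys

# A None entry in sys.modules makes "import _functools" raise ImportError,
# so functools keeps its pure-Python _lru_cache_wrapper instead of swapping
# in the C accelerator.  This only has an effect if functools has not
# already been imported in this process.
sys.modules["_functools"] = None

import functools
import timeit

f = functools.lru_cache(maxsize=None)(int)

# With the accelerator blocked the wrapper is a plain Python closure;
# with it available it is the built-in functools._lru_cache_wrapper type.
print(type(f))

# In-process version of the shell one-liners above; the iteration count
# is arbitrary.
print(timeit.timeit(f, number=1_000_000))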


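For completeness, a small usage sketch of the closure-based decorator quoted
above, assuming it is saved as simple_cache.py the way the timeit runs import
it (load_config is just a made-up expensive zero-argument function):

import time

from simple_cache import simple_cache  # the module used in the timeit runs above

@simple_cache
def load_config():
    # Hypothetical expensive zero-argument work.
    time.sleep(1)
    return {"answer": 42}

load_config()  # ~1 s: runs the function once and stores the result in the closure
load_config()  # immediate: returns the value held by the nonlocal variable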
