[issue32422] Reduce lru_cache memory overhead.

INADA Naoki report at bugs.python.org
Mon Dec 25 01:53:12 EST 2017


INADA Naoki <songofacandy at gmail.com> added the comment:

> Please stop revising every single thing you look at.  The traditional design of LRU caches used doubly linked lists for a reason.  In particular, when there is a high hit rate, the links can be updated without churning the underlying dictionary.

I'm not proposing to remove the doubly linked list; OrderedDict
uses a doubly linked list too, and I found no problem in the
mostly-hit scenario.

On the other hand, I found a problem with OrderedDict in the
mostly-miss scenario.
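
To make the hit/miss distinction concrete, here is a minimal
Python sketch of the dict + doubly linked list pattern (the class
and method names are illustrative, not the real CPython internals;
the actual lru_cache is written in C and differs in detail).  A hit
only relinks a node; only a miss mutates the dict and may evict:

    # Sketch of the dict + doubly-linked-list LRU pattern.
    class _Link:
        __slots__ = ("key", "value", "prev", "next")

    class LRU:
        def __init__(self, maxsize):
            self.maxsize = maxsize
            self.table = {}              # key -> _Link
            self.root = root = _Link()   # sentinel of a circular list
            root.prev = root.next = root

        def _link_front(self, link):
            # Insert link right after the sentinel (most recently used).
            root = self.root
            link.prev, link.next = root, root.next
            root.next.prev = link
            root.next = link

        def get(self, key):
            link = self.table.get(key)
            if link is None:
                return None
            # Hit path: only pointer updates; the dict is untouched.
            link.prev.next = link.next
            link.next.prev = link.prev
            self._link_front(link)
            return link.value

        def put(self, key, value):
            # Miss path (assumes the caller checked get() first, as
            # lru_cache's wrapper does): the dict is mutated, and the
            # least recently used entry is evicted when the cache is full.
            if len(self.table) >= self.maxsize:
                oldest = self.root.prev
                oldest.prev.next = self.root
                self.root.prev = oldest.prev
                del self.table[oldest.key]
            link = _Link()
            link.key, link.value = key, value
            self.table[key] = link
            self._link_front(link)

In this design a hit costs a handful of pointer writes, so a
mostly-hit workload never churns the underlying dict; every miss,
by contrast, inserts into the dict and may also delete from it.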

Now I think lru_cache's implementation is better than OrderedDict.
PyODict is slower than lru_cache's dict + linked list for
historical reasons (compatibility with the pure Python
implementation).
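
For a rough feel of the difference, one can time the reordering an
OrderedDict-based LRU must do on every hit (a sketch only: it does
not reproduce lru_cache's C-level relinking, it just shows the
extra bookkeeping an OrderedDict hit pays on top of the lookup, and
the numbers vary by CPython version and machine):

    import timeit
    from collections import OrderedDict

    od = OrderedDict((i, i) for i in range(1000))
    d = dict(od)

    def od_hit():
        od[500]               # lookup...
        od.move_to_end(500)   # ...plus a reorder on every hit

    def dict_hit():
        d[500]                # bare lookup, no reordering

    print("OrderedDict hit:", timeit.timeit(od_hit, number=100_000))
    print("plain dict hit: ", timeit.timeit(dict_hit, number=100_000))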

So I'll stop trying to remove lru_cache's own implementation.
Instead, I'll try to reduce the memory overhead of lru_cache by
removing the GC header from the link nodes.
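
As a rough way to observe the per-entry cost being targeted, one
can measure how much a bounded cache grows per entry with
tracemalloc (a sketch: the figure includes the key and value
objects as well as the internal link node, and the absolute numbers
depend on the CPython version and platform):

    import tracemalloc
    from functools import lru_cache

    N = 100_000

    @lru_cache(maxsize=N)    # bounded cache -> one link node per entry
    def f(x):
        return x

    tracemalloc.start()
    before = tracemalloc.get_traced_memory()[0]
    for i in range(N):
        f(i)                 # all misses, no evictions
    after = tracemalloc.get_traced_memory()[0]
    tracemalloc.stop()
    print(f"~{(after - before) / N:.0f} bytes per cache entry")

Each link node that no longer carries a GC header saves
sizeof(PyGC_Head) bytes per cached entry.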

----------
title: Make OrderedDict can be used for LRU from C -> Reduce lru_cache memory overhead.

_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue32422>
_______________________________________

