|Jeremy Hylton : weblog : 2003-11-04|
Tuesday, November 04, 2003, 1:05 a.m.
We are working on a new ZEO cache design as part of the multi-version concurrency control project for ZODB. Today Tim Peters suggested keeping the cache in memory all of the time.
In our Zope clusters, we tend to run machines with one or two gigabytes of RAM. At one customer site, we noticed that the storage server never used more than a few hundred megabytes of that memory. We had designed the new FileStorage pack implementation to be simple, then optimized it to use less memory. For that customer, at least, we should have used lots more memory; we could have made it fast instead.
An in-memory ZEO cache is appealing, because you don't want to waste time seeking around a file or files to keep the cache up to date. In normal operation, we expect the cache to be full, so each time the client fetches a new object, we need to evict old objects to make space for it. If those were all in-memory operations, they would be cheap.
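The fetch-and-evict loop could be sketched as an in-memory cache with LRU eviction. This is just an illustration of the idea, not the actual ZEO cache code; the class and method names are made up, and real cache entries carry more bookkeeping than an (oid, serial, pickle) triple.

```python
from collections import OrderedDict

class MemoryCache:
    """Hypothetical in-memory object cache with LRU eviction.

    Sizes are tracked per entry so eviction frees roughly the
    space the incoming object needs.  Everything happens in
    memory; no seeks, no file I/O on the hot path.
    """

    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0
        self.data = OrderedDict()  # oid -> (serial, pickle)

    def load(self, oid):
        entry = self.data.get(oid)
        if entry is not None:
            self.data.move_to_end(oid)  # mark as recently used
        return entry

    def store(self, oid, serial, pickle):
        if oid in self.data:
            self.used -= len(self.data.pop(oid)[1])
        # Evict least-recently-used entries until the new one fits.
        while self.used + len(pickle) > self.max_bytes and self.data:
            _, (_, old_pickle) = self.data.popitem(last=False)
            self.used -= len(old_pickle)
        self.data[oid] = (serial, pickle)
        self.used += len(pickle)

    def invalidate(self, oid):
        entry = self.data.pop(oid, None)
        if entry is not None:
            self.used -= len(entry[1])
```

With the whole structure in a dict, load, store, and eviction are all constant-time dictionary operations, which is the cheapness the in-memory design is after.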
One question is whether this works for applications that have more modest system memory. My older desktop machine has 256MB of memory, but my newer desktop and laptop both have 512MB. What's typical these days? Maybe it doesn't matter, so long as the cache size is tunable.
There's still some benefit to having a persistent cache that survives across processes. If in-memory is a win, you could periodically dump a snapshot to disk, perhaps with an append-only log that records invalidations, so that on restart you could quickly toss the invalid stuff that accumulated between the snapshot and the shutdown. Of course, on a clean shutdown, you can write a new snapshot.
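The snapshot-plus-log idea could look something like this. To be clear, the file layout, names, and pickle-based format below are assumptions for the sketch, not ZEO's actual on-disk format:

```python
import pickle

class PersistentCache:
    """Sketch of a cache that snapshots to disk and keeps an
    append-only invalidation log between snapshots.
    (Hypothetical names and format, not ZEO's real layout.)
    """

    def __init__(self, snapshot_path, log_path):
        self.snapshot_path = snapshot_path
        self.log_path = log_path
        self.data = {}  # oid -> object state
        self._load()

    def _load(self):
        # Start from the last snapshot, if there is one...
        try:
            with open(self.snapshot_path, 'rb') as f:
                self.data = pickle.load(f)
        except FileNotFoundError:
            return
        # ...then quickly toss anything invalidated since it was written.
        try:
            with open(self.log_path) as f:
                for oid in f.read().split():
                    self.data.pop(oid, None)
        except FileNotFoundError:
            pass

    def invalidate(self, oid):
        self.data.pop(oid, None)
        with open(self.log_path, 'a') as f:
            f.write(oid + '\n')  # append-only: cheap, sequential writes

    def snapshot(self):
        # Periodically, or on clean shutdown: write a fresh
        # snapshot and truncate the invalidation log.
        with open(self.snapshot_path, 'wb') as f:
            pickle.dump(self.data, f)
        open(self.log_path, 'w').close()
```

The appeal of the split is that normal operation only ever appends to the log, while the expensive full write happens at snapshot time, when you can afford it.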