low-end persistence strategies?

Cameron Laird claird at lairds.us
Wed Feb 16 15:19:32 EST 2005


In article <7xll9pus7o.fsf_-_ at ruckus.brouhaha.com>,
Paul Rubin  <http://phr.cx@NOSPAM.invalid> wrote:
>I've started a few threads before on object persistence in medium to
>high end server apps.  This one is about low end apps, for example, a
>simple cgi on a personal web site that might get a dozen hits a day.
>The idea is you just want to keep a few pieces of data around that the
>cgi can update.
>
>Immediately, typical strategies like using a MySQL database become too
>big a pain.  Any kind of compiled and installed 3rd party module (e.g.
>Metakit) is also too big a pain.  But there still has to be some kind
>of concurrency strategy, even if it's something like crude file
>locking, or else two people running the cgi simultaneously can wipe
>out the data store.  But you don't want crashing the app to leave a
>lock around if you can help it.
>
>Anyway, something like dbm or shelve coupled with flock-style file
>locking and a version of dbmopen that automatically retries after 1
>second if the file is locked would do the job nicely, plus there could
>be a cleanup mechanism for detecting stale locks.
>
>Is there a standard approach to something like that, or should I just
>code it the obvious way?
>
>Thanks.

I have a couple of oblique, barely-helpful reactions; I
wish I knew better solutions.

First:  I'm using Metakit and SQLite; they give me more
confidence and fewer surprises than dbm.
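
For concreteness, here is a minimal sketch of the SQLite route, assuming
the sqlite3 module (bundled with Python 2.5 and later; the same idea works
with the older third-party pysqlite bindings).  The file name and function
name are just illustrations.  SQLite does its own file locking, so two
simultaneous CGI hits serialize on the database instead of clobbering it:

import sqlite3

def bump_counter(path="hits.db"):
    # timeout=5 makes SQLite retry for up to 5 seconds if another
    # process holds the write lock, instead of failing immediately.
    conn = sqlite3.connect(path, timeout=5)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v INTEGER)")
        conn.execute("INSERT OR IGNORE INTO kv VALUES ('hits', 0)")
        conn.execute("UPDATE kv SET v = v + 1 WHERE k = 'hits'")
        conn.commit()
        return conn.execute(
            "SELECT v FROM kv WHERE k = 'hits'").fetchone()[0]
    finally:
        conn.close()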

Second:  Locking indeed is a problem, and I haven't
found a good global solution for it yet.  I end up with
local fixes: rather project-specific locking schemes that
exploit knowledge that, for example, there are no symbolic
links or NFS mounts to worry about, or ...
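
If you do code the flock approach from the original post "the obvious
way", a sketch might look like the following.  It assumes a Unix host
(fcntl is unavailable on Windows), and the file names and retry policy
are illustrative only.  One nice property of flock is that the kernel
releases the lock when the process exits, so a crashed CGI does not
leave a stale lock behind:

import fcntl, shelve, time

def open_locked(path="data.shelve", lockpath="data.lock", retries=10):
    lock = open(lockpath, "w")
    for _ in range(retries):
        try:
            # Non-blocking exclusive lock on a separate lock file.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return shelve.open(path), lock
        except IOError:
            time.sleep(1)   # someone else has it; retry after 1 second
    lock.close()
    raise RuntimeError("could not acquire %s" % lockpath)

# Typical CGI usage:
#     db, lock = open_locked()
#     try:
#         db["hits"] = db.get("hits", 0) + 1
#     finally:
#         db.close()
#         lock.close()    # closing the file releases the flock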

Good luck.


