Python ORM library for distributed mostly-read-only objects?

Roy Smith roy at panix.com
Mon Jun 23 11:11:06 EDT 2014


In article <mailman.11202.1403534666.18130.python-list at python.org>,
 William Ray Wing <wrw at mac.com> wrote:

> On Jun 23, 2014, at 12:26 AM, smurfix at gmail.com wrote:
> 
> > On Sunday, June 22, 2014 3:49:53 PM UTC+2, Roy Smith wrote:
> > 
> >> Can you give us some more quantitative idea of your requirements?  How 
> >> many objects?  How much total data is being stored?  How many queries 
> >> per second, and what is the acceptable latency for a query?
> > 
> > Not yet, A whole lot, More than fits in memory, That depends.
> > 
> > To explain. The data is a network of diverse related objects. I can keep 
> > the most-used objects in memory but not all of them. Indeed, I _need_ to 
> > keep them, otherwise this will be too slow, even when using Mongo instead 
> > of SQLAlchemy. Which objects are "most-used" changes over time.
> > 
> 
> Are you sure it won't fit in memory?  Default server memory configs these 
> days tend to start at 128 Gig, and scale to 256 or 384 Gig.

I'm not sure what "default" means, but it's certainly possible to get 
machines with that much RAM.  On the other hand, the amount of RAM on a 
single machine isn't really a limit either.  There are very easy-to-use 
technologies these days (e.g. memcache) which let you build clusters 
that effectively aggregate the physical RAM of multiple machines.  And 
database sharding gives you a different flavor of memory aggregation.
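As a rough illustration of the memcache approach, here's a minimal 
read-through cache sketch using the python-memcached client.  The server 
list, key scheme, and load_from_db() fallback are all hypothetical 
placeholders, not anything from the original thread:

    # Minimal read-through cache sketch, assuming python-memcached and a
    # hypothetical load_from_db() database fallback.
    import memcache

    # Keys are hashed across all listed nodes, so the usable cache is
    # roughly the sum of the RAM given to each memcached instance.
    mc = memcache.Client(['cache1:11211', 'cache2:11211', 'cache3:11211'])

    def get_object(key):
        """Return the object for `key`, hitting the DB only on a cache miss."""
        obj = mc.get(key)
        if obj is None:
            obj = load_from_db(key)        # hypothetical database lookup
            mc.set(key, obj, time=300)     # keep it cached for 5 minutes
        return obj

    def load_from_db(key):
        # Placeholder for the real query (SQLAlchemy, Mongo, whatever).
        raise NotImplementedError

Because the most-used objects change over time, the LRU eviction that 
memcached does anyway keeps the hot working set in RAM without you 
having to track it yourself.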
