python-noob - which container is appropriate for later exporting into MySQL + matplotlib?

someone newsboost at gmail.com
Sat Apr 13 17:36:07 EDT 2013


On 04/13/2013 10:01 PM, Dennis Lee Bieber wrote:
> On Sun, 14 Apr 2013 00:03:25 +1000, Chris Angelico <rosuav at gmail.com>
> declaimed the following in gmane.comp.python.general:

[ ....]

>> * Create a table with a number of rows with an ID and a counter,
>> initialized to 0
>> * Repeatedly, in parallel, perform a transaction:
>>    - Increment the counter on one of the rows (at random)
>>    - Increment a "possible" in-memory counter for that row
>>    - Commit the database transaction
>>    - Increment a "confirmed" in-memory counter for that row
>> * When an error of "database seems to be down" is detected, wait for
>> it to come up again, then query the table. Each counter must be at
>> least its corresponding "confirmed" value and at most its
>> "possible" value.
>>
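The test described above can be sketched with Python's sqlite3 module. This is a minimal single-threaded version (the table name "counters" and the row/iteration counts are my own choices; a real test would run the loop from several processes in parallel and kill the database mid-run):

```python
import random
import sqlite3

N_ROWS = 10

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, n INTEGER)")
conn.executemany("INSERT INTO counters VALUES (?, 0)",
                 [(i,) for i in range(N_ROWS)])
conn.commit()

possible = [0] * N_ROWS   # incremented before the commit
confirmed = [0] * N_ROWS  # incremented only after the commit succeeds

for _ in range(1000):
    row = random.randrange(N_ROWS)
    conn.execute("UPDATE counters SET n = n + 1 WHERE id = ?", (row,))
    possible[row] += 1
    conn.commit()
    confirmed[row] += 1

# After a crash and restart, each stored counter must satisfy
# confirmed[id] <= n <= possible[id]; with no crash they are all equal.
for row_id, n in conn.execute("SELECT id, n FROM counters"):
    assert confirmed[row_id] <= n <= possible[row_id]
```

The key point is the ordering: "possible" is bumped before the commit and "confirmed" after it, so the two in-memory arrays bracket whatever actually reached disk.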
> 	SQLite is a "file server" database (like M$ JET engine [aka:
> "Access"]). Its locking system is multi-stage. It allows multiple
> concurrent readers in a "shared" lock state. Only one connection can
> perform write operations (a "reserved" lock) alongside the readers. A
> second connection attempting to perform a write will be rejected with a
> "database is locked" condition. Then it really gets nasty -- when the
> writer attempts to commit the update, the first step is to block other
> connections from even entering the read state (the "pending" lock).
> However, the writer itself is blocked until all remaining readers have
> exited; only then does it have exclusive access, and SQLite makes the
> changes to the database file itself (prior to that, the writer
> connection was only changing page images in memory).

Ok, this makes sense... It's not something I'll bother with to begin 
with, but I can see that this becomes important later (for critical apps).

[ ....]

> 	In the commit phase, SQLite first tries to ensure the rollback
> journal is flushed to disk -- but that is apparently out of its control:
> it can submit a sync command to the OS, but has to rely on what the OS
> tells it about the state of the writes to disk (the book indicates that
> some IDE drives would lie when queried about sync status while still
> holding unwritten data in their on-board buffers). Only after the
> rollback journal is safely on disk does it write the changes to the
> database file itself.

I agree, this must be a problem when the OS is lying...
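For what it's worth, how aggressively SQLite asks the OS to sync can be tuned per database with the `synchronous` pragma; this sketch just reads and changes the setting (an in-memory database is used only to keep the snippet self-contained):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# PRAGMA synchronous controls how often SQLite issues a sync to the OS:
#   OFF (0)    - never sync: fastest, but a crash can corrupt the database
#   NORMAL (1) - sync only at the most critical moments
#   FULL (2)   - sync the journal and database around every commit
current = conn.execute("PRAGMA synchronous").fetchone()[0]

conn.execute("PRAGMA synchronous = OFF")
after = conn.execute("PRAGMA synchronous").fetchone()[0]  # 0
```

Lowering the setting trades durability for speed; it does not help with drives that lie about sync status, which is exactly the failure mode described above.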

> Crash during the journal write: on restart no journal is found; that
> transaction is lost, but the database itself is clean.
>
> Crash after the journal is written, during the database update: restart
> finds the journal, assumes the database is suspect, and rolls back the
> pages; the database is restored to its pre-transaction state.
>
> Crash after the database sync, during removal of the journal: restart
> either finds the journal still there and rolls back the pages
> (restoring the pre-transaction state), or the file was already removed
> from the directory and SQLite determines the database file is good,
> with the last transaction in place.

Ok, this is a bit more advanced. I'll try to run my own experiments 
first, and after some time I guess I can dig more into these details. 
Thanks.
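The rollback journal in those crash scenarios is an ordinary file next to the database, and its lifetime is easy to watch from Python (the file name is my own choice; the `-journal` suffix is what SQLite uses in its default journal mode):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
journal = path + "-journal"  # rollback journal in the default mode

conn = sqlite3.connect(path, isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN IMMEDIATE")
conn.execute("INSERT INTO t VALUES (1)")
# While the write transaction is open, the hot journal sits on disk;
# a crash at this point is the "restart finds journal" case above.
during = os.path.exists(journal)  # True

conn.execute("COMMIT")
# After a successful commit the journal is deleted again.
after = os.path.exists(journal)   # False
```

A restart that finds a leftover `-journal` file is exactly what triggers the automatic rollback described in the quoted scenarios.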




More information about the Python-list mailing list