Py+SQLite or other (big output) ?

Magnus Lycka lycka at carmen.se
Tue Apr 11 14:10:04 EDT 2006


DurumDara wrote:
> I want to process a lot of data with Python, and want to store it in a database.
...
> So I want to use one database file - but I want to protect it.
> How can I do that with SQLite?
> I see these solutions:
> - 1. Use transactions.
> - 2. Create a full copy of the database after every bigger transaction.
> - 3. Shadow file ???
> - 4. Mirror database (this is problematic to synchronize).
> 
> The transactions are a very good thing, but they do not protect the
> database from corruption.
> The copy operation is better, but it slows down processing a lot,
> because the result db grows fast, and copying 1/2 or 2/3 GBs is slow and
> not very practical.

With these requirements (data recovery, sizes of several gigabytes,
transaction safety etc.) you might want to consider something "heavier"
than SQLite.

Of course, there is more work in administering something like DB2, Oracle
or PostgreSQL than SQLite, but the application code is just as easy as
for SQLite, and they are built to provide very robust storage of large
amounts of data in a transaction-safe way, with ample possibilities
to spread data across disks etc.
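
To illustrate the point about the code: here is a rough sketch (my own
example, not from any particular project) of how little changes between
SQLite and PostgreSQL when you stick to the DB-API. It assumes the
pysqlite2 and psycopg2 driver modules and a made-up "log" table; any
DB-API 2.0 module follows the same pattern.

    from pysqlite2 import dbapi2 as db   # SQLite
    # import psycopg2 as db              # PostgreSQL: only the import and
    #                                    # the connect() arguments change

    con = db.connect('mydata.db')        # psycopg2: db.connect('dbname=mydata')
    cur = con.cursor()
    cur.execute("CREATE TABLE log (id INTEGER, note TEXT)")
    cur.execute("INSERT INTO log VALUES (?, ?)", (1, 'hello'))
    # Note: psycopg2 uses %s instead of ? as the parameter placeholder.
    con.commit()                          # nothing is written until commit()
    con.close()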

Also, with e.g. Oracle, you can define the maximum sizes of the database
files so that the disks never fill up. If the allotted files are all
full, there won't be any crash. You will just get an error from the
last INSERT and stay in your transaction. If you catch this error
and alert the user, more disk space can be made available for the
database, the failing INSERT repeated, and then you just go on with the
rest. I think you could do the same in recent PostgreSQL versions
by using savepoints. (PostgreSQL requires a rollback after an SQL
error--savepoints enable you to roll back less than a full
transaction.)
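
Something along these lines, perhaps. This is only a sketch of the retry
idea with psycopg2 and a savepoint, using made-up names (rows_to_insert,
measurements) and assuming the out-of-space condition surfaces as an
OperationalError; adjust to your own schema and error handling.

    import psycopg2

    con = psycopg2.connect("dbname=mydata")
    cur = con.cursor()

    for row in rows_to_insert:           # whatever produces your data
        cur.execute("SAVEPOINT before_insert")
        while True:
            try:
                cur.execute("INSERT INTO measurements VALUES (%s, %s)", row)
                cur.execute("RELEASE SAVEPOINT before_insert")
                break
            except psycopg2.OperationalError, e:
                # Undo only the failed INSERT; the rest of the transaction
                # is still intact thanks to the savepoint.
                cur.execute("ROLLBACK TO SAVEPOINT before_insert")
                raw_input("INSERT failed (%s). Free some disk space and "
                          "press Enter to retry..." % e)

    con.commit()
    con.close()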


