PySQLLite Speed

Gerhard Haering gh at ghaering.de
Fri Dec 3 07:36:25 EST 2004


On Fri, Dec 03, 2004 at 06:06:11AM -0500, Kent Johnson wrote:
> If your data is (or can be) created by an iterator, you can use this recipe 
> to group the data into batches of whatever size you choose and write the 
> individual batches to the db.
> http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/303279
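
For the curious, that recipe boils down to a small helper of roughly this
shape (my own untested sketch; the name "batch" is mine, not the recipe's):

import itertools

def batch(iterable, size):
    # Yield successive lists of up to 'size' items until the iterator
    # is exhausted.
    it = iter(iterable)
    while True:
        chunk = list(itertools.islice(it, size))
        if not chunk:
            break
        yield chunk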

If your data is (or can be) created by an iterator, then you might find it
interesting that *pysqlite2*'s .executemany() not only works on lists, but also
on iterators.

Example:

import pysqlite2.dbapi2 as sqlite

# Open an in-memory database and create the target table
con = sqlite.connect(":memory:")
cu = con.cursor()
cu.execute("create table foo(x, y)")

# A generator function (which returns an iterator)
def gen():
    for i in xrange(5):
        yield (i, 'foo')

cu.executemany("insert into foo(x, y) values (?, ?)", gen())
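
(If you want the inserted rows to persist, finish with con.commit();
pysqlite2 implicitly opens a transaction for modifying statements.)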

So, in pysqlite2, .executemany() combined with an iterator gives the best
performance: .executemany() reuses the compiled SQL statement (so the
engine only needs to parse it once), and the iterator, used smartly,
reduces memory usage because you no longer need to construct large lists.
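
To make the memory point concrete, here is a rough comparison, continuing
the example above (the row count is made up for illustration):

# Eager: builds a million-tuple list in memory before inserting anything.
rows = [(i, 'foo') for i in xrange(1000000)]
cu.executemany("insert into foo(x, y) values (?, ?)", rows)

# Lazy: the generator hands .executemany() one row at a time, so only a
# single tuple needs to be alive at any moment.
def rows_lazy():
    for i in xrange(1000000):
        yield (i, 'foo')

cu.executemany("insert into foo(x, y) values (?, ?)", rows_lazy())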

I hope I don't create too much confusion here ;-)

-- Gerhard