[DB-SIG] Re: cx_Oracle cursor.executemanyprepared()

Paul Moore pf_moore at yahoo.co.uk
Fri Apr 30 15:04:51 EDT 2004


"Orr, Steve" <sorr at rightnow.com> writes:

> I need to develop a fast data insert routine for query result sets
> coming from the MySQLdb python module. I'm thinking about iterating
> through a MySQL result set (which is a list of tuples), say 1,000 or
> 10,000 rows at a time, followed by mass inserts of those rows into
> Oracle, committing every 1,000 or 10,000 rows.

You do know that for an Oracle database a single commit at the end is
better than committing repeatedly as you go through the loop? You need
enough rollback (undo) space to do it, but if you commit inside the
loop you risk ORA-01555 ("snapshot too old") errors and having to
restart a partially completed load...
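A minimal sketch of that shape, assuming hypothetical mysql_conn and
ora_conn connection objects and illustrative table and column names:
rows are pulled in batches with fetchmany(), inserted with
cx_Oracle's executemany(), and committed once at the end.

```python
def fetch_batches(cursor, size=1000):
    """Yield lists of up to `size` rows from an open cursor."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            break
        yield rows

def copy_rows(mysql_conn, ora_conn, batch_size=1000):
    # mysql_conn / ora_conn are assumed MySQLdb and cx_Oracle
    # connections; the query, table, and columns are illustrative.
    src = mysql_conn.cursor()
    dst = ora_conn.cursor()
    src.execute("SELECT id, name FROM src_table")
    insert_sql = "INSERT INTO dest_table (id, name) VALUES (:1, :2)"
    for batch in fetch_batches(src, batch_size):
        dst.executemany(insert_sql, [tuple(r) for r in batch])
    # Single commit at the end, per the advice above -- no partial
    # loads to restart, at the cost of needing enough rollback space.
    ora_conn.commit()
```

The batching keeps memory bounded on the MySQL side while still giving
executemany() large enough arrays to amortize round-trip costs.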

Other thoughts for the load - use direct-path inserts (with the
/*+ APPEND */ hint) and NOLOGGING tables to reduce redo generation. If
you can afford to stage the data in a flat file, external tables (or
SQL*Loader in earlier versions of Oracle) are useful.
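A hedged sketch of those two ideas issued through cx_Oracle; the table
and column names are illustrative, and note that /*+ APPEND */ only
takes effect on INSERT ... SELECT, so it assumes the rows have first
been staged (e.g. in an external table or a staging table):

```python
# Illustrative statements; dest_table / staging_table are hypothetical.
NOLOGGING_DDL = "ALTER TABLE dest_table NOLOGGING"
DIRECT_INSERT = (
    "INSERT /*+ APPEND */ INTO dest_table (id, name) "
    "SELECT id, name FROM staging_table"
)

def direct_path_load(ora_conn):
    cur = ora_conn.cursor()
    cur.execute(NOLOGGING_DDL)   # reduce redo for the direct-path load
    cur.execute(DIRECT_INSERT)   # loads above the table's high-water mark
    # After a direct-path insert, the session must commit (or roll back)
    # before it can read the table again.
    ora_conn.commit()
```

Remember that NOLOGGING loads are not recoverable from the redo logs,
so take a backup after the load if the data matters.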

Paul.
-- 
This signature intentionally left blank



