[DB-SIG] cx_Oracle cursor.executemanyprepared()

Orr, Steve sorr at rightnow.com
Fri Apr 30 13:28:21 EDT 2004


FYI... I'm avoiding the Oracle SQL*Loader product because I'm
contemplating large amounts of data and I don't want an intervening
file... just a straight copy of data out of one database engine and into
another. There are also BLOBs and CLOBs to consider. One advantage of
SQL*Loader is that you can use the direct path and nologging options,
which save on the redo and rollback segments. I hope to get around a
dependency on the SQL*Loader product by using temporary tables and
create table as select (CTAS) with nologging, or by using some advanced
techniques available in OCI. I also plan on creating the indexes AFTER
data population, using the nologging option.
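As a rough sketch (the table and index names below are made up, and the
cx_Oracle connection/cursor setup is elided), the statements I have in
mind look like:

```python
# Sketch of the CTAS + deferred-index plan described above.  These
# helpers just build the SQL text; in practice each statement would be
# executed through a cx_Oracle cursor once the staging data is loaded.
# All object names here are hypothetical.

def ctas_nologging(target, source):
    """Build a CREATE TABLE ... NOLOGGING AS SELECT statement."""
    return "CREATE TABLE %s NOLOGGING AS SELECT * FROM %s" % (target, source)

def create_index_nologging(index, table, columns):
    """Build a CREATE INDEX ... NOLOGGING statement (run AFTER the load)."""
    return "CREATE INDEX %s ON %s (%s) NOLOGGING" % (
        index, table, ", ".join(columns))

statements = [
    ctas_nologging("orders_final", "orders_stage"),
    create_index_nologging("orders_final_ix", "orders_final", ["order_id"]),
]
for stmt in statements:
    print(stmt)
```

The point of the NOLOGGING clauses is to skip redo generation for both
the table population and the index build.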


Steve Orr



-----Original Message-----
From: db-sig-bounces at python.org [mailto:db-sig-bounces at python.org] On
Behalf Of Bob Kimble
Sent: Friday, April 30, 2004 11:18 AM
To: db-sig at python.org
Subject: Re: [DB-SIG] cx_Oracle cursor.executemanyprepared()


On Friday 30 April 2004 12:51 pm, Orr, Steve wrote:
> I don't see a test case on the executemanyprepared() function. Can 
> someone give me some sample code to demonstrate how to use this?
>
> I need to develop a fast data insert routine for query result sets 
> coming from the MySQLdb python module. I'm thinking about iterating 
> through a MySQL result set (which is a list of tuples), say 1,000 or 
> 10,000 rows at a time, followed by mass inserts of those rows into 
> Oracle committing every 1,000 or 10,000 rows.
>
> Other suggestions?
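
Your batched-executemany plan might look roughly like this (the cursor
and connection objects are stand-ins for real MySQLdb and cx_Oracle
handles, and the INSERT statement would be whatever your target table
needs):

```python
# Sketch of the batch-copy loop: pull rows from the source cursor
# N at a time with fetchmany(), hand each batch to executemany(),
# and commit once per batch rather than once per row.

BATCH_SIZE = 1000

def fetch_batches(cursor, size=BATCH_SIZE):
    """Yield lists of rows from a DB-API cursor, `size` rows at a time."""
    while True:
        rows = cursor.fetchmany(size)
        if not rows:
            break
        yield rows

def copy_table(src_cursor, dst_cursor, dst_conn, insert_sql,
               size=BATCH_SIZE):
    """Copy every row from src_cursor to the target via executemany()."""
    copied = 0
    for batch in fetch_batches(src_cursor, size):
        dst_cursor.executemany(insert_sql, batch)
        dst_conn.commit()  # commit per batch, not per row
        copied += len(batch)
    return copied
```

The same loop works for any pair of DB-API modules, since fetchmany(),
executemany() and commit() are all part of the standard interface.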

Another approach that I use frequently is to write a "bulk upload" file
and then run the bulk upload command in the database. For PostgreSQL
it's "COPY <table> FROM <file> ....". I know that Sybase and MS SQL
Server have a similar command. Typically a tab separated format is
suitable, and it's rather easy to generate. I suspect that Oracle has a
similar capability. This technique gives you a copy of the data you're
uploading, and the bulk upload commands are usually faster than doing an
equivalent number of inserts. You can help matters by deactivating or
dropping the indexes before the copy and then reactivating/restoring
them afterward.

Regards,

.... Bob Kimble

>
>
> TIA !!!
> Steve Orr
>
>

_______________________________________________
DB-SIG maillist  -  DB-SIG at python.org
http://mail.python.org/mailman/listinfo/db-sig
