psycopg2: connect copy_from and copy_to

james.pye at gmail.com
Tue Feb 19 12:38:22 EST 2008


On Feb 19, 9:23 am, Thomas Guettler <h... at tbz-pariv.de> wrote:
> Yes, you can use "pg_dump production ... | psql testdb", but
> this can lead to dead locks, if you call this during
> a python script which is in the middle of a transaction. The python
> script locks a table, so that psql can't write to it.

Hrm? Dead locks where? Have you considered a cooperative user lock?
Are you just copying data? i.e., no DDL or indexes?
What is the script doing? Updating a table with unique indexes?
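
For what it's worth, by "cooperative user lock" I mean a PostgreSQL
advisory lock that both sides agree to take before touching the shared
table. A minimal sketch via psycopg2 (the key 42, the DSN, and the
structure of the work are made up for illustration):

    import psycopg2

    conn = psycopg2.connect("dbname=testdb")
    cur = conn.cursor()

    # Both the Python script and the psql/pg_restore side would take
    # the same advisory lock (key 42 here) before writing to the table.
    cur.execute("SELECT pg_advisory_lock(42)")
    try:
        pass  # ... work on the shared table ...
    finally:
        cur.execute("SELECT pg_advisory_unlock(42)")
    conn.commit()

The psql side would run "SELECT pg_advisory_lock(42);" at the top of
its script and unlock when it's done.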

> I don't think calling pg_dump and psql/pg_restore is faster.

Normally it will be. I've heard people cite cases of COPY loading about a
million records per second into "nicely" configured systems.
However, if psycopg2's COPY support is implemented in C, I'd imagine it
could achieve similar speeds. psql and psycopg2, both being libpq based,
are bound to have similar capabilities, assuming interpreted Python code
is kept out of the path that feeds the data to libpq.
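
If you want to stay inside Python, something along these lines should
keep the data on the COPY path (the connection strings and table name
are made up; copy_to/copy_from are the psycopg2 cursor methods wrapping
COPY TO STDOUT / COPY FROM STDIN):

    import io
    import psycopg2

    # Hypothetical source and destination databases and table name.
    src = psycopg2.connect("dbname=production")
    dst = psycopg2.connect("dbname=testdb")

    buf = io.StringIO()
    src.cursor().copy_to(buf, 'some_table')    # COPY some_table TO STDOUT
    buf.seek(0)
    dst.cursor().copy_from(buf, 'some_table')  # COPY some_table FROM STDIN
    dst.commit()

For a really large table you'd stream through a temporary file rather
than buffer everything in memory, but the shape is the same.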

> I know, but COPY is much faster.

yessir.


