[DB-SIG] New take on PostgreSQL bindings for Python

Andy Chambers andychambers2002 at yahoo.co.uk
Wed Sep 6 10:41:46 CEST 2006


> - bind parameters. The current pythonesque %(name)s specification is not 
> ideal. It requires various levels of pain in escaping the arguments passed 
> to a query. Most database backends accept bind parameters, albeit with 
> different syntaxes. For code portability, I'd rather parse the SQL query 
> and rewrite it to use the proper element for the bound parameter than do 
> the crazy escaping on the user input that is currently being done.

I agree about using the native parameter format where possible.  For what
it's worth, my interpretation of the DB-API is that the paramstyles define
a "class" of parameter styles rather than a hard and fast rule about what
the parameters must look like.  I believe the postgres $1, $2 format could
be used directly without any query rewriting; it is just one character away
from the numeric style (:1, :2).  Yes, you lose portability, but if the
driver can rewrite the query every time it executes one, then you can
rewrite your source once if you have to.
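
And if you did want to keep the DB-API numeric style at the surface, the
rewrite is nearly mechanical anyway.  A minimal sketch (the function name
and regex are mine, and it is deliberately naive):

    import re

    def numeric_to_dollar(query):
        # Turn DB-API "numeric" placeholders (:1, :2, ...) into
        # postgres's native $1, $2 form.  Naive on purpose: it does
        # not skip quoted string literals that happen to contain ':1'.
        return re.sub(r':(\d+)', r'$\1', query)

    print(numeric_to_dollar("select * from foo where id = :1"))
    # -> select * from foo where id = $1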

Incidentally, the web.py module has an interesting solution to this.  In
its db wrapper there is a method aparam(), which returns the parameter
style for the db currently in use.  Someone who wants to write portable
code could implement this themselves, and instead of writing

"select *
  from table
 where param = $1"

...they write this

"select *
  from table
 where param = %s" % (aparam(),)

Then if you change databases, you only need to redefine aparam().
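
A self-contained version of that idea might look like this (aparam() and
DB_PARAMSTYLE are names I'm sketching from the description of web.py's
wrapper, not taken from its actual source):

    DB_PARAMSTYLE = 'format'     # or 'qmark', 'numeric', ...

    def aparam(n=1):
        # Return the placeholder token for the backend in use.
        # Positional styles like numeric need the parameter index.
        if DB_PARAMSTYLE == 'format':
            return '%s'
        if DB_PARAMSTYLE == 'qmark':
            return '?'
        if DB_PARAMSTYLE == 'numeric':
            return ':%d' % n
        raise ValueError('unknown paramstyle: %s' % DB_PARAMSTYLE)

    query = "select * from table where param = %s" % (aparam(),)
    # cursor.execute(query, (42,)) then works unchanged after
    # switching DB_PARAMSTYLE for a different backend.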


> - parsed statements. On large loops this is a real gain. For lack of a 
> better specification, I currently use something like:
>  	prepcu = db.prepare("select * from foo where id = $1")
>  	prepcu.execute(2)
> The prepared cursor statement has the same fetchone(), fetchall() and 
> other properties of the regular cursors, except it only accepts parameters 
> to its execute() and executemany() calls.

Have you measured how much this actually improves performance?  I tried
writing a DB-API driver that made use of prepared queries, but found
that there was no improvement over psycopg.
 
As I understand it, what you win by not re-parsing the query, you lose
to a sub-optimal execution path for many types of query.  This is
because in postgres, the planner uses the actual values in the query to
decide which type of scan it should use on each of the tables involved.
By using PQprepare, you make the plan without all of that information
and then keep reusing the resulting sub-optimal plan.
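
The effect is easy to measure without any special driver support, by
going through SQL-level PREPARE/EXECUTE yourself.  A rough sketch,
assuming psycopg2 and a made-up table foo:

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Plan once, without seeing any parameter value...
    cur.execute("PREPARE by_id AS SELECT * FROM foo WHERE id = $1")

    # ...then reuse that generic plan on every execution.
    for i in range(1000):
        cur.execute("EXECUTE by_id(%s)", (i,))
        cur.fetchall()

    # Versus re-planning each time, where the planner sees the
    # literal value and can choose the best scan for it.
    for i in range(1000):
        cur.execute("SELECT * FROM foo WHERE id = %s", (i,))
        cur.fetchall()

Timing the two loops shows whether the parsing you save actually
outweighs being stuck with the generic plan.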


Regards,
Andy

