From twisted@itamarst.org Sun Jul 7 13:10:07 2002 From: twisted@itamarst.org (Itamar Shtull-Trauring) Date: Sun, 07 Jul 2002 15:10:07 +0300 Subject: [DB-SIG] Suggestions for DB-API improvements Message-ID: <3D282F9F.5010607@itamarst.org> 1. The standard should specify what style of parameter quoting should be supported by database adapters. The current situation, where each implements its own, means you can't use this feature if you want to support multiple databases, which is very annoying. 2. There should be a standard for dealing with BLOBs that uses file-like objects (the way JDBC uses streams). Too many adapters just assume that the BLOB is a string, which means you're screwed if you want to, say, use a 10MB BLOB, since now you have a 10MB string in memory. Comments? From jacobs@penguin.theopalgroup.com Sun Jul 7 14:40:50 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Sun, 7 Jul 2002 09:40:50 -0400 (EDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <3D282F9F.5010607@itamarst.org> Message-ID: On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: > 1. The standard should specify what style of parameter quoting should be > supported by database adapters. The current situation where each implements > their own means you can't use this feature if you want to support multiple > databases, which is very annoying. > > 2. There should be a standard for dealing with BLOBs that uses file-like > objects (the way JDBC uses streams.) Too many adapters just assume that the > BLOB is a string, which means you're screwed if you want to say use a 10MB > BLOB, since now you have a 10MB string in memory. > > Comments? YES! Here are a few more items from my wish list: 3. Better date/time specification that does not rely on Unix time-since-epoch. Also one that can return datetime with timezone, when available. This may be a good time to look into such a proposal since Python 2.3 is about to grow a new standard datetime object. 4.
A simple API which exposes the normalization rules for unquoted identifiers. 5. A simple API which exposes the string and identifier quoting/unquoting schemes. 6. A standard for per-cursor transaction support for those backends that support it. 7. More functional query result sets -- ones that have the memory footprint of tuples, and the flexibility of dictionaries. 8. More fine-grained type system -- something closer to what SQL92 provides, with a few extras from the big players. Plus a great deal more, though these are the more important ones for me. Comments? The question is how to get driver authors to support all these new requirements. Maybe we need multiple levels of compliance within the DB-API? What exists now could be "entry-level" compliance, so that we can define "intermediate-level" to include some of the easier features, and have "full-compliance" include some of the more complex things. Comments? -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From twisted@itamarst.org Sun Jul 7 14:47:59 2002 From: twisted@itamarst.org (Itamar Shtull-Trauring) Date: Sun, 07 Jul 2002 16:47:59 +0300 Subject: [DB-SIG] Suggestions for DB-API improvements References: Message-ID: <3D28468F.6040800@itamarst.org> Kevin Jacobs wrote: > The question is how to get driver authors to support all these new > requirements. Maybe we need multiple levels of compliance within the > DB-API? What exists now could be "entry-level" compliance, so that we can > define "intermediate-level" to include some of the easier features, and have Maybe it's time to make a DB-API *library*, so authors of specific database adapters don't need to do all this work from scratch.
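[Editor's note: the parameter-style problem in point 1 is an obvious candidate for such a shared library. A rough sketch of what a support-module routine might look like follows -- the function name, and the choice of qmark as the canonical input style, are purely illustrative, not part of DB-API. The routine consults the driver's declared module-level paramstyle attribute and rewrites the placeholders accordingly.]

```python
# Illustrative support-library routine: rewrite a query written with
# qmark ('?') placeholders into the placeholder style a DB-API module
# declares via its module-level `paramstyle` attribute.
# Caveat: the naive split on '?' would also match a '?' inside a string
# literal; a real implementation would need a small SQL tokenizer.

def rewrite_qmark(query, params, paramstyle):
    if paramstyle == "qmark":
        return query, params
    parts = query.split("?")
    if len(parts) - 1 != len(params):
        raise ValueError("placeholder/parameter count mismatch")
    if paramstyle == "format":
        # positional '%s' placeholders
        return "%s".join(parts), params
    if paramstyle == "numeric":
        # ':1', ':2', ... positional placeholders
        out = parts[0]
        for i, tail in enumerate(parts[1:], 1):
            out += ":%d%s" % (i, tail)
        return out, params
    if paramstyle == "pyformat":
        # '%(name)s' placeholders with a dict of parameters
        out, named = parts[0], {}
        for i, tail in enumerate(parts[1:], 1):
            name = "p%d" % i
            named[name] = params[i - 1]
            out += "%%(%s)s%s" % (name, tail)
        return out, named
    raise ValueError("unsupported paramstyle: %r" % paramstyle)
```

With something like this in a shared support module, portable application code could always write qmark queries and let the library adapt them to whichever driver is in use.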
From jacobs@penguin.theopalgroup.com Sun Jul 7 14:58:31 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Sun, 7 Jul 2002 09:58:31 -0400 (EDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <3D28468F.6040800@itamarst.org> Message-ID: On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: > Kevin Jacobs wrote: > > The question is how to get driver authors to support all these new > > requirements. Maybe we need multiple levels of compliance within the > > DB-API? What exists now could be "entry-level" compliance, so that we can > > define "intermediate-level" to include some of the easier features, and have > > Maybe it's time to make a DB-API *library*, so authors of support for specific > database adapters don't need to do all this work from scratch. I would like to see this happen, but I am not sure how realistic it is. For example, how would we integrate support for products like mxODBC, which is commercially licensed? Some advantages of a library are: 1) Unified exception hierarchy 2) Ability to build higher-level abstractions, like business-objects (think ADO), driver-agnostic connection pooling, managed connection configuration, etc. 3) More shared implementation 4) More active maintenance (more eyes == better code) -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From djc@object-craft.com.au Sun Jul 7 16:05:41 2002 From: djc@object-craft.com.au (Dave Cole) Date: 08 Jul 2002 01:05:41 +1000 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: References: Message-ID: >>>>> "Kevin" == Kevin Jacobs writes: Kevin> On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: >> Kevin Jacobs wrote: > The question is how to get driver authors to >> support all these new > requirements. Maybe we need multiple >> levels of compliance within the > DB-API? 
What exists now could be >> "entry-level" compliance, so that we can > define >> "intermediate-level" to include some of the easier features, and >> have >> >> Maybe it's time to make a DB-API *library*, so authors of support >> for specific database adapters don't need to do all this work from >> scratch. Kevin> I would like to see this happen, but I am not sure how Kevin> realistic it is. For example, how would we integrate support Kevin> for products like mxODBC, which is commercially licensed? Kevin> Some advantages of a library are: Kevin> 1) Unified exception hierarchy 2) Ability to build Kevin> higher-level abstractions, like business-objects (think ADO), Kevin> driver-agnostic connection pooling, managed connection Kevin> configuration, etc. 3) More shared implementation 4) More Kevin> active maintenance (more eyes == better code) If someone was willing to start building a higher level interface which could hide the details of the specific database underneath then I would be happy to try integrating my database stuff. - Dave -- http://www.object-craft.com.au From jacobs@penguin.theopalgroup.com Sun Jul 7 16:10:48 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Sun, 7 Jul 2002 11:10:48 -0400 (EDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: Message-ID: On 8 Jul 2002, Dave Cole wrote: > If someone was willing to start building a higher level interface > which could hide the details of the specific database underneath then > I would be happy to try integrating my database stuff. Great! Would you mind going through the list of suggestions from Itamar and me and tell us which ones seem most valuable/feasible? 
Thanks, -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From dustin+pydbsig@spy.net Sun Jul 7 20:13:19 2002 From: dustin+pydbsig@spy.net (Dustin Sallings) Date: Sun, 7 Jul 2002 12:13:19 -0700 (PDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: Message-ID: Around 09:40 on Jul 7, 2002, Kevin Jacobs said: # The question is how to get driver authors to support all these new # requirements. Maybe we need multiple levels of compliance within the # DB-API? What exists now could be "entry-level" compliance, so that we # can define "intermediate-level" to include some of the easier features, # and have "full-compliance" include some of the more complex things. A base set of classes from which all drivers must derive would do the trick nicely, I'd think. Python doesn't seem to have the concept of abstract classes, but a class definition where every method throws an exception would at least leave people knowing what to expect. I think there needs to be at least the following: A class representing a connection to a DB. A class representing a statement. A class representing a result set. A set of classes representing common exceptions. The current design doesn't seem to have a logical separation, i.e. a cursor object is the result set, but it is also the thing from which you run queries, and it will only have results if a query's been run. Separating these so that you only have a result set if you have issued a query that returns results (even if there were no matching results) makes the separation a lot clearer, not to mention it makes it easier to provide database caching layers and things like that. The reason for the separate statement class is so you can prepare a statement once and execute it in a loop.
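[Editor's note: the base-class idea described above -- every method raises, so driver authors can see exactly what must be overridden -- might be sketched as below. All class and method names are illustrative, not an agreed API.]

```python
# Sketch of driver base classes where every method raises, standing in
# for the abstract classes Python lacked at the time. A driver derives
# from these and overrides each method; anything left unimplemented
# fails loudly instead of silently misbehaving.

class DatabaseError(Exception):
    """Root of a shared driver exception hierarchy."""

class Connection:
    def prepare(self, sql):
        """Return a Statement for the given SQL."""
        raise NotImplementedError

    def commit(self):
        raise NotImplementedError

    def rollback(self):
        raise NotImplementedError

    def close(self):
        raise NotImplementedError

class Statement:
    def execute(self, params=()):
        """Run the prepared statement; return a ResultSet or None."""
        raise NotImplementedError

class ResultSet:
    def fetchone(self):
        raise NotImplementedError

    def fetchall(self):
        raise NotImplementedError
```

A driver that forgets to override a method gets an immediate NotImplementedError, which is the "leave people knowing what to expect" property described above.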
If it weren't completely obvious I'm stealing design ideas from JDBC, I'd suggest something similar to database and result set metadata classes as well, which are extremely helpful when you need them. -- SPY My girlfriend asked me which one I like better. pub 1024/3CAE01D5 1994/11/03 Dustin Sallings | Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE L_______________________ I hope the answer won't upset her. ____________ From fog@initd.org Mon Jul 8 00:45:06 2002 From: fog@initd.org (Federico Di Gregorio) Date: 08 Jul 2002 01:45:06 +0200 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: References: Message-ID: <1026085506.1064.11.camel@momo> On Sun, 2002-07-07 at 21:13, Dustin Sallings wrote: > Around 09:40 on Jul 7, 2002, Kevin Jacobs said: > > # The question is how to get driver authors to support all these new > # requirements. Maybe we need multiple levels of compliance within the > # DB-API? What exists now could be "entry-level" compliance, so that we > # can define "intermediate-level" to include some of the easier features, > # and have "full-compliance" include some of the more complex things. > > A base set of classes from which all drivers must derive would do > the trick nicely, I'd think. Python doesn't seem to have the concept of > abstract classes, but a class definition where every method throws an > exception would at least leave people knowing what to expect. > > I think there needs to be at least the following: > > A class representing a connection to a DB. > A class representing a statement. > A class representing a result set. > A set of classes representing common exceptions. mm.. Python or C? Some drivers are C-only, and it would not be very nice to force them to go Python.
-- Federico Di Gregorio Debian GNU/Linux Developer & Italian Press Contact fog@debian.org INIT.D Developer fog@initd.org "Yes, your honour, I have RSA encryption code tattood on my penis. Shall I show the jury?" From huy@tramada.com.au Mon Jul 8 04:24:30 2002 From: huy@tramada.com.au (Huy Do) Date: Mon, 8 Jul 2002 13:24:30 +1000 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <1026085506.1064.11.camel@momo> Message-ID: Hi All, It would be great if we had database independent calls to get catalog information eg. foreign keys etc. From dustin+pydbsig@spy.net Mon Jul 8 07:37:00 2002 From: dustin+pydbsig@spy.net (Dustin Sallings) Date: Sun, 7 Jul 2002 23:37:00 -0700 (PDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <1026085506.1064.11.camel@momo> Message-ID: Around 01:45 on Jul 8, 2002, Federico Di Gregorio said: # mm.. python or C? some drivers are C-only and would not be very nice to # force them go python. I admit that I've not done any python modules in C, however, my answer to that question would be python. If you can't extend python classes in C, you can at least create the glue work and write a thin python layer to implement the required python classes. It would probably be possible to create a driver developer kit including a skeleton framework in C which could be used to implement many drivers more easily. -- SPY My girlfriend asked me which one I like better. pub 1024/3CAE01D5 1994/11/03 Dustin Sallings | Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE L_______________________ I hope the answer won't upset her.
____________ From ianb@colorstudy.com Mon Jul 8 08:13:07 2002 From: ianb@colorstudy.com (Ian Bicking) Date: 08 Jul 2002 02:13:07 -0500 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: References: Message-ID: <1026112388.19936.5730.camel@lothlorien.colorstudy.net> On Sun, 2002-07-07 at 14:13, Dustin Sallings wrote: > A class representing a connection to a DB. > A class representing a statement. > A class representing a result set. Should all of these be implemented as a wrapper around current (DB API 2.0) modules? It doesn't seem terribly difficult, and it would mean good support for the new API would be quick to come. A lot of these could be phrased fairly completely in terms of the 2.0 API, with minor changes (via subclasses) for each actual DB. Of course, there's lots of DB wrappers out there, but if this one gets graced with the label DB API 3.0, then it will be more important. It might be interesting, as a conversation piece and the basis for more discussion on wrappers, to clone JDBC in Python. Of course, that would be followed by cleaning it up and simplifying, since this is Python after all and not Java. > A set of classes representing common exceptions. And of course, this is the one that couldn't be a wrapper, but would have to be built in. Well, perhaps with some cleverness, but it would be particularly annoying to try to wrap this. -- Ian Bicking Colorstudy Web Development ianb@colorstudy.com http://www.colorstudy.com 4869 N Talman Ave, Chicago, IL 60625 / (773) 275-7241 From mal@lemburg.com Mon Jul 8 09:11:37 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Mon, 08 Jul 2002 10:11:37 +0200 Subject: [DB-SIG] Suggestions for DB-API improvements References: Message-ID: <3D294939.70808@lemburg.com> Kevin Jacobs wrote: > On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: > >>Kevin Jacobs wrote: >> >>>The question is how to get driver authors to support all these new >>>requirements. 
Maybe we need multiple levels of compliance within the >>>DB-API? What exists now could be "entry-level" compliance, so that we can >>>define "intermediate-level" to include some of the easier features, and have No way :-) ODBC did this and the result is a complete mess. We don't want to go down that road. If you need more capabilities or features, these should be crafted on top of what we have and use a different name, e.g. Abstract DB API. >>Maybe it's time to make a DB-API *library*, so authors of support for specific >>database adapters don't need to do all this work from scratch. > > > I would like to see this happen, but I am not sure how realistic it is. For > example, how would we integrate support for products like mxODBC, which is > commercially licensed? Why should that be a problem? If you can come up with a DB API support library on top of the existing DB API standard, then plugging in the various underlying DB API compatible modules would not cause you any license issue. > Some advantages of a library are: > > 1) Unified exception hierarchy > 2) Ability to build higher-level abstractions, like business-objects > (think ADO), driver-agnostic connection pooling, managed connection > configuration, etc. > 3) More shared implementation > 4) More active maintenance (more eyes == better code) Looks like you're looking for a standard abstraction layer on top of the DB API specs. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mal@lemburg.com Mon Jul 8 09:22:13 2002 From: mal@lemburg.com (M.-A.
Lemburg) Date: Mon, 08 Jul 2002 10:22:13 +0200 Subject: [DB-SIG] Suggestions for DB-API improvements References: Message-ID: <3D294BB5.7070107@lemburg.com> Kevin Jacobs wrote: > On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: > >>1. The standard should specify what style of parameter quoting should be >>supported by database adapters. The current situation where each implements >>their own means you can't use this feature if you want to support multiple >>databases, which is very annoying. The DB API standard does define the set of possible parameter styles already. A standard SQL rewriting routine in a support module would be the way to go (it can query the paramstyle attribute and then do the proper processing). >>2. There should be a standard for dealing with BLOBs that uses file-like >>objects (the way JDBC uses streams.) Too many adapters just assume that the >>BLOB is a string, which means you're screwed if you want to say use a 10MB >>BLOB, since now you have a 10MB string in memory. That's a good idea. In ODBC several vendors have included extensions in their drivers to support reading data from streams or writing it to streams. Unfortunately, there's no standard there yet, so these extensions are only usable when linking directly to the ODBC driver (rather than through an ODBC manager). > YES! Here are a few more items from my wish list: > > 3. Better date/time specification that does not rely on unix > time-since-epoch. Also one that can return datetime with timezone, when > available. This may be a good time to look into such a proposal since > Python 2.3 is about to grow a new standard datetime object. The standard already addresses this, in fact, mxDateTime is the suggested type to use and many DB API modules do so. > 4. A simple API which exposes the unquoted normalization rules for > identifiers. > > 5. A simple API which exposes the string and identifier quoting/unquoting > schemes. 
This can easily be provided through a database support module, e.g. dbsupport.py. > 6. A standard for per-cursor transaction support for those backends that > support it. Hmm, how many backends do support this ? Per connection transactions are common and some backends also support sub-transactions via plain SQL. Both can be had without extending the DB API. > 7. More functional query result sets -- ones that have the memory footprint > of tuples, and the flexibility of dictionaries. This is allowed by the DB API. If you can provide a working C implementation, I'm sure more authors will start using it. > 8. More fine-grained type system -- something closer to what SQL92 provides, > with a few extras from the big players. This would break existing applications which rely on the existing data types. Note that extending these is easy: I've done that in mxODBC by simply returning the raw SQL type code (which are much more fine-grained) and providing support type code objects which map these to the ones expected by the DB API compatible application. > Plus a great deal more, though these are the more important ones for me. > > Comments? > > The question is how to get driver authors to support all these new > requirements. See my other mail: provide public domain support modules which DB API module authors can use. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... 
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From fog@initd.org Mon Jul 8 09:57:22 2002 From: fog@initd.org (Federico Di Gregorio) Date: 08 Jul 2002 10:57:22 +0200 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: References: Message-ID: <1026118642.966.10.camel@momo> On Mon, 2002-07-08 at 08:37, Dustin Sallings wrote: > Around 01:45 on Jul 8, 2002, Federico Di Gregorio said: > > # mm.. python or C? some drivers are C-only and would not be very nice to > # force them go python. > > I admit that I've not done any python modules in C, however, my > answer to that question would be python. > > If you can't extend python classes in C, you can at least create > the glue work and write a thin python layer to implement the required > python classes. > > It would probably be possible to create a driver developer kit > including a skeleton framework in C which could be used to implement many > drivers more easily. I've looked at the code of 3 different drivers and they are *so* different that I think it would not be that useful to have such a layer. And having a common base in Python, then going from Python to C to Python just to unify drivers, is overkill. You just want a middle layer over the drivers, imo. -- Federico Di Gregorio Debian GNU/Linux Developer & Italian Press Contact fog@debian.org INIT.D Developer fog@initd.org All programmers are optimists. -- Frederick P. Brooks, Jr.
From jacobs@penguin.theopalgroup.com Mon Jul 8 11:47:33 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Mon, 8 Jul 2002 06:47:33 -0400 (EDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <3D294939.70808@lemburg.com> Message-ID: On Mon, 8 Jul 2002, M.-A. Lemburg wrote: > >>Kevin Jacobs wrote: > >>>The question is how to get driver authors to support all these new > >>>requirements. Maybe we need multiple levels of compliance within the > >>>DB-API? What exists now could be "entry-level" compliance, so that we can > >>>define "intermediate-level" to include some of the easier features, and have > > No way :-) > > ODBC did this and the result is a complete mess. We don't want > to go down that road. > > If you need more capabilities or features, these should be crafted > on top of what we have and use a different name, e.g. Abstract DB API. That part of my proposal was somewhat rhetorical. It irks me to no end that some database vendors claim "full SQL{92,99} conformance" when they only have Entry level conformance. > >>Maybe it's time to make a DB-API *library*, so authors of support for specific > >>database adapters don't need to do all this work from scratch. > > > > I would like to see this happen, but I am not sure how realistic it is. For > > example, how would we integrate support for products like mxODBC, which is > > commercially licensed? > > Why should that be a problem ? The library I envision would have shared components, like a common exception hierarchy, common user-defined type binding system, etc.
If you decided not to support these features natively, it would be difficult to supply suitably patched versions of mxODBC to commercial users. > If you can come up with a DB API support library on top of the > existing DB API standard, then plugging in the various underlying > DB API compatible modules would not cause you any license issue. That is exactly the problem. I do appreciate the simplicity and parsimonious nature of DB-API 2.0, and how successful it has been at addressing the top 80% of features. However, I and many other enterprise users seem to be constantly running into cases where we need something in that remaining 20%. For my own uses, I already have a modular library that allows me to plug in various underlying DB-API components. It is in use in many large companies, and performs fairly well in our products. It is lacking in several significant ways, and is also legally encumbered, so I am looking to build a community-driven Abstract DB-API that addresses the most important of the missing features. > > Some advantages of a library are: > > > > 1) Unified exception hierarchy > > 2) Ability to build higher-level abstractions, like business-objects > > (think ADO), driver-agnostic connection pooling, managed connection > > configuration, etc. > > 3) More shared implementation > > 4) More active maintenance (more eyes == better code) > > Looks like you're looking for a standard abstraction layer > on top of the DB API specs. Number 1 is hard to do. I have a way of doing it, but it is ugly and will break if/when exceptions become new-style classes. Number 2 is clearly doable in an abstraction layer, though a significant amount of efficiency is lost for some drivers. Number 3 cannot be done as an abstraction layer. Shared implementation makes writing and maintaining DB-API drivers easier. This includes things like common exception hierarchies, type reflection systems, and custom type binding systems.
Number 4 is also not as effective when dealing with DB-API drivers through a pure abstraction layer. It is debatable if this is a good or bad thing. Anyhow, the practicalities of the situation may dictate that an abstraction layer is the only feasible way to go. I'm still open-minded about the whole process, though I do have the perspective of having already written a fairly comprehensive abstraction layer over DB-API. From this experience, I feel that an abstraction layer cannot quite do everything I need it to. The ideal solution would be to minimally augment DB-API to support the needs of abstraction layers, as I am discussing in other parts of this thread. I would be interested to hear what you think about those specific feature proposals. Thanks, -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From jacobs@penguin.theopalgroup.com Mon Jul 8 12:30:57 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Mon, 8 Jul 2002 07:30:57 -0400 (EDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <3D294BB5.7070107@lemburg.com> Message-ID: On Mon, 8 Jul 2002, M.-A. Lemburg wrote: > Kevin Jacobs wrote: > > On Sun, 7 Jul 2002, Itamar Shtull-Trauring wrote: > >>1. The standard should specify what style of parameter quoting should be > >>supported by database adapters. The current situation where each implements > >>their own means you can't use this feature if you want to support multiple > >>databases, which is very annoying. > > The DB API standard does define the set of possible parameter > styles already. A standard SQL rewriting routine in a support > module would be the way to go (it can query the paramstyle > attribute and then do the proper processing). I have an SQL rewriting routine, or more correctly a full SQL92 parser and unparser. The major problem is that it is extremely slow.
Part of this is an implementation issue -- I wrote that parser using John Aycock's SPARK parser -- it is very easy to use but also very slow. > >>2. There should be a standard for dealing with BLOBs that uses file-like > >>objects (the way JDBC uses streams.) Too many adapters just assume that the > >>BLOB is a string, which means you're screwed if you want to say use a 10MB > >>BLOB, since now you have a 10MB string in memory. > > That's a good idea. In ODBC several vendors have included > extensions in their drivers to support reading data from streams > or writing it to streams. Unfortunately, there's no standard there > yet, so these extensions are only usable when linking directly to > the ODBC driver (rather than through an ODBC manager). If you are linking through ODBC at all. ;) > > YES! Here are a few more items from my wish list: > > > > 3. Better date/time specification that does not rely on unix > > time-since-epoch. Also one that can return datetime with timezone, when > > available. This may be a good time to look into such a proposal since > > Python 2.3 is about to grow a new standard datetime object. > > The standard already addresses this, in fact, mxDateTime is the > suggested type to use and many DB API modules do so. Well, mxDateTime does not parse or store timezone offsets, and even if it did, the DB-API drivers I have used do not preserve these offsets. Thus, using the SQL type DATETIME WITH TIMEZONE results in round-trip data loss through Python DB-API drivers. > > 4. A simple API which exposes the unquoted normalization rules for > > identifiers. > > 5. A simple API which exposes the string and identifier quoting/unquoting > > schemes. > > This can easily be provided through a database support > module, e.g. dbsupport.py. Sure, but the connection and cursor objects should encapsulate this module. Why? Because a lot of code is (or should be) written so that it is abstracted from the backend as much as possible. > > 6.
A standard for per-cursor transaction support for those backends that > > support it. > > Hmm, how many backends do support this? Per-connection transactions > are common and some backends also support sub-transactions via plain > SQL. Both can be had without extending the DB API. I agree -- this point does not require extending the DB API, though there are advantages to doing so. I'll write up some notes on this so we can discuss it more. > > 7. More functional query result sets -- ones that have the memory footprint > > of tuples, and the flexibility of dictionaries. > > This is allowed by the DB API. If you can provide a working > C implementation, I'm sure more authors will start using it. I have a working C implementation for my own uses and am willing to adapt the API to be more generally acceptable. I call these result objects 'Rows' for obvious reasons. Here is what it currently does: 1) Rows act like tuples in almost every respect. This is for backward compatibility with code that expects 'standard' DB-API behavior. e.g., given r = row((1,2,3)):
r == (1,2,3)
r[0] == 1
r[1:3] == (2,3)
r + (4,) == (1,2,3,4)
2) Rows also provide case-insensitive access to fields by name using the getitem syntax:
r['a'] == r['A'] == r[0] == 1
r['b'] == r['B'] == r[1] == 2
r['c'] == r['C'] == r[2] == 3
3) Rows also provide a partial dict-like interface:
r.keys() == ['a','b','c']
r.values() == [1,2,3]
r.items() == [('a',1),('b',2),('c',3)]
4) Rows provide direct case-insensitive attribute access to fields:
r.fields.a == r.fields.A == 1
r.fields.b == r.fields.B == 2
r.fields.c == r.fields.C == 3
5) Row instances consume about as much memory as an object instance and a tuple. For instance, 200,000 tuple instances (that store 11 integers) require about 19MB of memory, while dictionaries require over 117MB of memory. My row objects require about 30MB of memory -- a pretty good trade-off for me. > > 8.
More fine-grained type system -- something closer to what SQL92 provides, > > with a few extras from the big players. > > This would break existing applications which rely on the existing > data types. Note that extending these is easy: I've done that in > mxODBC by simply returning the raw SQL type code (which are much > more fine-grained) and providing support type code objects which > map these to the ones expected by the DB API compatible application. We can do much of this without breaking existing types -- it would require that type objects not be mutually exclusive (although there is currently no explicit requirement that they are now). e.g.:
STRING == VARCHAR
STRING == CHAR
NUMBER == INT
DATETIME == TIMESTAMP
DATETIME == TIME
... etc ...
-Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From dietmar@schwertberger.de Mon Jul 8 19:43:06 2002 From: dietmar@schwertberger.de (Dietmar Schwertberger) Date: Mon, 8 Jul 2002 20:43:06 +0200 (BST) Subject: [DB-SIG] INSERT: getting the id Message-ID: Hi, I'm using DCOracle and cx_Oracle. After creating a new data set using cursor.execute("INSERT...") I'd like to know the id of the new set. Unfortunately execute doesn't return the id and neither adapter supports the lastrowid attribute. Any way to get the id? Regards, Dietmar From pk1u@yahoo.com Tue Jul 9 00:55:05 2002 From: pk1u@yahoo.com (Praveen Kumar) Date: Mon, 8 Jul 2002 16:55:05 -0700 (PDT) Subject: [DB-SIG] DCOracle2 w/ apache + mod_python under rh7.2 ? In-Reply-To: <20020708230802.14200.80386.Mailman@mail.python.org> Message-ID: <20020708235505.44584.qmail@web10406.mail.yahoo.com> Has anyone successfully used DCOracle2 w/ apache + mod_python under rh7.2? My system consists of rh7.2, apache1.3.26, mod_python-2.7.8, python2.1.3, Oracle9i.
I see the following when my program accesses the DCOracle2 module:

  File "/usr/lib/python2.1/site-packages/DCOracle2/__init__.py", line 37, in ?
    from DCOracle2 import *
  File "/usr/lib/python2.1/site-packages/DCOracle2/DCOracle2.py", line 104, in ?
    import dco2
ImportError: libclntsh.so.9.0: cannot open shared object file: No such file or directory

I've tried the following:

----- Setting LD_LIBRARY_PATH from the same shell where I start apache: export LD_LIBRARY_PATH=/home/pk/OraHome1/lib

----- Setting LD_LIBRARY_PATH to /home/pk/OraHome1/lib via Apache's PassEnv and SetEnv directives.

----- Placed libclntsh.so.9.0 in /usr/lib/python2.1/site-packages/DCOracle2 ( same path as dco2.so ).

----- Added /home/pk/OraHome1/lib to /etc/ld.so.conf , and executed /sbin/ldconfig as root.

----- Tried adding each of: os.putenv( 'LD_LIBRARY_PATH', '/home/pk/OraHome1/lib' ) os.environ[ 'LD_LIBRARY_PATH' ] = '/home/pk/OraHome1/lib' in DCOracle2/DCOracle2.py before the "import dco2" statement

----- /home/pk/OraHome1/lib is readable by all ; but tried the following anyway, to eliminate a permissions-issue as the cause: As root, copied /home/pk/OraHome1/lib to /oralib ; tried all of the above, using /oralib

----- "/home/pk/OraHome1/lib/libclntsh.so.9.0" exists, yet none of these work. The DCOracle2 module works fine when I use it from a standalone program. It would be helpful to know if anyone has gotten this config to work; also, any suggestions would be appreciated. pk __________________________________________________ Do You Yahoo!? Sign up for SBC Yahoo! Dial - First Month Free http://sbc.yahoo.com From jno@glasnet.ru Tue Jul 9 08:38:32 2002 From: jno@glasnet.ru (Eugene V. Dvurechenski) Date: Tue, 9 Jul 2002 11:38:32 +0400 Subject: [DB-SIG] DCOracle2 w/ apache + mod_python under rh7.2 ?
In-Reply-To: <20020708235505.44584.qmail@web10406.mail.yahoo.com> References: <20020708230802.14200.80386.Mailman@mail.python.org> <20020708235505.44584.qmail@web10406.mail.yahoo.com> Message-ID: <20020709073832.GV14854@glas.net> On Mon, Jul 08, 2002 at 04:55:05PM -0700, Praveen Kumar wrote: > import dco2 > > ImportError: libclntsh.so.9.0: cannot open shared > object file: No such file or directory 1) make sure you have set ORACLE_HOME env var. 2) make sure you have set LD_LIBRARY_PATH (to $ORACLE_HOME/lib) env var. 3) make sure that both of them are set _before_ the script execution. the last point is essential - os.environ doesn't help much. -- SY, jno (PRIVATE PERSON) [ http://www.glasnet.ru/~jno ] a TeleRoss techie [ http://www.aviation.ru/ ] If God meant man to fly, He'd have given him more money. From djc@object-craft.com.au Tue Jul 9 11:26:57 2002 From: djc@object-craft.com.au (Dave Cole) Date: 09 Jul 2002 20:26:57 +1000 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: References: Message-ID: >>>>> "Kevin" == Kevin Jacobs writes: Kevin> On 8 Jul 2002, Dave Cole wrote: >> If someone was willing to start building a higher level interface >> which could hide the details of the specific database underneath >> then I would be happy to try integrating my database stuff. Kevin> Great! Would you mind going through the list of suggestions Kevin> from Itamar and me and tell us which ones seem most Kevin> valuable/feasible? I am not really a good person to do that. My database knowledge is not really very good. - Dave -- http://www.object-craft.com.au From coventry@one.net Tue Jul 9 23:00:20 2002 From: coventry@one.net (Jon Franz) Date: Tue, 09 Jul 2002 18:00:20 -0400 Subject: [DB-SIG] Already begun work on something similar...
Message-ID: <3D2B5CF4.1070609@one.net> > > >>>>>>>>> "Kevin" == Kevin Jacobs writes: >>>>> >>>> If someone was willing to start building a higher level interface >>>> which could hide the details of the specific database underneath >>>> then I would be happy to try integrating my database stuff. >> Kevin> Great! Would you mind going through the list of suggestions Kevin> from Itamar and me and tell us which ones seem most Kevin> valuable/feasible? Dave>I am not really a good person to do that. My database knowledge is Dave>not really very good. I've begun work on such a wrapper myself, but was only just past the planning stages... It's very ADO-alike, but stripped down to the most-used pieces. I'm taking what I like from ADO, perl-DBI, Delphi DB Objects, and trying to make something exceedingly easy to use for the average joe. The _main_ thing I see missing from the DB API, that makes the wrapper I'm working on quite a pain, is a fetch-column-by-name capability, instead of indexing into the returned tuple via a number. Of course, I may be looking past something in the documentation that allows this - so enlighten me. Even if it's just a quick workaround, it'd be helpful. I've noticed that the old, non DBAPI compliant modules for postgreSQL have this functionality, but I did not want to write my wrapper against a non DBAPI module. PS: sorry for jumping into the middle of this conversation. From twisted@itamarst.org Tue Jul 9 22:56:02 2002 From: twisted@itamarst.org (Itamar Shtull-Trauring) Date: Tue, 09 Jul 2002 17:56:02 -0400 Subject: [DB-SIG] Re: Already begun work on something similar... References: <3D2B5CF4.1070609@one.net> Message-ID: <3D2B5BF2.40005@itamarst.org> Jon Franz wrote: > I've begun work on such a wrapper myself, but was only just past the > planning stages... It's very ADO-alike, but stripped down to the > most-used pieces.
I'm taking what I like from ADO, perl-DBI, Delphi > DB Objects, and trying to make something exceedingly easy to use for > the average joe. I actually wasn't thinking of a wrapper. I was thinking of a set of classes and functions that would be used to *implement* DB-API compliant adapters, so they don't need to duplicate work, and can have a more consistent feature set. From haering_postgresql@gmx.de Wed Jul 10 00:41:34 2002 From: haering_postgresql@gmx.de (Gerhard =?iso-8859-15?Q?H=E4ring?=) Date: Wed, 10 Jul 2002 01:41:34 +0200 Subject: [DB-SIG] Already begun work on something similar... In-Reply-To: <3D2B5CF4.1070609@one.net> References: <3D2B5CF4.1070609@one.net> Message-ID: <20020709234133.GA1221@lilith.my-fqdn.de> * Jon Franz [2002-07-09 18:00 -0400]: > The _main_ thing I see missing from the DB API, that makes the wrapper > I'm working on quite a pain, is a fetch-column-by-name capability, > [...] I may be looking past something in the documentation that allows > this - so enlighten me. cursor.description > I've noticed that the old, non DBAPI compliant modules for postgreSQL > have this functionality, pyPgSQL has it today. So do PySQLite and MySQLdb. And psycopg and others maybe, too. Gerhard -- mail: gerhard bigfoot de registered Linux user #64239 web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930 public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930 reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b'))) From djc@object-craft.com.au Wed Jul 10 00:58:26 2002 From: djc@object-craft.com.au (Dave Cole) Date: 10 Jul 2002 09:58:26 +1000 Subject: [DB-SIG] Re: Already begun work on something similar... In-Reply-To: <3D2B5BF2.40005@itamarst.org> References: <3D2B5CF4.1070609@one.net> <3D2B5BF2.40005@itamarst.org> Message-ID: >>>>> "Itamar" == Itamar Shtull-Trauring writes: Itamar> Jon Franz wrote: >> I've begun work on such a wrapper myself, but was only just past >> the planning stages...
It's very ADO-alike, but stripped down to >> the most-used pieces. I'm taking what I like from ADO, perl-DBI, >> Delphi DB Objects, and trying to make something exceedingly easy to >> use for the average joe. Itamar> I actually wasn't thinking of a wrapper. I was thinking of a Itamar> set of classes and functions that would be used to *implement* Itamar> DB-API compliant adapters, so they don't need to duplicate Itamar> work, and can have a more consistent feature set. My preference would be something along these lines. Implement the policy and interface in Python and then define a mechanism for loading and integrating lower level driver modules. - Dave -- http://www.object-craft.com.au From bzimmer@ziclix.com Wed Jul 10 04:19:36 2002 From: bzimmer@ziclix.com (brian zimmer) Date: Tue, 9 Jul 2002 22:19:36 -0500 Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <1026118642.966.10.camel@momo> Message-ID: <002801c227c0$a1d54d30$6401a8c0@mountain> Please keep in mind that C would also make Jython development significantly more difficult. thanks, brian > -----Original Message----- > From: db-sig-admin@python.org > [mailto:db-sig-admin@python.org] On Behalf Of Federico Di Gregorio > Sent: Monday, July 08, 2002 3:57 AM > To: Python DB-SIG Mailing List > Subject: Re: [DB-SIG] Suggestions for DB-API improvements > > > On Mon, 2002-07-08 at 08:37, Dustin Sallings wrote: > > Around 01:45 on Jul 8, 2002, Federico Di Gregorio said: > > > > # mm.. python or C? some drivers are C-only and would not > be very nice > > to # force them to go python. > > > > I admit that I've not done any python modules in C, however, my > > answer to that question would be python. > > > > If you can't extend python classes in C, you can at > least create the > > glue work and write a thin python layer to implement the required > > python classes.
> > > > It would probably be possible to create a driver developer kit > > including a skeleton framework in C which could be used to > implement > > many drivers more easily. > > i've looked at the code of 3 different drivers and they are *so* > different that i think it would not be that useful to > have such a layer. and having a common base in python then going > from python to C to python just to unify drivers is overkill. > you just want a middle layer over the drivers, imo. > > -- > Federico Di Gregorio > Debian GNU/Linux Developer & Italian Press Contact > fog@debian.org > INIT.D Developer > fog@initd.org > All programmers are optimists. -- Frederick P. > Brooks, Jr. > From dustin+pydbsig@spy.net Wed Jul 10 04:58:27 2002 From: dustin+pydbsig@spy.net (Dustin Sallings) Date: Tue, 9 Jul 2002 20:58:27 -0700 (PDT) Subject: [DB-SIG] Suggestions for DB-API improvements In-Reply-To: <002801c227c0$a1d54d30$6401a8c0@mountain> Message-ID: Around 22:19 on Jul 9, 2002, brian zimmer said: # Please keep in mind that C would also make Jython development # significantly more difficult. The point of the C code would be to assist people who were writing drivers in C. There's no reason more of these drivers couldn't be written in pure python (certainly makes installation easier). -- SPY My girlfriend asked me which one I like better. pub 1024/3CAE01D5 1994/11/03 Dustin Sallings | Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE L_______________________ I hope the answer won't upset her. ____________ From tjenkins@devis.com Wed Jul 10 15:51:38 2002 From: tjenkins@devis.com (Tom Jenkins) Date: 10 Jul 2002 10:51:38 -0400 Subject: [DB-SIG] Already begun work on something similar... In-Reply-To: <3D2B5CF4.1070609@one.net> References: <3D2B5CF4.1070609@one.net> Message-ID: <1026312699.12967.15.camel@asimov> On Tue, 2002-07-09 at 18:00, Jon Franz wrote: > I've begun work on such a wrapper myself, but was only just past the > planning stages...
It's very ADO-alike, but stripped down to the most-used > pieces. I'm taking what I like from ADO, perl-DBI, Delphi DB Objects, and > trying to make something exceedingly easy to use for the average joe. > > The _main_ thing I see missing from the DB API, that makes the wrapper > I'm working on quite a pain, is a fetch-column-by-name capability, instead > of indexing into the returned tuple via a number. Of course, I may be > looking past something in the documentation that allows this - so > enlighten me. Even if it's just a quick workaround, it'd be helpful. > dtuples does this. http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/81252 also a lot of drivers implement this functionality themselves. psycopg through another set of methods; pypgsql through a flag in how to return fetch results; mysqldb does too (but i don't know how as i don't use mysql) -- Tom Jenkins Development InfoStructure http://www.devis.com From adam@battleaxe.net Thu Jul 11 20:28:21 2002 From: adam@battleaxe.net (Adam Israel) Date: Thu, 11 Jul 2002 14:28:21 -0500 Subject: [DB-SIG] Integer problem with mxODBC & unixODBC Message-ID: <003d01c22911$1e2c2e70$4b5ea941@bilbo> The system: Python 2.2.1 (#1, May 3 2002, 23:19:03) Debian 3.0 (sid)/Linux 2.4.18-686-smp egenix-mx-commercial-2.0.4 Unixodbc 2.1.1-8 FreeTDS 0.53-7 I'm attempting to connect to a MS SQL Server 2000 server running on Windows 2000, and I'm seeing some problems returning Integer values. I've googled, and only found one reference to this problem, posted to the list a few months ago, but no resolution was posted. I've tested this with isql, from python, and from c, and I'm reasonably sure the problem lies with mxODBC.
Python code:
---- snip ----
#!/usr/bin/python2.2

import mx.ODBC.unixODBC

db = mx.ODBC.unixODBC.DriverConnect('DSN=DSNNAME;UID=USERID;PWD=PASSWD')
c = db.cursor()
sql = "select count(*) from Image"
c.execute(sql)
print c.fetchall()
c.close()
---- snip ----

Output:

$ python -d testConnection.py
query = select count(*) from Image
[(9.6917293728400134e-270,)]

The actual value returned should be: 2925279

Now, if I change this:

- sql = "select count(*) from Image"
+ sql = "select cast(count(*) as varchar) from Image"

Output:

$ python -d testConnection.py
query = select cast(count(*) as varchar) from Image
[('2925279',)]

It seems like the problem is definitely with the handling of Integer fields. The workaround is to cast all integer columns to varchar, but that's not very clean or efficient, IMO. To isolate where the problem lies, I wrote a bit of C code against the ODBC API, to execute the original query. It returned the correct value. I can send the file (~126 lines) if anyone is interested. So I know that unixODBC/FreeTDS are working correctly, and that leaves mxODBC.unixODBC. Has anyone else experienced this problem and if so, how did you fix it? I'm sort of in a bind with this problem, and there are not a lot of linux->ms sql server solutions around. My choices at this point are to find a fix for mxODBC, or to write my own python module to wrap ODBC. Thanks for any info, Adam Israel adam@battleaxe.net From mal@lemburg.com Thu Jul 11 21:37:04 2002 From: mal@lemburg.com (M.-A.
Lemburg) Date: Thu, 11 Jul 2002 22:37:04 +0200 Subject: [DB-SIG] Integer problem with mxODBC & unixODBC References: <003d01c22911$1e2c2e70$4b5ea941@bilbo> Message-ID: <3D2DEC70.7060708@lemburg.com> Adam Israel wrote: > The system: > Python 2.2.1 (#1, May 3 2002, 23:19:03) > Debian 3.0 (sid)/Linux 2.4.18-686-smp > egenix-mx-commercial-2.0.4 > Unixodbc 2.1.1-8 > FreeTDS 0.53-7 > > I'm attempting to connect to a MS SQL Server 2000 server running on > Windows 2000, and I'm seeing some problems returning Integer values. > I've googled, and only found one reference to this problem, posted to > the list a few months ago, but no resolution was posted. > > I've tested this with isql, from python, and from c, and I'm reasonably > sure the problem lies with mxODBC. Interesting that you are getting any results back from the FreeTDS ODBC driver... I've looked into creating a subpackage for it in mxODBC but failed due to the fact that the FreeTDS ODBC has so many dummy implementations of important ODBC APIs. Note that mxODBC relies on the type information provided by the ODBC driver. Tools like isql simply ask for the string representation, which is why you are not seeing the same output.

> Python code:
> ---- snip ----
> #!/usr/bin/python2.2
>
> import mx.ODBC.unixODBC
>
> db = mx.ODBC.unixODBC.DriverConnect('DSN=DSNNAME;UID=USERID;PWD=PASSWD')
> c = db.cursor()
> sql = "select count(*) from Image"
> c.execute(sql)
> print c.fetchall()
> c.close()
> ---- snip ----
>
> Output:
> $ python -d testConnection.py
> query = select count(*) from Image
> [(9.6917293728400134e-270,)]

This is a float... and that looks wrong, since count(*) should normally return an integer.
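This kind of garbage value is what you get when an integer's raw bytes are reinterpreted as an IEEE-754 double. A pure-Python sketch of that failure mode (the exact value depends on the driver's buffer layout, so this demonstrates the effect rather than reproducing the precise bytes involved here):

```python
import struct

count = 2925279  # the value SQL Server actually computed

# Pack the integer into an 8-byte buffer, then misread those bytes
# as a double -- roughly what happens when a driver binds an INTEGER
# column to a double C type on the basis of wrong type information.
buf = struct.pack("<q", count)
misread = struct.unpack("<d", buf)[0]

print(misread)  # a meaningless denormal float, nowhere near 2925279
```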
> The actual value returned should be: 2925279 > > > Now, if I change this: > - sql = "select count(*) from Image" > + sql = "select cast(count(*) as varchar) from Image" > > Output: > $ python -d testConnection.py > query = select cast(count(*) as varchar) from Image > [('2925279',)] > > It seems like the problem is definitely with the handling of Integer > fields. The workaround is to cast all integer columns to varchar, but > that's not very clean or efficient, IMO. > > To isolate where the problem lies, I wrote a bit of C code against the > ODBC API, to execute the original query. It returned the correct value. > I can send the file (~126 lines) if anyone is interested. > > So I know that unixODBC/FreeTDS are working correctly, and that leaves > mxODBC.unixODBC. Has anyone else experienced this problem and if so, > how did you fix it? I'm sort of in a bind with this problem, and there > are not a lot of linux->ms sql server solutions around. My choices at > this point are to find a fix for mxODBC, or to write my own python > module to wrap ODBC. No need for that. You can use the converter function feature in mxODBC to force fetching values using different types. To fully debug the situation, please build a debug version of mxODBC (see the docs) and create an mxODBC.log file with the above script. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From adam@battleaxe.net Thu Jul 11 21:39:11 2002 From: adam@battleaxe.net (Adam Israel) Date: Thu, 11 Jul 2002 15:39:11 -0500 Subject: [DB-SIG] Integer problem with mxODBC & unixODBC In-Reply-To: <003d01c22911$1e2c2e70$4b5ea941@bilbo> Message-ID: <004e01c2291b$034fc350$4b5ea941@bilbo> Oh, I forgot to add the output of mxODBC.log. 
--- New Log Session --- Thu Jul 11 13:29:25 2002
Importing the mx.DateTime C API...
mx.DateTime package found
API object mxDateTimeAPI found
API object loaded and initialized.
initmxODBC: Initializing ODBC API environment
initmxODBC: henv=0x8172f40
mxODBC_New_UseDriverConnect: dsn='DSN=xxx;UID=xxx;PWD=xxx', clearAC=1
mxODBC_InitConnection(0x8174028): bindmethod=2, have_SQLDescribeParam=0, getdata_extensions=0x81712f8, txn_capable=4856
mxODBC_New_UseDriverConnect: created new connection at 0x8174028
mxODBCursor_New: created new cursor '' at 0x8178c88, hstmt=0x817a038
mxODBCursor_FreeVars: called for cursor at 0x8178c88
mxODBCursor_FreeVars: nothing to do
mxODBCursor_Execute: using direct execute for statement 'select count(*) from Image'
mxODBCursor_Execute: number of params in statement: 0
mxODBCursor_Execute: executing command without parameters
mxODBCursor_FreeVars: called for cursor at 0x8178c88
mxODBCursor_FreeVars: nothing to do
mxODBCursor_PrepareOutput: colcount=1 rowcount=0
mxODBCursor_PrepareOutput: column 0: name='' type=3 precision=4 scale=0 nullable=1
mxODBCursor_AllocateOutputVars: preparing binding of column 0 - sqltype=3, ctype=0, free_data=0, sqllen=4, use_getdata=0
mxODBCursor_AllocateOutputVars: binding column 0 - sqltype=3, ctype=8, free=1, sqllen=4, data_len=8, data_buflen=8, getdata=0
mxODBCursor_AllocateOutputVars: true len=8
mxODBCursor_FetchAll: fetching 1 column(s).
mxODBCursor_FetchAll: row 0...
mxODBCursor_FetchAll: row 1...
mxODBCursor_FetchAll: done -- read 1 row(s).
mxODBCursor_Close: called for cursor at 0x8178c88, hstmt=0x817a038
mxODBCursor_Close: stmt cancelled
mxODBCursor_Close: stmt freed
mxODBCursor_Free: called for cursor at 0x8178c88
mxODBCursor_FreeVars: called for cursor at 0x8178c88
mxODBCursor_FreeAllocatedVars: called for cursor at 0x8178c88
mxODBCursor_FreeAllocatedVars: cursor or connection already closed
mxODBCursor_FreeAllocatedVars: freeing output variable for column 0
mxODBCursor_FreeVars: freeing output variable array
mxODBCursor_FreeParameters: called for cursor at 0x8178c88
mxODBCursor_FreeParameters: cursor or connection already closed
mxODBCursor_Close: called for cursor at 0x8178c88, hstmt=0x817a038
mxODBCursor_Close: cursor is already closed
mxODBC_Free: called for connection at 0x8174028
mxODBC_Close: called for connection at 0x8174028, closed=0
mxODBC_Close: disconnect
mxODBC_Close: free connection

Thanks, Adam From mal@lemburg.com Thu Jul 11 22:03:30 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 11 Jul 2002 23:03:30 +0200 Subject: [DB-SIG] Integer problem with mxODBC & unixODBC References: <003d01c22911$1e2c2e70$4b5ea941@bilbo> <3D2DEC70.7060708@lemburg.com> Message-ID: <3D2DF2A2.8060303@lemburg.com> M.-A. Lemburg wrote: > Adam Israel wrote: > >> The system: >> Python 2.2.1 (#1, May 3 2002, 23:19:03) >> Debian 3.0 (sid)/Linux 2.4.18-686-smp >> egenix-mx-commercial-2.0.4 >> Unixodbc 2.1.1-8 >> FreeTDS 0.53-7 >> >> I'm attempting to connect to a MS SQL Server 2000 server running on >> Windows 2000, and I'm seeing some problems returning Integer values. >> I've googled, and only found one reference to this problem, posted to >> the list a few months ago, but no resolution was posted. >> >> I've tested this with isql, from python, and from c, and I'm reasonably >> sure the problem lies with mxODBC. > > Interesting that you are getting any results back from > the FreeTDS ODBC driver...
I've looked into creating a subpackage > for it in mxODBC but failed due to the fact that the FreeTDS > ODBC has so many dummy implementations of important ODBC > APIs. > > Note that mxODBC relies on the type information provided > by the ODBC driver. Tools like isql simply ask for the > string representation, which is why you are not seeing the > same output. BTW, I think that in this particular case it's the FreeTDS ODBC driver which is not working right: the driver seems not to support all ODBC SQL_C_* type codes. Most interesting is that it doesn't support the two signed types used by mxODBC to fetch integer data: SQL_C_SSHORT and SQL_C_SLONG. >> It seems like the problem is definitely with the handling of Integer >> fields. The workaround is to cast all integer columns to varchar, but >> that's not very clean or efficient, IMO. >> >> To isolate where the problem lies, I wrote a bit of C code against the >> ODBC API, to execute the original query. It returned the correct value. >> I can send the file (~126 lines) if anyone is interested. >> >> So I know that unixODBC/FreeTDS are working correctly, and that leaves >> mxODBC.unixODBC. Has anyone else experienced this problem and if so, >> how did you fix it? I'm sort of in a bind with this problem, and there >> are not a lot of linux->ms sql server solutions around. My choices at >> this point are to find a fix for mxODBC, or to write my own python >> module to wrap ODBC. > > No need for that. You can use the converter function feature > in mxODBC to force fetching values using different types.

def converter(position, sqltype, sqllen):
    # modify sqltype and sqllen as appropriate
    return mx.ODBC.unixODBC.SQL.VARCHAR, 25

# Now tell the cursor to use this converter:
cursor.setconverter(converter)

> To fully debug the situation, please build a debug version of > mxODBC (see the docs) and create an mxODBC.log file with the > above script.
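For readers unfamiliar with the converter hook, here is a rough pure-Python model of what it does. Only the converter signature and the setconverter() name follow the snippet above; ToyCursor is a stand-in invented for illustration, not mxODBC code:

```python
VARCHAR, INTEGER = "VARCHAR", "INTEGER"

class ToyCursor:
    """Stand-in cursor that applies a per-column converter at fetch time."""

    def __init__(self, description, rows):
        self.description = description  # [(name, sqltype, sqllen), ...]
        self._rows = rows
        self._converter = None

    def setconverter(self, converter):
        # converter(position, sqltype, sqllen) -> (new_sqltype, new_sqllen)
        self._converter = converter

    def fetchall(self):
        types = []
        for pos, (name, sqltype, sqllen) in enumerate(self.description):
            if self._converter is not None:
                sqltype, sqllen = self._converter(pos, sqltype, sqllen)
            types.append(sqltype)
        # Simulate binding: columns converted to VARCHAR come back as strings.
        return [tuple(str(v) if t == VARCHAR else v
                      for v, t in zip(row, types))
                for row in self._rows]

def force_varchar(position, sqltype, sqllen):
    # Work around a driver that mis-describes integer columns by
    # fetching everything as a string instead.
    return VARCHAR, 25

cur = ToyCursor([("count", INTEGER, 4)], [(2925279,)])
cur.setconverter(force_varchar)
print(cur.fetchall())  # the count arrives as a string
```

The point is that the conversion is decided per column before binding, so application code never sees the driver's bogus type.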
-- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mal@lemburg.com Sun Jul 14 17:37:48 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Sun, 14 Jul 2002 18:37:48 +0200 Subject: [DB-SIG] Integer problem with mxODBC & unixODBC References: <004e01c2291b$034fc350$4b5ea941@bilbo> Message-ID: <3D31A8DC.8030104@lemburg.com> [Please don't log data to the list ... I don't think people are too interested in these details; you can post this data to me directly] Adam Israel wrote: > mxODBCursor_PrepareOutput: column 0: name='' type=3 precision=4 scale=0 > nullable=1 > mxODBCursor_AllocateOutputVars: preparing binding of column 0 - > sqltype=3, ctype=0, free_data=0, sqllen=4, use_getdata=0 > mxODBCursor_AllocateOutputVars: binding column 0 - sqltype=3, ctype=8, > free=1, sqllen=4, data_len=8, data_buflen=8, getdata=0 The FreeTDS driver tells mxODBC that the value is a decimal and mxODBC fetches it as double. Could be that the layout used for doubles in FreeTDS is different (e.g. the protocol uses a different binary representation for doubles than the Unix system). Could you try something like 'SELECT 1.234' and 'SELECT 5678' on the connection ? If this yields strange results too, then something is wrong in the FreeTDS ODBC driver with returning float and/or integer data. mxODBC doesn't have any problem with fetching floats or integers, so the bug is clearly within the FreeTDS ODBC driver. Thanks, -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... 
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From pk1u@yahoo.com Mon Jul 15 00:58:17 2002 From: pk1u@yahoo.com (Praveen Kumar) Date: Sun, 14 Jul 2002 16:58:17 -0700 (PDT) Subject: [DB-SIG] Re: DCOracle2 w/ apache + mod_python under rh7.2 ? In-Reply-To: <20020709073832.GV14854@glas.net> Message-ID: <20020714235817.90778.qmail@web10411.mail.yahoo.com> > 1) make sure you have set ORACLE_HOME env var. > 2) make sure you have set LD_LIBRARY_PATH (to > $ORACLE_HOME/lib) env var. > 3) make sure that both of them are set _before_ the > script execution. Didn't work. I got around it by using mod_python as a DSO, and using apache's LoadFile directive; in httpd.conf: LoadFile /home/pk/OraHome1/lib/libclntsh.so.9.0 Not ideal, but it works. pk --- "Eugene V. Dvurechenski" wrote: > On Mon, Jul 08, 2002 at 04:55:05PM -0700, Praveen > Kumar wrote: > > import dco2 > > > > ImportError: libclntsh.so.9.0: cannot open shared > > object file: No such file or directory > > 1) make sure you have set ORACLE_HOME env var. > 2) make sure you have set LD_LIBRARY_PATH (to > $ORACLE_HOME/lib) env var. > 3) make sure that both of them are set _before_ the > script execution. > > the last point is essential - os.environ doesn't > help much. Orig msg: > Has anyone successfully used DCOracle2 w/ apache + > mod_python under rh7.2 ? > > My system consists of rh7.2, apache1.3.26, > mod_python-2.7.8, python2.1.3, Oracle9i . > > I see the following when my program accesses the > DCOracle2 module: > > File > "/usr/lib/python2.1/site-packages/DCOracle2/__init__.py", > line 37, in ? > from DCOracle2 import * > > File > "/usr/lib/python2.1/site-packages/DCOracle2/DCOracle2.py", > line 104, in ?
> import dco2 > > ImportError: libclntsh.so.9.0: cannot open shared > object file: No such file or directory > > > > I've tried the following: > > ----- > > Setting LD_LIBRARY_PATH from the same shell where I > start apache: > > export LD_LIBRARY_PATH=/home/pk/OraHome1/lib > > ----- > > Setting LD_LIBRARY_PATH to /home/pk/OraHome1/lib via > Apache's PassEnv and SetEnv directives. > > ----- > > Placed libclntsh.so.9.0 in > /usr/lib/python2.1/site-packages/DCOracle2 ( same > path > as dco2.so ). > > ----- > > Added /home/pk/OraHome1/lib to /etc/ld.so.conf , and > executed /sbin/ldconfig as root. > > ----- > > Tried adding each of: > > os.putenv( 'LD_LIBRARY_PATH', > '/home/pk/OraHome1/lib' > ) > > os.environ[ 'LD_LIBRARY_PATH' ] = > '/home/pk/OraHome1/lib' > > in DCOracle2/DCOracle2.py before the "import dco2" > statement > > ----- > > /home/pk/OraHome1/lib is readable by all ; but tried > the following anyway, to eliminate a > permissions-issue > as the cause: > > As root, copied /home/pk/OraHome1/lib to /oralib ; > tried all of the above, using /oralib > > ----- > > "/home/pk/OraHome1/lib/libclntsh.so.9.0" exists, yet > none of these work. The DCOracle2 module works fine > when I use it from a standalone program. It would be > helpful to know if anyone has gotten this config to > work; also, any suggestions would be appreciated. > > pk > __________________________________________________ Do You Yahoo!? Yahoo! Autos - Get free new car price quotes http://autos.yahoo.com From haering_postgresql@gmx.de Tue Jul 16 04:47:36 2002 From: haering_postgresql@gmx.de (Gerhard =?iso-8859-1?Q?H=E4ring?=) Date: Tue, 16 Jul 2002 05:47:36 +0200 Subject: [DB-SIG] Optional DB-API extensions and mxODBC Message-ID: <20020716034736.GB1069@lilith.my-fqdn.de> I wonder which DB-API modules already support the optional DB-API extensions from PEP 0249. I happen to like them and PySQLite (CVS) already has them, and the next release of pyPgSQL definitely will, too. 
I just looked into mxODBC and was surprised it doesn't support the scroll method of cursors that I was looking for. I was surprised because MAL is the PEP author :-) Gerhard -- mail: gerhard bigfoot de registered Linux user #64239 web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930 public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930 reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b'))) From andy@dustman.net Tue Jul 16 06:03:31 2002 From: andy@dustman.net (Andy Dustman) Date: 16 Jul 2002 01:03:31 -0400 Subject: [DB-SIG] Optional DB-API extensions and mxODBC In-Reply-To: <20020716034736.GB1069@lilith.my-fqdn.de> References: <20020716034736.GB1069@lilith.my-fqdn.de> Message-ID: <1026795811.2737.0.camel@4.0.0.10.in-addr.arpa> On Mon, 2002-07-15 at 23:47, Gerhard Häring wrote: > I wonder which DB-API modules already support the optional DB-API > extensions from PEP 0249. I happen to like them and PySQLite (CVS) > already has them, and the next release of pyPgSQL definitely will, too. MySQLdb does (or 0.9.2 will). -- Andy Dustman PGP: 0x930B8AB6 @ .net http://dustman.net/andy "Cogito, ergo sum." -- Rene Descartes "I yam what I yam and that's all what I yam." -- Popeye From mal@lemburg.com Tue Jul 16 08:39:50 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Tue, 16 Jul 2002 09:39:50 +0200 Subject: [DB-SIG] Optional DB-API extensions and mxODBC References: <20020716034736.GB1069@lilith.my-fqdn.de> Message-ID: <3D33CDC6.5090404@lemburg.com> Gerhard Häring wrote: > I wonder which DB-API modules already support the optional DB-API > extensions from PEP 0249. I happen to like them and PySQLite (CVS) > already has them, and the next release of pyPgSQL definitely will, too. > > I just looked into mxODBC and was surprised it doesn't support the > scroll method of cursors that I was looking for.
I was surprised because MAL is the PEP author :-) It will in version 2.1 :-) Unfortunately, I found that not many database backends support scrollable cursors, some even don't tell you the current position of the cursor within the result set or the size of the result set. mxODBC 2.1 does support most of the extensions, though, and also comes with some new ones. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From david@sundayta.com Tue Jul 16 09:49:23 2002 From: david@sundayta.com (David Warnock) Date: Tue, 16 Jul 2002 09:49:23 +0100 Subject: [DB-SIG] Which db mapping tool? Message-ID: <3D33DE13.3090004@sundayta.com> Hi, I have been using Python a little for a while, finding it very useful for text file processing. Now I would like to start using it for applications where I normally use Java (GUI and Web Apps). I am looking for an OR mapping layer so that a) I can "normally" avoid writing SQL by hand b) I can move easily between different dbms I have seen the following so far MiddleKit (Webware) PyDo (SkunkWeb) I currently use Firebird, MySql and Postgresql but am also interested in using SQLite. I recognise that I will probably have to add support for the Firebird and SQLite drivers to most OR layers as they typically already support MySql and Postgresql. My questions 1. Are there other OR layers worth looking at 2. Any recommendations between these OR Layers 3. I have seen some of the recent discussions about the future of the DBi API for python, have the authors of OR layers been specifically asked about what would make their life easier? Thanks Dave -- David Warnock, Sundayta Ltd. http://www.sundayta.com iDocSys for Document Management. VisibleResults for Fundraising.
Development and Hosting of Web Applications and Sites.

From haering_postgresql@gmx.de Tue Jul 16 10:04:49 2002
From: haering_postgresql@gmx.de (Gerhard Häring)
Date: Tue, 16 Jul 2002 11:04:49 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <3D33DE13.3090004@sundayta.com>
References: <3D33DE13.3090004@sundayta.com>
Message-ID: <20020716090449.GA3781@lilith.my-fqdn.de>

* David Warnock [2002-07-16 09:49 +0100]:
> Hi,
>
> I have been using Python a little for a while, finding it very useful
> for text file processing. Now I would like to start using it for
> applications where I normally use Java (GUI and Web Apps).
>
> I am looking for an OR mapping layer so that
>
> a) I can "normally" avoid writing SQL by hand
> b) I can move easily between different dbms
>
> I have seen the following so far
> MiddleKit (Webware)

I've been trying to add PostgreSQL support to MiddleKit, but didn't finish it. The main problem is that MiddleKit requires something like last_insert_id being available, which isn't the case in PostgreSQL. I tried to make a quick hack with Pg's OIDs, but this didn't work because MiddleKit packs the id column in the lower range of a dword, and something else in the higher range (or the other way round). Pretty stupid design, IMNSHO.

> PyDo (SkunkWeb)
> I currently use Firebird, MySql and Postgresql but am also interested
> in using SQLite. I recognise that I will probably have to add support
> for the Firebird and SQLite drivers to most OR layers as they typically
> already support MySql and Postgresql.

Feel free to ask on the PySQLite mailing lists if you need additional features to support an OR wrapper. We'll happily add it. There will of course be some difficulties to get type support in (you'll need to use pysqlite_client_pragma, as SQLite is typeless).

> My questions
>
> 1. Are there other OR layers worth looking at

Maybe dbObj http://www.valdyas.org/python/dbobj.html

> 2. Any recommendations between these OR Layers

Of all the ones I checked out for Java and Python, I found none of them satisfactory. Ok, I haven't really used them in practise, only in demo projects, but they just didn't work for me. I think they all assume that you can change the DB schema as you like, which very often isn't the case for me. And if they just persist Python objects in a relational database, then I have to ask why not use an OODBMS like ZODB in the first place.

Gerhard
--
mail: gerhard bigfoot de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930
public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))

From fog@initd.org Tue Jul 16 10:15:40 2002
From: fog@initd.org (Federico Di Gregorio)
Date: 16 Jul 2002 11:15:40 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <20020716090449.GA3781@lilith.my-fqdn.de>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de>
Message-ID: <1026810941.1023.7.camel@momo>

On Tue, 2002-07-16 at 11:04, Gerhard Häring wrote:
> * David Warnock [2002-07-16 09:49 +0100]:
> > Hi,
> >
> > I have been using Python a little for a while, finding it very useful
> > for text file processing. Now I would like to start using it for
> > applications where I normally use Java (GUI and Web Apps).
> >
> > I am looking for an OR mapping layer so that
> >
> > a) I can "normally" avoid writing SQL by hand
> > b) I can move easily between different dbms
> >
> > I have seen the following so far
> > MiddleKit (Webware)
>
> I've been trying to add PostgreSQL support to MiddleKit, but didn't
> finish it.
> The main problem is that MiddleKit requires something like
> last_insert_id being available, which isn't the case in PostgreSQL. I

as I already said on the webware ML, psycopg (and other adapters too, I think) supports .lastrowid(), exactly what you need.

--
Federico Di Gregorio
Debian GNU/Linux Developer & Italian Press Contact fog@debian.org
INIT.D Developer fog@initd.org
Debian. The best software from the best people [see above] -- brought to you by One Line Spam

From haering_postgresql@gmx.de Tue Jul 16 10:36:12 2002
From: haering_postgresql@gmx.de (Gerhard Häring)
Date: Tue, 16 Jul 2002 11:36:12 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <1026810941.1023.7.camel@momo>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo>
Message-ID: <20020716093612.GA3921@lilith.my-fqdn.de>

* Federico Di Gregorio [2002-07-16 11:15 +0200]:
> On Tue, 2002-07-16 at 11:04, Gerhard Häring wrote:
> > * David Warnock [2002-07-16 09:49 +0100]:
> > > Hi,
> > >
> > > I have been using Python a little for a while, finding it very useful
> > > for text file processing. Now I would like to start using it for
> > > applications where I normally use Java (GUI and Web Apps).
> > >
> > > I am looking for an OR mapping layer so that
> > >
> > > a) I can "normally" avoid writing SQL by hand
> > > b) I can move easily between different dbms
> > >
> > > I have seen the following so far
> > > MiddleKit (Webware)
> >
> > I've been trying to add PostgreSQL support to MiddleKit, but didn't
> > finish it.
> > The main problem is that MiddleKit requires something like
> > last_insert_id being available, which isn't the case in PostgreSQL. I
>
> as I already said on the webware ML,

I don't have any message from you in my local archive. Maybe it was before I subscribed.

> psycopg (and other adapters too, I think) supports .lastrowid(),
> exactly what you need.

A grep on the psycopg 1.0.9 source doesn't hit a string "lastrowid". And neither does a search on "last_" help. All I get is the last _OID_, which isn't exactly the same, and as I described doesn't quite work with MiddleKit (apart from other problems with OIDs). AFAIK (I asked on #postgresql) it's not even possible, and you'll have to do it in two steps anyway, either: nextval from sequence, then insert row, or insert row, then get currval from sequence.

Am I missing something here?

Gerhard
--
mail: gerhard bigfoot de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930
public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))

From fog@initd.org Tue Jul 16 11:09:10 2002
From: fog@initd.org (Federico Di Gregorio)
Date: 16 Jul 2002 12:09:10 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <20020716093612.GA3921@lilith.my-fqdn.de>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de>
Message-ID: <1026814151.1030.73.camel@momo>

On Tue, 2002-07-16 at 11:36, Gerhard Häring wrote:
> > as I already said on the webware ML,
>
> I don't have any message from you in my local archive. Maybe it was
> before I subscribed.

doh. sorry in that case.

> > psycopg (and other adapters too, I think) supports .lastrowid(),
> > exactly what you need.
>
> A grep on the psycopg 1.0.9 doesn't hit a string "lastrowid". And
> neither does a search on "last_" help. All I get is the last _OID_,
> which isn't exactly the same, and as I described doesn't quite work with
> MiddleKit (apart from other problems with OIDs). AFAIK (I asked on
> #postgresql) it's not even possible, and you'll have to do it in two
> steps anyway, either: nextval from sequence, then insert row, or insert
> row, then get currval from sequence.
>
> Am I missing something here?

psycopg 1.0.9 has lastoid(), 1.1 series lastrowid(). don't know if this is what you need but you *can* use the oid returned by lastrowid() to access a newly inserted row. and if you want to use a "serial" type, yes, you need a two-step process to get the value but I don't see any problem with:

curs.execute("INSERT INTO ...")
curs.execute("SELECT id FROM ... WHERE oid = %d", [curs.lastrowid()])

if you use psycopg 1.1 you can even do:

curs.execute("INSERT INTO ... ; SELECT currval(...)")
curs.fetchone()

--
Federico Di Gregorio
Debian GNU/Linux Developer & Italian Press Contact fog@debian.org
INIT.D Developer fog@initd.org
Happiness is a cup of hot chocolate. Always. -- Me

From gerhard.haering@gmx.de Tue Jul 16 12:03:10 2002
From: gerhard.haering@gmx.de (Gerhard Häring)
Date: Tue, 16 Jul 2002 13:03:10 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <1026814151.1030.73.camel@momo>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo>
Message-ID: <20020716110310.GA5348@lilith.my-fqdn.de>

* Federico Di Gregorio [2002-07-16 12:09 +0200]:
> psycopg 1.0.9 has lastoid(), 1.1 series lastrowid()

The 1.1 name is perhaps slightly misleading, as it still returns the OID.

> don't know if this is what you need but you *can* use the oid returned
> by lastrowid() to access a newly inserted row.

Of course, but it's probably inefficient, as you'll need a separate select like SELECT id FROM mytable WHERE oid={value_of_oid}. I suspect that this kind of SELECT is slow.

> and if you want to use a
> "serial" type, yes, you need a two-step process to get the value but I
> don't see any problem with:
>
> curs.execute("INSERT INTO ...")
> curs.execute("SELECT id FROM ... WHERE oid = %d", [curs.lastrowid()])

This returns the OID, not the value of the SERIAL primary key field.

> if you use psycopg 1.1 you can even do:
>
> curs.execute("INSERT INTO ... ; SELECT currval(...)")
> curs.fetchone()

Yep, that's what I wanted. It would require changes to MiddleKit to add a sequence at table creation time and set the default value of the primary key accordingly. And you'll also have to know the name of the sequence, which IIRC isn't that easy in the MiddleKit code, either. Certainly doable, but more effort than I was willing to put in just to try out MiddleKit. Currently there seem to be some PostgreSQL patches floating around on its mailing list, but I haven't checked these out.
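The two-step SERIAL-id retrieval the thread settles on (insert, then read currval() of the column's sequence on the same connection) can be sketched against any DB API cursor. The helper name, the table/column names, and the stub cursor below are invented for illustration; only the INSERT-then-currval() flow comes from the discussion. PostgreSQL names a SERIAL column's sequence <table>_<column>_seq by default.

```python
# Sketch: insert a row, then fetch the value its SERIAL column received.
# currval() is per-session, so this is safe under concurrency as long as
# both statements run on the same connection.

def insert_and_get_id(cursor, table, column, values):
    """Insert `values` (a dict) and return the fresh SERIAL id (sketch)."""
    cols = ", ".join(values)
    placeholders = ", ".join(["%%(%s)s" % c for c in values])
    cursor.execute("INSERT INTO %s (%s) VALUES (%s)"
                   % (table, cols, placeholders), values)
    cursor.execute("SELECT currval('%s_%s_seq')" % (table, column))
    return cursor.fetchone()[0]

# Minimal stand-in for a DB-API cursor, just to show the call flow
# without a live PostgreSQL connection:
class StubCursor:
    def __init__(self):
        self.statements = []
    def execute(self, sql, params=None):
        self.statements.append(sql)
    def fetchone(self):
        return (42,)  # canned currval() result

cur = StubCursor()
new_id = insert_and_get_id(cur, "test", "id", {"name": "foobar"})
# new_id is 42 here (the stub's canned value); against a real PostgreSQL
# connection it would be the SERIAL value the inserted row received.
```

Against a real adapter such as psycopg the StubCursor would simply be replaced by `connection.cursor()`.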
Gerhard
--
mail: gerhard bigfoot de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930
public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))

From fog@initd.org Tue Jul 16 12:38:44 2002
From: fog@initd.org (Federico Di Gregorio)
Date: 16 Jul 2002 13:38:44 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <20020716110310.GA5348@lilith.my-fqdn.de>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo> <20020716110310.GA5348@lilith.my-fqdn.de>
Message-ID: <1026819525.1030.97.camel@momo>

On Tue, 2002-07-16 at 13:03, Gerhard Häring wrote:
> * Federico Di Gregorio [2002-07-16 12:09 +0200]:
> > psycopg 1.0.9 has lastoid(), 1.1 series lastrowid()
>
> The 1.1 name is perhaps slightly misleading, as it still returns the
> OID.

erm. what do you mean by ROWID?

--
Federico Di Gregorio
Debian GNU/Linux Developer & Italian Press Contact fog@debian.org
INIT.D Developer fog@initd.org
Happiness is a cup of hot chocolate. Always. -- Me

From gerhard.haering@gmx.de Tue Jul 16 13:10:08 2002
From: gerhard.haering@gmx.de (Gerhard Häring)
Date: Tue, 16 Jul 2002 14:10:08 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <1026819525.1030.97.camel@momo>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo> <20020716110310.GA5348@lilith.my-fqdn.de> <1026819525.1030.97.camel@momo>
Message-ID: <20020716121008.GA5719@lilith.my-fqdn.de>

* Federico Di Gregorio [2002-07-16 13:38 +0200]:
> On Tue, 2002-07-16 at 13:03, Gerhard Häring wrote:
> > * Federico Di Gregorio [2002-07-16 12:09 +0200]:
> > > psycopg 1.0.9 has lastoid(), 1.1 series lastrowid()
> >
> > The 1.1 name is perhaps slightly misleading, as it still returns the
> > OID.
>
> erm. what do you mean by ROWID?

create table test (id serial, name varchar(20));
insert into test(name) values ('foobar');

By ROWID, I mean the value the serial primary key gets in that row. If the 'rowid' term has a different specific meaning, I wasn't aware of it.

Gerhard
--
mail: gerhard bigfoot de registered Linux user #64239
web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930
public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930
reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b')))

From fog@initd.org Tue Jul 16 13:22:20 2002
From: fog@initd.org (Federico Di Gregorio)
Date: 16 Jul 2002 14:22:20 +0200
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <20020716121008.GA5719@lilith.my-fqdn.de>
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo> <20020716110310.GA5348@lilith.my-fqdn.de> <1026819525.1030.97.camel@momo> <20020716121008.GA5719@lilith.my-fqdn.de>
Message-ID: <1026822141.1030.105.camel@momo>

On Tue, 2002-07-16 at 14:10, Gerhard Häring wrote:
> * Federico Di Gregorio [2002-07-16 13:38 +0200]:
> > On Tue, 2002-07-16 at 13:03, Gerhard Häring wrote:
> > > * Federico Di Gregorio [2002-07-16 12:09 +0200]:
> > > > psycopg 1.0.9 has lastoid(), 1.1 series lastrowid()
> > >
> > > The 1.1 name is perhaps slightly misleading, as it still returns the
> > > OID.
> >
> > erm. what do you mean by ROWID?
>
> create table test (id serial, name varchar(20));
> insert into test(name) values ('foobar');
>
> By ROWID, I mean the value the serial primary key gets in that row. If
> the 'rowid' term has a different specific meaning, I wasn't aware of it.

imo, rowid _has_ a different meaning. if you have two serials or more, to which does rowid refer? and if you don't have *any* serial? rowid should identify a row unambiguously (does such a word even exist in English?), and the oid does exactly that.

--
Federico Di Gregorio
Debian GNU/Linux Developer & Italian Press Contact fog@debian.org
INIT.D Developer fog@initd.org
Don't dream it. Be it. -- Dr.
Frank'n'further

From matt@zope.com Tue Jul 16 14:44:20 2002
From: matt@zope.com (Matthew T. Kromer)
Date: Tue, 16 Jul 2002 09:44:20 -0400
Subject: [DB-SIG] Which db mapping tool?
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo> <20020716110310.GA5348@lilith.my-fqdn.de> <1026819525.1030.97.camel@momo> <20020716121008.GA5719@lilith.my-fqdn.de> <1026822141.1030.105.camel@momo>
Message-ID: <3D342334.3090804@zope.com>

Federico Di Gregorio wrote:

> imo, rowid _has_ a different meaning. if you have two serials or more to
> which does rowid refers? and if you don't have *any* serial? rowid
> should identify unambiguously (does such a word even exists in english?)
> a row and the oid does exactly that.

The ROWID in Oracle represents an encoded record position in the database. It is guaranteed to be unique, and also can represent which database (if the server has several databases) the record was served out of. The ROWID is synthetic, I think. You can usually either query the ROWID explicitly, or it is implicitly returned on queries and can be obtained by an attribute on the statement handle (I think!). I make DCOracle2 return ROWIDs (SQLT_RDD) as an opaque type, albeit one that is useful for looping back around and feeding to Oracle.

example:

>>> c.execute('select rowid, name from test where id=43')
1
>>> r = c.fetchone()
>>> print r
[, None]
>>> rid = r[0]
>>> c.execute('select name, id from test where rowid=:1', rid)
1
>>> r = c.fetchall()
>>> print r
[[None, 43]]
>>>

Oracle also has dbms utility functions that can decode the rowid.
Mozilla is about to crash on me or I'd show an example (I'll be lucky to send this mail!)

--
Matt Kromer
Zope Corporation http://www.zope.com/

From iiourov@yahoo.com Tue Jul 16 17:50:39 2002
From: iiourov@yahoo.com (Ilia Iourovitski)
Date: Tue, 16 Jul 2002 09:50:39 -0700 (PDT)
Subject: [DB-SIG] Which db mapping tool?
In-Reply-To: <3D33DE13.3090004@sundayta.com>
Message-ID: <20020716165039.28267.qmail@web20706.mail.yahoo.com>

I looked for a decent OR mapper in Python land and didn't find any. Probably building SQL queries is too easy in Python. Your best bet is to use Castor or ObjectBridge with Jython.

Ilia

--- David Warnock wrote:
> Hi,
>
> I have been using Python a little for a while, finding it very useful
> for text file processing. Now I would like to start using it for
> applications where I normally use Java (GUI and Web Apps).
>
> I am looking for an OR mapping layer so that
>
> a) I can "normally" avoid writing SQL by hand
> b) I can move easily between different dbms
>
> I have seen the following so far
> MiddleKit (Webware)
> PyDo (SkunkWeb)
>
> I currently use Firebird, MySql and Postgresql but am also interested in
> using SQLite. I recognise that I will probably have to add support for
> the Firebird and SQLite drivers to most OR layers as they typically
> already support MySql and Postgresql.
>
> My questions
>
> 1. Are there other OR layers worth looking at
> 2. Any recommendations between these OR Layers
> 3. I have seen some of the recent discussions about the future of the
> DBi API for python, have the authors of OR layers been specifically
> asked about what would make their life easier?
>
> Thanks
>
> Dave
> --
> David Warnock, Sundayta Ltd. http://www.sundayta.com
> iDocSys for Document Management. VisibleResults for Fundraising.
> Development and Hosting of Web Applications and
> Sites.
>
> _______________________________________________
> DB-SIG maillist - DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

__________________________________________________
Do You Yahoo!?
Yahoo! Autos - Get free new car price quotes http://autos.yahoo.com

From PaulFriedlander@Danfoss.com Tue Jul 16 20:16:09 2002
From: PaulFriedlander@Danfoss.com (Friedlander Paul)
Date: Tue, 16 Jul 2002 21:16:09 +0200
Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them
Message-ID:

I am using mxODBC to retrieve data from a Postgres database. I am using the latest ActivePython distribution.

When I read data from a column with BYTEA data I get a warning and am told that the data was truncated. I found a comment that this used to be a problem with MySQL databases but has been fixed.

I am using the Windows sub-object and am accessing the database through an ODBC converter provided by insight (running in the windows control panel).

Can anyone shed light on this?

BTW I am using the default ODBC package to write the data and it works fine. I tried mxODBC because trying to read the data with ODBC seems to work for a while but then Python crashes. I ported the whole thing to Linux and tried it with the psycopg package and it worked fine.

Thanks for your help.

Paul Friedlander
Danfoss
paulfriedlander@danfoss.com
410-931-8250

From mal@lemburg.com Tue Jul 16 22:28:04 2002
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 16 Jul 2002 23:28:04 +0200
Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them
References:
Message-ID: <3D348FE4.9010603@lemburg.com>

Friedlander Paul wrote:
> I am using mxODBC to retrieve data from a Postgres database. I am using the
> latest ActivePython distribution.
>
> When I read data from a column with BYTEA data I get a warning and am told
> that the data was truncated. I found a comment that this used to be a
> problem with MySQL databases but has been fixed.
Just guessing here since you don't provide enough information (traceback, log file, versions, etc.): this could be related to a network buffer problem or a problem with the ODBC driver for Postgres. mxODBC doesn't truncate the data -- it's the driver that's truncating it. > I am using the Windows sub-object and am accessing the database through an > ODBC converter provided by insight (running in the windows control panel). > > Can anyone shed light on this? Please post the traceback and give some hint about the size of the data you are requesting. Thanks, -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From anthony@computronix.com Tue Jul 16 23:04:43 2002 From: anthony@computronix.com (Anthony Tuininga) Date: 16 Jul 2002 16:04:43 -0600 Subject: [DB-SIG] cx_Oracle 2.5 Message-ID: <1026857087.9477.45.camel@chl0151.edmonton.computronix.com> What is cx_Oracle? cx_Oracle is a Python extension module that allows access to Oracle and conforms to the Python database API 2.0 specifications with a few exceptions. Where do I get it? http://computronix.com/utilities What's new? The primary focus of this release was increased performance in certain key areas, elimination of unimplemented parts of the DB-API 2.0 and increased usefulness in a threaded environment. The following list details the changes made in no particular order. 
1) Added flag OPT_NoOracle7 which, if set, assumes that connections are being made to Oracle8 or higher databases; this allows for eliminating the overhead in performing this check at connect time
2) Added flag OPT_NumbersAsStrings which, if set, returns all numbers as strings rather than integers or floats; this flag is used when defined variables are created (during select statements only)
3) Added flag OPT_Threading which, if set, uses OCI threading mode; there is a significant performance degradation in this mode (about 15-20%) but it does allow threads to share connections (threadsafety level 2 according to the Python Database API 2.0); note that in order to support this, Oracle 8i or higher is now required
4) Added Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS pairs where applicable to support threading during blocking OCI calls
5) Added global method attach() to cx_Oracle to support attaching to an existing database handle (as provided by PowerBuilder, for example)
6) Eliminated the cursor method fetchbinds() which was used for returning the list of bind variables after execution to get the values of out variables; the cursor method setinputsizes() was modified to return the list of bind variables and the cursor method execute() was modified to return the list of defined variables in the case of a select statement being executed; these variables have three methods available to them: getvalue([]) to get the value of a variable, setvalue(, ) to set its value and copy(, , ) to copy the value from a variable in a more efficient manner than setvalue(getvalue())
7) Implemented cursor method executemany() which expects a list of dictionaries for the arguments
8) Implemented cursor method callproc()
9) Added cursor method prepare() which parses (prepares) the statement for execution; subsequent execute() or executemany() calls can pass None as the statement which will imply use of the previously prepared statement; used for high performance only
10) Added cursor method fetchraw() which will perform a raw fetch of the cursor returning the number of rows thus fetched; this is used to avoid the overhead of generating result sets; used for high performance only
11) Added cursor method executemanyprepared() which is identical to the method executemany() except that it takes a single argument which is the number of times to execute a previously prepared statement and it assumes that the bind variables already have their values set; used for high performance only
12) Added support for rowid being returned in a select statement
13) Added support for comparing dates returned by cx_Oracle
14) Integrated patch from Andre Reitz to set the null ok flag in the description attribute of the cursor
15) Integrated patch from Andre Reitz to setup.py to support compilation with Python 1.5
16) Integrated patch from Benjamin Kearns to setup.py to support compilation on Cygwin

--
Anthony Tuininga
anthony@computronix.com
Computronix
Distinctive Software. Real People.
Suite 200, 10216 - 124 Street NW
Edmonton, AB, Canada T5N 4A3
Phone: (780) 454-3700
Fax: (780) 454-3838
http://www.computronix.com

From david@sundayta.com Wed Jul 17 01:06:25 2002
From: david@sundayta.com (David)
Date: Wed, 17 Jul 2002 01:06:25 +0100
Subject: [DB-SIG] Which db mapping tool?
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de>
Message-ID: <3D34B501.8020606@sundayta.com>

Gerhard,

Thanks for the comments.

> Feel free to ask on the PySQLite mailing lists if you need additional
> features to support an OR wrapper. We'll happily add it. There will of
> course be some difficulties to get type support in (you'll need to use
> pysqlite_client_pragma, as SQLite is typeless).

Thanks

>> 1. Are there other OR layers worth looking at
>
> Maybe dbObj http://www.valdyas.org/python/dbobj.html

This seems to be only interested in mysql in the latest version.

> Of all the ones I checked out for Java and Python, I found none of them
> satisfactory.
> Ok, I haven't really used them in practise, only in demo
> projects, but they just didn't work for me.

Over the last 4 years I have used several (including inhouse) in projects in Java.

> I think they all assume that
> you can change the DB schema as you like, which very often isn't the
> case for me.

Ditto for me; some will work with an existing schema if it meets fairly standard guidelines (a unique numeric primary key is a common requirement for many OR tools).

> And if they just persist Python objects in a relational
> database, then I have to ask why not to use an OODBMS like ZODB in the
> first place.

I don't just do that, we perform complex queries with multiple optional joins and wheres. In the end some of those will need to be hand-coded SQL, but it would be nice to get back standard objects. I would like to use a single OR tool but the projects vary in scale a lot, from web apps with many millions of rows and 100+ tables to small stuff on a Zaurus.

Dave

From david@sundayta.com Wed Jul 17 01:08:42 2002
From: david@sundayta.com (David)
Date: Wed, 17 Jul 2002 01:08:42 +0100
Subject: [DB-SIG] Which db mapping tool?
References: <20020716165039.28267.qmail@web20706.mail.yahoo.com>
Message-ID: <3D34B58A.7030306@sundayta.com>

Ilia,

> Probably building sql query is to easy in Python.

Until larger applications need maintenance, including a) porting to other dbms or b) extending table definitions.

> Your best bet is to use Castor or ObjectBridge with
> Jython.

H'mm, sounds like a lot of overhead; why use Python then?

Dave
--
David Warnock, Sundayta Ltd. http://www.sundayta.com
iDocSys for Document Management. VisibleResults for Fundraising.
Development and Hosting of Web Applications and Sites.

From mal@lemburg.com Wed Jul 17 09:06:28 2002
From: mal@lemburg.com (M.-A.
Lemburg)
Date: Wed, 17 Jul 2002 10:06:28 +0200
Subject: [DB-SIG] cx_Oracle 2.5
References: <1026857087.9477.45.camel@chl0151.edmonton.computronix.com>
Message-ID: <3D352584.5000108@lemburg.com>

Anthony Tuininga wrote:
> What is cx_Oracle?
>
> 7) Implemented cursor method executemany() which expects a list of
> dictionaries for the arguments

As per DB API this should be a sequence of sequences. Dictionaries are not sequences.

> 11) Added cursor method executemanyprepared() which is identical to the
> method executemany() except that it takes a single argument which is
> the number of times to execute a previously prepared statement and
> it assumes that the bind variables already have their values set;
> used for high performance only

Such a method is not needed if you implement the .execute() caching of commands as described in the DB API. .prepare() can also nicely hook into this scheme if you expose the prepared command:

c.prepare('stuff')
c.executemany(c.command, list_of_data_tuples)

--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
_______________________________________________________________________
eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/

From mal@lemburg.com Wed Jul 17 09:15:42 2002
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 17 Jul 2002 10:15:42 +0200
Subject: [DB-SIG] Which db mapping tool?
References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <3D34B501.8020606@sundayta.com>
Message-ID: <3D3527AE.9020504@lemburg.com>

David wrote:
>> [Database abstraction tools]
>> Of all the ones I checked out for Java and Python, I found none of them
>> satisfactory. Ok, I haven't really used them in practise, only in demo
>> projects, but they just didn't work for me.
>
> Over the last 4 years I have used several (including inhouse) in
> projects in Java.
>
>> I think they all assume that
>> you can change the DB schema as you like, which very often isn't the
>> case for me.
>
> Ditto for me, some will work with existing schema if it meets fairly
> standard guidelines (a unique numeric primary key is a common
> requirement for many OR tools).
>
>> And if they just persist Python objects in a relational
>> database, then I have to ask why not to use an OODBMS like ZODB in the
>> first place.

Note that with ZODB you'd lose the ability to do complex queries. Caché seems to be the OODB of choice here.

> I don't just do that, we perform complex queries with multiple optional
> joins and wheres. In the end some of those will need to be hand coded
> sql, but it would be nice to get back standard objects. I would like to
> use a single OR tool but the projects vary in scale a lot from web apps
> with many millions of rows and 100+ tables to small stuff on a zaurus.

I don't think that the number of rows or tables is a problem. It's the approach which you have to get an idea for, e.g. do you want to map object structures to a database, or have the abstraction layer write the SQL for you, or have the abstraction layer manage the schema for you (just to name a few possibilities)?

--
Marc-Andre Lemburg
CEO eGenix.com Software GmbH
_______________________________________________________________________
eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
Python Consulting: http://www.egenix.com/
Python Software: http://www.egenix.com/files/python/

From anthony@computronix.com Wed Jul 17 14:53:33 2002
From: anthony@computronix.com (Anthony Tuininga)
Date: 17 Jul 2002 07:53:33 -0600
Subject: [DB-SIG] cx_Oracle 2.5
In-Reply-To: <3D352584.5000108@lemburg.com>
References: <3D352584.5000108@lemburg.com>
Message-ID: <1026914017.9478.64.camel@chl0151.edmonton.computronix.com>

On Wed, 2002-07-17 at 02:06, M.-A. Lemburg wrote:
> Anthony Tuininga wrote:
> > What is cx_Oracle?
> > > > 7) Implemented cursor method executemany() which expects a list of > > dictionaries for the arguments > > As per DB API this should be a sequence of sequences. Dictionaries > are not sequences. The one problem with passing a list of sequences (lists or tuples) is that it would mean that I would have to use bind by position in this case when I am using bind by name in all other cases. That would seem to me to be inconsistent with the execute() method which expects bind by name, don't you think? To me, it would make more sense that execute() and executemany() would use the same method of binding; otherwise, if I want to use executemany() I have to completely change my SQL statement and my bound variables!!? Comments? > > 11) Added cursor method executemanyprepared() which is identical to > the > > method executemany() except that it takes a single argument which > is > > the number of times to execute a previously prepared statement and > > it assumes that the bind variables already have their values set; > > used for high performance only > > Such a method is not needed if you implement the .execute() > caching of commands as described in the DB API. .prepare() > can also nicely hook into this scheme if you expose the prepared > command: > > c.prepare('stuff') > c.executemany(c.command, list_of_data_tuples) Not true. I support the following construct: defined_vars = c1.execute(some_select_statement) # insert code to transform defined_vars (sequence) to bind_vars (dictionary) c2.setinputsizes(bind_vars) c2.prepare(some_insert_statement) c1.fetchraw(num_rows) c2.executemanyprepared(num_rows) This code permits a straight copy from the fetch of one statement to the input of another. Please keep in mind that this is intended __ONLY__ for high performance code as the database API is more than adequate in terms of features for handling this stuff -- it just performs at about half to one third of the speed! 
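[Editor's note: the binding-style question above can be illustrated with Python's standard sqlite3 module (from later Python versions), which happens to accept both positional ('qmark') and named parameters. This is purely a sketch of the two styles under discussion; cx_Oracle's and mxODBC's actual APIs differ.]

```python
import sqlite3

# Illustration only: sqlite3 accepts both binding styles, so it can
# stand in for the named-vs-positional debate in this thread.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE person (name TEXT, age INTEGER)")

# Positional ('qmark') style: each data row is a sequence.
cur.executemany("INSERT INTO person VALUES (?, ?)",
                [("alice", 30), ("bob", 25)])

# Named style: the statement names its parameters, each row is a
# mapping, and execute()/executemany() take the same shape of data --
# which is exactly the consistency being argued for here.
cur.executemany("INSERT INTO person VALUES (:name, :age)",
                [{"name": "carol", "age": 41}])

cur.execute("SELECT COUNT(*) FROM person")
print(cur.fetchone()[0])  # -> 3
```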
BTW, I do support the prepare() method but I always pass None to the execute() method to indicate that the previously prepared statement ought to be used, rather than passing c.command. > -- > Marc-Andre Lemburg > CEO eGenix.com Software GmbH > _______________________________________________________________________ > eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... > Python Consulting: http://www.egenix.com/ > Python Software: http://www.egenix.com/files/python/ -- Anthony Tuininga anthony@computronix.com Computronix Distinctive Software. Real People. Suite 200, 10216 - 124 Street NW Edmonton, AB, Canada T5N 4A3 Phone: (780) 454-3700 Fax: (780) 454-3838 http://www.computronix.com From mal@lemburg.com Wed Jul 17 15:08:36 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 17 Jul 2002 16:08:36 +0200 Subject: [DB-SIG] cx_Oracle 2.5 References: <3D352584.5000108@lemburg.com> <1026914017.9478.64.camel@chl0151.edmonton.computronix.com> Message-ID: <3D357A64.6030401@lemburg.com> Anthony Tuininga wrote: > On Wed, 2002-07-17 at 02:06, M.-A. Lemburg wrote: > >>Anthony Tuininga wrote: >> >>>What is cx_Oracle? >>> >>> 7) Implemented cursor method executemany() which expects a list of >>> dictionaries for the arguments >> >>As per DB API this should be a sequence of sequences. Dictionaries >>are not sequences. > > > The one problem with passing a list of sequences (lists or tuples) is > that it would mean that I would have to use bind by position in this > case when I am using bind by name in all other cases. That would seem to > me to be inconsistent with the execute() method which expects bind by > name, don't you think? To me, it would make more sense that execute() > and executemany() would use the same method of binding; otherwise, if I > want to use executemany() I have to completely change my SQL statement > and my bound variables!!? Comments? 
Of course, both APIs expose the same interface, that is .execute() expects a sequence as argument as well. Where did you get the idea that according to the DB API a dictionary can be passed to .execute() ? >>>11) Added cursor method executemanyprepared() which is identical to >> >>the >> >>> method executemany() except that it takes a single argument which >> >>is >> >>> the number of times to execute a previously prepared statement and >>> it assumes that the bind variables already have their values set; >>> used for high performance only >> >>Such a method is not needed if you implement the .execute() >>caching of commands as described in the DB API. .prepare() >>can also nicely hook into this scheme if you expose the prepared >>command: >> >>c.prepare('stuff') >>c.executemany(c.command, list_of_data_tuples) > > > Not true. That's how mxODBC works, but maybe I'm missing some magic here. Could it be that you are passing the data in via C arrays rather than Python objects ? > I support the following construct: > > defined_vars = c1.execute(some_select_statement) According to the DB API 2.0 the .execute() return value should not be used anymore (in DB API 1.0 it was used to return the number of affected rows). > # insert code to transform defined_vars (sequence) to bind_vars > (dictionary) > c2.setinputsizes(bind_vars) Again, .setinputsizes() should accept a sequence, not a dictionary as per the DB API. > c2.prepare(some_insert_statement) > c1.fetchraw(num_rows) > c2.executemanyprepared(num_rows) How does the data from cursor c1 get to cursor c2 ? > This code permits a straight copy from the fetch of one statement to the > input of another. Please keep in mind that this is intended __ONLY__ for > high performance code as the database API is more than adequate in terms > of features for handling this stuff -- it just performs at about half to > one third of the speed! I see, so it's just an extension for performance reasons. 
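[Editor's note: the statement-caching scheme alluded to above can be sketched in a few lines. Every name here is hypothetical -- there is no real driver behind it -- but it shows how handing the cached command back to .execute() lets the prepare step be skipped.]

```python
# Toy model of the .execute() statement cache described in the DB API
# discussion above; all names are illustrative, not a real driver.
class Cursor:
    def __init__(self):
        self._cache = {}      # SQL string -> "prepared" statement handle
        self.command = None   # last prepared SQL (mxODBC-style attribute)

    def _prepare(self, sql):
        # Stand-in for the driver's (expensive) prepare step.
        return ("prepared", sql)

    def execute(self, sql, params=()):
        handle = self._cache.get(sql)
        if handle is None:             # first use: prepare and cache
            handle = self._cache[sql] = self._prepare(sql)
        self.command = sql
        return handle                  # a real driver would bind and run

c = Cursor()
h1 = c.execute("INSERT INTO t VALUES (?)", (1,))
h2 = c.execute(c.command, (2,))        # same SQL -> prepare is skipped
print(h1 is h2)  # -> True
```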
> BTW, I do support the prepare() method but I always pass None to the > execute() method to indicate that the previously prepared statement > ought to be used, rather than passing c.command. That's tricky... c.command would be more explicit and is also inline with the specification w/r to the documented caching mechanism for .execute() et al. Anyway, just a suggestion. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From anthony@computronix.com Wed Jul 17 15:43:08 2002 From: anthony@computronix.com (Anthony Tuininga) Date: 17 Jul 2002 08:43:08 -0600 Subject: [DB-SIG] cx_Oracle 2.5 In-Reply-To: <3D357A64.6030401@lemburg.com> References: <3D357A64.6030401@lemburg.com> Message-ID: <1026916992.12711.19.camel@chl0151.edmonton.computronix.com> On Wed, 2002-07-17 at 08:08, M.-A. Lemburg wrote: > Anthony Tuininga wrote: > > On Wed, 2002-07-17 at 02:06, M.-A. Lemburg wrote: > > > >>Anthony Tuininga wrote: > >> > >>>What is cx_Oracle? > >>> > >>> 7) Implemented cursor method executemany() which expects a list of > >>> dictionaries for the arguments > >> > >>As per DB API this should be a sequence of sequences. Dictionaries > >>are not sequences. > > > > > > The one problem with passing a list of sequences (lists or tuples) is > > that it would mean that I would have to use bind by position in this > > case when I am using bind by name in all other cases. That would seem > to > > me to be inconsistent with the execute() method which expects bind by > > name, don't you think? To me, it would make more sense that execute() > > and executemany() would use the same method of binding; otherwise, if > I > > want to use executemany() I have to completely change my SQL statement > > and my bound variables!!? Comments? 
> > Of course, both APIs expose the same interface, that is .execute() > expects a sequence as argument as well. Where did you get the idea > that according to the DB API a dictionary can be passed to > .execute() ? >From the following statement in the DB API 2.0 document. It immediately follows the syntax declaration for the cursor method execute(). Note that it explicitly states that a sequence or mapping can be used depending on the module's paramstyle attribute -- for my module, that paramstyle is "named", which implies a dictionary. Thus my thought that executemany(), which is just an extension of execute() ought to work the same way. -------------------------------- QUOTE --------------------------------- Prepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified in a database-specific notation (see the module's paramstyle attribute for details). [5] -------------------------------- QUOTE --------------------------------- > >>>11) Added cursor method executemanyprepared() which is identical to > >> > >>the > >> > >>> method executemany() except that it takes a single argument which > >> > >>is > >> > >>> the number of times to execute a previously prepared statement > and > >>> it assumes that the bind variables already have their values set; > >>> used for high performance only > >> > >>Such a method is not needed if you implement the .execute() > >>caching of commands as described in the DB API. .prepare() > >>can also nicely hook into this scheme if you expose the prepared > >>command: > >> > >>c.prepare('stuff') > >>c.executemany(c.command, list_of_data_tuples) > > > > > > Not true. > > That's how mxODBC works, but maybe I'm missing some > magic here. Could it be that you are passing the data > in via C arrays rather than Python objects ? Actually, I am not passing anything at all which is the reason for the performance gain. 
Normally, fetchone(), fetchmany() or fetchall() create a tuple or sequence of tuples. The method I employed -- fetchraw() -- does nothing of the sort. Normally, executemany() expects a list of > sequences (or dictionaries in my opinion if "named" paramstyle is used) so that also needs to be created and passed to the function. The method I employed -- executemanyprepared() -- does not have that requirement. As I stated before, this allows for significant performance benefits at the expense of portability. I only use this in cases where it is imperative that the program perform as fast as it can. BTW, with these methods my Python programs now outperform my C++ programs that I wrote earlier. > > I support the following construct: > > > > defined_vars = c1.execute(some_select_statement) > > According to the DB API 2.0 the .execute() return value > should not be used anymore (in DB API 1.0 it was used > to return the number of affected rows). Actually, it states that the return value is undefined. If you want to follow the API, ignore the return value; otherwise, I am telling you what the return value is so that you can use it if you don't care about portability because you want high performance. > > # insert code to transform defined_vars (sequence) to bind_vars > > (dictionary) > > c2.setinputsizes(bind_vars) > > Again, .setinputsizes() should accept a sequence, not > a dictionary as per the DB API. Again, setinputsizes() ought to follow the same paramstyle as execute() and executemany(), otherwise what point would there be in having setinputsizes()? Are you suggesting that there be some automagical method of transforming lists into dictionaries? Or are you suggesting that the "named" paramstyle ought to be banned from the DB API? > > c2.prepare(some_insert_statement) > > c1.fetchraw(num_rows) > > c2.executemanyprepared(num_rows) > > How does the data from cursor c1 get to cursor c2 ?
I have bound the variable that is being populated on the select statement directly to the variable that is being bound on the insert statement. Again, this violates the DB API as it introduces a new type for variables but you don't have to use it; you just won't get the performance. > > This code permits a straight copy from the fetch of one statement to > the > > input of another. Please keep in mind that this is intended __ONLY__ > for > > high performance code as the database API is more than adequate in > terms > > of features for handling this stuff -- it just performs at about half > to > > one third of the speed! > > I see, so it's just an extension for performance reasons. BINGO! I generally don't use these extensions but they are handy when needed. > > BTW, I do support the prepare() method but I always pass None to the > > execute() method to indicate that the previously prepared statement > > ought to be used, rather than passing c.command. > > That's tricky... c.command would be more explicit and is > also inline with the specification w/r to the documented > caching mechanism for .execute() et al. Anyway, just a > suggestion. I don't think of it as tricky -- but then I've been using it for quite some time already.... :-) BTW, I didn't see anything in PEP 249 with respect to the cursor attribute "command". Did I miss something? > -- > Marc-Andre Lemburg > CEO eGenix.com Software GmbH > _______________________________________________________________________ > eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... > Python Consulting: http://www.egenix.com/ > Python Software: http://www.egenix.com/files/python/ -- Anthony Tuininga anthony@computronix.com Computronix Distinctive Software. Real People. Suite 200, 10216 - 124 Street NW Edmonton, AB, Canada T5N 4A3 Phone: (780) 454-3700 Fax: (780) 454-3838 http://www.computronix.com From mal@lemburg.com Wed Jul 17 17:25:56 2002 From: mal@lemburg.com (M.-A.
Lemburg) Date: Wed, 17 Jul 2002 18:25:56 +0200 Subject: [DB-SIG] cx_Oracle 2.5 References: <3D357A64.6030401@lemburg.com> <1026916992.12711.19.camel@chl0151.edmonton.computronix.com> Message-ID: <3D359A94.1010703@lemburg.com> Anthony Tuininga wrote: >>Of course, both APIs expose the same interface, that is .execute() >>expects a sequence as argument as well. Where did you get the idea >>that according to the DB API a dictionary can be passed to >>.execute() ? > > >>From the following statement in the DB API 2.0 document. It immediately > follows the syntax declaration for the cursor method execute(). Note > that it explicitly states that a sequence or mapping can be used > depending on the module's paramstyle attribute -- for my module, that > paramstyle is "named", which implies a dictionary. Thus my thought that > executemany(), which is just an extension of execute() ought to work the > same way. > > -------------------------------- QUOTE --------------------------------- > Prepare and execute a database operation (query or command). Parameters > may be provided as sequence or mapping and will be bound to variables in > the operation. Variables are specified in a database-specific notation > (see the module's paramstyle attribute for details). [5] > -------------------------------- QUOTE --------------------------------- Ah sorry, I forgot about that addition. The trick was that __getitem__ is used for finding the parameters in the data object. Strikes me as rather uncommon, though. >>That's how mxODBC works, but maybe I'm missing some >>magic here. Could it be that you are passing the data >>in via C arrays rather than Python objects ? > > Actually, I am not passing anything at all which is the reason for the > performance gain. Normally, fetchone(), fetchmany() or fetchall() create > a tuple or sequence of tuples. The method I employed -- fetchraw() -- > does nothing of the sort. 
Normally, executemany() expects a list of > sequences (or dictionaries in my opinion if "named" paramstyle is used) > so that also needs to be created and passed to the function. the method > I employed -- executemanyprepared() -- does not have that requirement. > As I stated before, this allows for significant performance benefits at > the expense of portability. I only use this in cases where it is > imperative that the program perform as fast as it can. BTW, with these > methods my Python programs now outperform my C++ programs that I wrote > earlier. > > >>>I support the following construct: >>> >>>defined_vars = c1.execute(some_select_statement) >> >>According to the DB API 2.0 the .execute() return value >>should not be used anymore (in DB API 1.0 it was used >>to return the number of affected rows). > > Actually, it states that the return value is undefined. If you want to > follow the API, ignore the return value; otherwise, I am telling you > what the return value is so that you can use it if you don't care about > portability because you want high performance. Fair enough. >>># insert code to transform defined_vars (sequence) to bind_vars >>>(dictionary) >>>c2.setinputsizes(bind_vars) >> >>Again, .setinputsizes() should accept a sequence, not >>a dictionary as per the DB API. > > > Again, setinputsizes() ought to follow the same paramstyle as execute() > and executemany(), otherwise what point would there be in having > setinputsizes()? Are you suggesting that there be some automagical > method of transforming lists into dictionaries? Or are you suggesting > that the "named" paramstyle ought to be banned from the DB API? I suppose we simply forgot to add support for this in .setinputsizes() and .setoutputsize(). The reason probably being that those two APIs are hardly ever used in implementations since they were optional right from the start. Their only purpose is providing means to implement more efficient database interaction. 
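[Editor's note: the named/positional mismatch running through this exchange can be bridged mechanically for simple statements. A deliberately naive sketch follows -- it ignores quoted strings and comments, which a real converter must skip, as the rest of the thread points out.]

```python
import re

# Naive paramstyle converter: rewrite 'named' placeholders to 'qmark'
# and reorder a parameter mapping into the matching tuple.
# WARNING: it does not skip quoted strings or comments.
_NAMED = re.compile(r":([A-Za-z_]\w*)")

def named_to_qmark(sql, params):
    names = []
    def repl(match):
        names.append(match.group(1))   # remember the order of appearance
        return "?"
    return _NAMED.sub(repl, sql), tuple(params[n] for n in names)

sql, args = named_to_qmark(
    "SELECT * FROM t WHERE a = :a AND b = :b", {"a": 1, "b": 2})
print(sql)   # -> SELECT * FROM t WHERE a = ? AND b = ?
print(args)  # -> (1, 2)
```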
>>>c2.prepare(some_insert_statement) >>>c1.fetchraw(num_rows) >>>c2.executemanyprepared(num_rows) >> >>How does the data from cursor c1 get to cursor c2 ? > > I have bound the variable that is being populated on the select > statement directly to the variable that is being bound on the insert > statement. Again, this violates the DB API as it introduces a new type > for variables but you don't have to use it; you just won't get the > performance. Ok, but how does c2 know about c1 ? Don't they have to be hooked up to each other ? >>>BTW, I do support the prepare() method but I always pass None to the >>>execute() method to indicate that the previously prepared statement >>>ought to be used, rather than passing c.command. >> >>That's tricky... c.command would be more explicit and is >>also inline with the specification w/r to the documented >>caching mechanism for .execute() et al. Anyway, just a >>suggestion. > > > I don't think of it as tricky -- but then I've been using it for quite > some time already.... :-) BTW, I didn't see anything PEP 249 with > respect to the cursor attribute "command". Did I miss something? No. It's not part of the spec or the "standard extensions", just something I implemented in mxODBC. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From tjenkins@devis.com Wed Jul 17 17:21:58 2002 From: tjenkins@devis.com (Tom Jenkins) Date: 17 Jul 2002 12:21:58 -0400 Subject: [DB-SIG] cx_Oracle 2.5 In-Reply-To: <3D359A94.1010703@lemburg.com> References: <3D357A64.6030401@lemburg.com> <1026916992.12711.19.camel@chl0151.edmonton.computronix.com> <3D359A94.1010703@lemburg.com> Message-ID: <1026922919.6814.15.camel@asimov> On Wed, 2002-07-17 at 12:25, M.-A. 
Lemburg wrote: > > > Anthony Tuininga wrote: > > > > -------------------------------- QUOTE --------------------------------- > > Prepare and execute a database operation (query or command). Parameters > > may be provided as sequence or mapping and will be bound to variables in > > the operation. Variables are specified in a database-specific notation > > (see the module's paramstyle attribute for details). [5] > > -------------------------------- QUOTE --------------------------------- > > Ah sorry, I forgot about that addition. The trick was that > __getitem__ is used for finding the parameters in the data > object. > > Strikes me as rather uncommon, though. > Actually this is our most common way to send parameters. i'd estimate that around 85% of our execute calls use mappings to hold the parameters. it just seems to fit really well into how we work. -- Tom Jenkins Development InfoStructure http://www.devis.com From anthony@computronix.com Wed Jul 17 18:02:39 2002 From: anthony@computronix.com (Anthony Tuininga) Date: 17 Jul 2002 11:02:39 -0600 Subject: [DB-SIG] cx_Oracle 2.5 In-Reply-To: <3D359A94.1010703@lemburg.com> References: <3D359A94.1010703@lemburg.com> Message-ID: <1026925364.12710.34.camel@chl0151.edmonton.computronix.com> On Wed, 2002-07-17 at 10:25, M.-A. Lemburg wrote: > > Anthony Tuininga wrote: > > -------------------------------- QUOTE > --------------------------------- > > Prepare and execute a database operation (query or command). > Parameters > > may be provided as sequence or mapping and will be bound to variables > in > > the operation. Variables are specified in a database-specific notation > > (see the module's paramstyle attribute for details). [5] > > -------------------------------- QUOTE > --------------------------------- > > Ah sorry, I forgot about that addition. The trick was that > __getitem__ is used for finding the parameters in the data > object. > > Strikes me as rather uncommon, though. No problem. 
Since Oracle uses named parameter passing and actually recommends it, I have grown rather used to named parameters and would find it horrible to have to use the "?" syntax that a number of other databases are stuck with.... :-) > >>># insert code to transform defined_vars (sequence) to bind_vars > >>>(dictionary) > >>>c2.setinputsizes(bind_vars) > >> > >>Again, .setinputsizes() should accept a sequence, not > >>a dictionary as per the DB API. > > > > > > Again, setinputsizes() ought to follow the same paramstyle as > execute() > > and executemany(), otherwise what point would there be in having > > setinputsizes()? Are you suggesting that there be some automagical > > method of transforming lists into dictionaries? Or are you suggesting > > that the "named" paramstyle ought to be banned from the DB API? > > I suppose we simply forgot to add support for this in > .setinputsizes() and .setoutputsize(). The reason probably > being that those two APIs are hardly ever used in > implementations since they were optional right from the > start. Their only purpose is providing means to implement > more efficient database interaction. Certainly. Perhaps since you are the author of PEP 249 you wouldn't mind making those changes in the API? That might make it a little clearer and then we wouldn't have to have this discussion again in a few months when we have all forgotten again.... :-) > >>>c2.prepare(some_insert_statement) > >>>c1.fetchraw(num_rows) > >>>c2.executemanyprepared(num_rows) > >> > >>How does the data from cursor c1 get to cursor c2 ? > > I have bound the variable that is being populated on the select > > statement directly to the variable that is being bound on the insert > > statement. Again, this violates the DB API as it introduces a new type > > for variables but you don't have to use it; you just won't get the > > performance. > > Ok, but how does c2 know about c1 ? Don't they have to be > hooked up to each other ? No. The __variables__ are bound. 
Thus, when the fetch occurs, the variables are modified and when the execute occurs, the variables are read. All data transfer is done by Oracle, not by the Python module. > >>>BTW, I do support the prepare() method but I always pass None to the > >>>execute() method to indicate that the previously prepared statement > >>>ought to be used, rather than passing c.command. > >> > >>That's tricky... c.command would be more explicit and is > >>also inline with the specification w/r to the documented > >>caching mechanism for .execute() et al. Anyway, just a > >>suggestion. > > > > > > I don't think of it as tricky -- but then I've been using it for quite > > some time already.... :-) BTW, I didn't see anything PEP 249 with > > respect to the cursor attribute "command". Did I miss something? > > No. It's not part of the spec or the "standard extensions", > just something I implemented in mxODBC. > > -- > Marc-Andre Lemburg > CEO eGenix.com Software GmbH > _______________________________________________________________________ > eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... > Python Consulting: http://www.egenix.com/ > Python Software: http://www.egenix.com/files/python/ -- Anthony Tuininga anthony@computronix.com Computronix Distinctive Software. Real People. Suite 200, 10216 - 124 Street NW Edmonton, AB, Canada T5N 4A3 Phone: (780) 454-3700 Fax: (780) 454-3838 http://www.computronix.com From mal@lemburg.com Wed Jul 17 18:02:28 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 17 Jul 2002 19:02:28 +0200 Subject: [DB-SIG] cx_Oracle 2.5 References: <3D357A64.6030401@lemburg.com> <1026916992.12711.19.camel@chl0151.edmonton.computronix.com> <3D359A94.1010703@lemburg.com> <1026922919.6814.15.camel@asimov> Message-ID: <3D35A324.1080601@lemburg.com> Tom Jenkins wrote: > On Wed, 2002-07-17 at 12:25, M.-A. 
Lemburg wrote: > >> >>Anthony Tuininga wrote: >> >>>-------------------------------- QUOTE --------------------------------- >>> Prepare and execute a database operation (query or command). Parameters >>>may be provided as sequence or mapping and will be bound to variables in >>>the operation. Variables are specified in a database-specific notation >>>(see the module's paramstyle attribute for details). [5] >>>-------------------------------- QUOTE --------------------------------- >> >>Ah sorry, I forgot about that addition. The trick was that >>__getitem__ is used for finding the parameters in the data >>object. >> >>Strikes me as rather uncommon, though. >> > > > Actually this is our most common way to send parameters. i'd estimate > that around 85% of our execute calls use mappings to hold the > parameters. it just seems to fit really well into how we work. Interesting. Porting such an application to a sequence based DB API module must be a nightmare though... I think we really need some sort of standard support for this: a function which takes an SQL string, a parameter object (or sequence of such object) and a paramstyle and converts it to whichever other paramstyle format is needed. Any volunteers ? -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From jacobs@penguin.theopalgroup.com Wed Jul 17 18:17:57 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Wed, 17 Jul 2002 13:17:57 -0400 (EDT) Subject: [DB-SIG] cx_Oracle 2.5 In-Reply-To: <3D35A324.1080601@lemburg.com> Message-ID: On Wed, 17 Jul 2002, M.-A. Lemburg wrote: > Interesting. Porting such an application to a sequence based > DB API module must be a nightmare though... 
> > I think we really need some sort of standard support for this: > a function which takes an SQL string, a parameter object (or > sequence of such object) and a paramstyle > and converts it to whichever other paramstyle format > is needed. > > Any volunteers ? Got one -- except that it is really slow, and needs to be taught all the details of the various SQL dialects. (i.e., it is a full SQL parser) A much simpler version could easily be written that only knows how to tokenize SQL and about a few syntactic landmarks. Before I start, can people tell me all the wacky things that one can do with parameters in SQL statements? I have a sneaky feeling that I don't know the whole story. i.e., I'm sure some users attempt the following abuse of bound parameters: paramstyle = 'format' sql = '''SELECT foo_%s from bar;''' Where binding for, e.g. dbcon.execute(sql, 'a'), is implemented as sql % 'a'. -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From mal@lemburg.com Wed Jul 17 21:37:13 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Wed, 17 Jul 2002 22:37:13 +0200 Subject: [DB-SIG] cx_Oracle 2.5 References: Message-ID: <3D35D579.4070306@lemburg.com> Kevin Jacobs wrote: > On Wed, 17 Jul 2002, M.-A. Lemburg wrote: > >>Interesting. Porting such an application to a sequence based >>DB API module must be a nightmare though... >> >>I think we really need some sort of standard support for this: >>a function which takes an SQL string, a parameter object (or >>sequence of such object) and a paramstyle >>and converts it to whichever other paramstyle format >>is needed. >> >>Any volunteers ? > > > Got one -- except that it is really slow, and needs to be taught all the > details of the various SQL dialects.
(i.e., it is a full SQL parser) > A much simpler verion could easily be written that only knows how to > tokenize SQL and about a few syntactic landmarks. I don't think you need to tokenize the SQL. The API should take the paramstyle used in the SQL as parameter and then you can extract the positions of the parameters easily using e.g. re. You will only need to watch out for quoting. Here are the defined paramstyles: 'qmark' Question mark style, e.g. '...WHERE name=?' 'numeric' Numeric, positional style, e.g. '...WHERE name=:1' 'named' Named style, e.g. '...WHERE name=:name' 'format' ANSI C printf format codes, e.g. '...WHERE name=%s' 'pyformat' Python extended format codes, e.g. '...WHERE name=%(name)s' > Before I start, can people tell me all the wacky things that one can do with > parameters in SQL statements? I have a sneaky feeling that I don't know the > whole story. > > i.e., I'm sure some users attempt the following abuse of bound parameters: > > paramstyle = 'format' > > sql = '''SELECT foo_%s from bar;''' > > Where binding for, e.g. dbcon.execute(sql, 'a'), is implemented as > sql % 'a'. This is possible, even "select ... where x < %5.2f" would be. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From jacobs@penguin.theopalgroup.com Wed Jul 17 21:43:05 2002 From: jacobs@penguin.theopalgroup.com (Kevin Jacobs) Date: Wed, 17 Jul 2002 16:43:05 -0400 (EDT) Subject: [DB-SIG] cx_Oracle 2.5 In-Reply-To: <3D35D579.4070306@lemburg.com> Message-ID: On Wed, 17 Jul 2002, M.-A. Lemburg wrote: > > Got one -- except that it is really slow, and needs to be taught all the > > details of the various SQL dialects. 
(i.e., it is a full SQL parser) > > A much simpler version could easily be written that only knows how to > > tokenize SQL and about a few syntactic landmarks. > > I don't think you need to tokenize the SQL. The API > should take the paramstyle used in the SQL as parameter and > then you can extract the positions of the parameters > easily using e.g. re. You will only need to watch out for > quoting. Once we've dealt with quoting, we've essentially done most of the work required to tokenize the input. We may also need to detect the end of statement, since some backends allow multiple statements to be executed at a time, and I'm not sure what happens when nested queries include bound parameters. It's only simple if you leave reporting errors to the backend. -Kevin -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From mal@lemburg.com Thu Jul 18 17:55:44 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 18 Jul 2002 18:55:44 +0200 Subject: [DB-SIG] cx_Oracle 2.5 References: Message-ID: <3D36F310.5060908@lemburg.com> Kevin Jacobs wrote: > On Wed, 17 Jul 2002, M.-A. Lemburg wrote: > >>>Got one -- except that it is really slow, and needs to be taught all the >>>details of the various SQL dialects. (i.e., it is a full SQL parser) >>>A much simpler version could easily be written that only knows how to >>>tokenize SQL and about a few syntactic landmarks. >> >>I don't think you need to tokenize the SQL. The API >>should take the paramstyle used in the SQL as parameter and >>then you can extract the positions of the parameters >>easily using e.g. re. You will only need to watch out for >>quoting. > > > Once we've dealt with quoting, we've essentially done most of the work > required to tokenize the input.
We may also need to detect the end of > statement, since some backends allow multiple statements to be executed at a > time, and I'm not sure what happens when nested queries include bound > parameters. It's only simple if you leave reporting errors to the backend. Let's start simple and get more complicated afterwards. -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From nhodgson@eb2.net.au Thu Jul 18 02:32:10 2002 From: nhodgson@eb2.net.au (Neil Hodgson) Date: Thu, 18 Jul 2002 11:32:10 +1000 Subject: [DB-SIG] INSERT: getting the id Message-ID: <3D361A9A.6010209@eb2.net.au> Dietmar writes: > I'm using DCOracle and cx_Oracle. > After creating a new data set using > cursor.execute("INSERT...") I'd like > to know the id of the new set. > Unfortunately execute doesn't return > the id and neither DA supports the > lastrowid attribute. > Any way to get the id? You can use the returning clause on the insert. Here is some code using DCOracle2 where I am using a sequence to generate unique record IDs: # This is how the sequence is created: #SQL> create sequence ddseq increment by 1 # start with 60 nomaxvalue nocycle cache 10; db = DCOracle2.connect("scott/tiger") cursor = db.cursor() ob = db.BindingArray(1,20,'SQLT_STR') ob[0] = '' cursor.execute(\ "insert into dd values (ddseq.NEXTVAL) " \ "returning id into :1", ob) print ob[0] cursor.close() db.close() The id here is a primary key, not the special 'rowid' which I don't understand. Neil From jno@glasnet.ru Thu Jul 18 08:39:30 2002 From: jno@glasnet.ru (Eugene V. 
Dvurechenski) Date: Thu, 18 Jul 2002 11:39:30 +0400 Subject: [DB-SIG] sql parser In-Reply-To: References: <3D35A324.1080601@lemburg.com> Message-ID: <20020718073930.GX14854@glas.net> just out of context ;-) On Wed, Jul 17, 2002 at 01:17:57PM -0400, Kevin Jacobs wrote: > details of the various SQL dialects. (i.e., it is a full SQL parser) the question is: "is there a ready-made sql parser in/for python?" -- SY, jno (PRIVATE PERSON) [ http://www.glasnet.ru/~jno ] a TeleRoss techie [ http://www.aviation.ru/ ] If God meant man to fly, He'd have given him more money. From PaulFriedlander@Danfoss.com Thu Jul 18 20:50:46 2002 From: PaulFriedlander@Danfoss.com (Friedlander Paul) Date: Thu, 18 Jul 2002 21:50:46 +0200 Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them Message-ID: Thanks for getting back to me. First of all, here is the traceback: >>> c.execute("SELECT raw_data FROM datasets LIMIT 1") >>> a = c.fetchall() Traceback (most recent call last): File "<stdin>", line 1, in ? Warning: ('01004', -2, 'Fetched item was truncated.', 3480) As for the setup. I am using ActivePython V2.2 and the latest stable release of mxODBC. Postgres is running on a Linux box and I am running the client on Windows 2000. The ODBC driver (running on windows) is from insight distribution systems (version 7.01.00.09, PSQLODBC.DLL, 11/27/01). I was originally using the ODBC driver that comes with ActivePython. It worked correctly (the data was returned correctly). However, reading BYTEA columns seemed to make it unstable and crashed Python. All other operations including writing BYTEA columns ran flawlessly. As a workaround, I tried mxODBC. I can query other types of fields but get the traceback above when I try to query a BYTEA field. I haven't tried writing to a BYTEA field. The binary data is typically between 15k and 20k. For the time being, I have ported the client application to Linux using psycopg but this isn't my preferred solution (it works fine in Linux).
Per the Postgres documentation, I am escaping "\", "'", and \x00. I think that I have discovered that 0x0D characters are mysteriously disappearing too and am looking into escaping them also. I hope this gives you enough information to point me in a direction. Thanks again. -----Original Message----- From: M.-A. Lemburg [mailto:mal@lemburg.com] Sent: Tuesday, July 16, 2002 5:28 PM To: Friedlander Paul Cc: 'db-sig@python.org' Subject: Re: [DB-SIG] mxODBC is truncating BLOBs when reading them Friedlander Paul wrote: > I am using mxODBC to retrieve data from a Postgres database. I am using the > latest ActivePython distribution. > > When I read data from a column with BYTEA data I get a warning and am told > that the data was truncated. I found a comment that this used to be a > problem with MySQL databases but has been fixed. Just guessing here since you don't provide enough information (traceback, log file, versions, etc.): this could be related to a network buffer problem or a problem with the ODBC driver for Postgres. mxODBC doesn't truncate the data -- it's the driver that's truncating it. > I am using the Windows sub-object and am accessing the database through an > ODBC converter provided by insight (running in the windows control panel). > > Can anyone shed light on this? Please post the traceback and give some hint about the size of the data you are requesting. Thanks, -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,... Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From mal@lemburg.com Thu Jul 18 21:44:39 2002 From: mal@lemburg.com (M.-A. Lemburg) Date: Thu, 18 Jul 2002 22:44:39 +0200 Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them References: Message-ID: <3D3728B7.60800@lemburg.com> Friedlander Paul wrote: > Thanks for getting back to me. 
> > First of all, here is the traceback: > > >>>>c.execute("SELECT raw_data FROM datasets LIMIT 1") >>>>a = c.fetchall() >>> > Traceback (most recent call last): > File "<stdin>", line 1, in ? > Warning: ('01004', -2, 'Fetched item was truncated.', 3480) Ok, this is a warning which gets thrown as a Python exception. Now, this may or may not be an error. Unfortunately, you can't see the data which the ODBC driver truncated. To disable reporting of warnings, you'd have to recompile mxODBC on Windows, or I could send you a beta version binary of mxODBC 2.1.0 which allows defining an error handler for this purpose. To be able to look deeper into the problem, you should also run a debug version of mxODBC which produces a very verbose log file about what is going on underneath. I think it's best if we take this off the list. Could you also send me a short script which creates a table using the BYTEA data types, inserts some data and then does the above query? This would aid in trying to narrow down the cause. > As for the setup. I am using ActivePython V2.2 and the latest stable release > of mxODBC. Postgres is running > on a Linux box and I am running the client on Windows 2000. The ODBC driver > (running on windows) is from insight distribution systems (version > 7.01.00.09, PSQLODBC.DLL, 11/27/01). > > I was originally using the ODBC driver that comes with ActivePython. It > worked correctly (the data was returned correctly). However, reading BYTEA > columns seemed to make it unstable and crashed Python. All other operations > including writing BYTEA columns ran flawlessly. As a workaround, I tried > mxODBC. I can query other types of fields but get the traceback above when I > try to query a BYTEA field. I haven't tried writing to a BYTEA field. > > The binary data is typically between 15k and 20k. > > For the time being, I have ported the client application to Linux using > psycopg but this isn't my preferred solution (it works fine in Linux).
> > Per the Postgres documentation, I am escaping "\", "'", and \x00. I think > that I have discovered that 0x0D characters are mysteriously disappearing > too and am looking into escaping them also. This hints in a different direction: you should always try to use bound parameters in SQL statements you pass to .execute(). The DB API module will then do the proper escaping for you. With mxODBC you don't even have to worry about different database backends since the ODBC drivers will quote the data for you. > I hope this gives you enough information to point me in a direction. > > Thanks again. > > > -----Original Message----- > From: M.-A. Lemburg [mailto:mal@lemburg.com] > Sent: Tuesday, July 16, 2002 5:28 PM > To: Friedlander Paul > Cc: 'db-sig@python.org' > Subject: Re: [DB-SIG] mxODBC is truncating BLOBs when reading them > > > Friedlander Paul wrote: > >>I am using mxODBC to retrieve data from a Postgres database. I am using > > the > >>latest ActivePython distribution. >> >>When I read data from a column with BYTEA data I get a warning and am told >>that the data was truncated. I found a comment that this used to be a >>problem with MySQL databases but has been fixed. > > > Just guessing here since you don't provide enough information > (traceback, log file, versions, etc.): this could be related > to a network buffer problem or a problem with the ODBC driver > for Postgres. mxODBC doesn't truncate the data -- it's the > driver that's truncating it. > > >>I am using the Windows sub-object and am accessing the database through an >>ODBC converter provided by insight (running in the windows control panel). >> >>Can anyone shed light on this? > > > Please post the traceback and give some hint about the size > of the data you are requesting. > > Thanks, -- Marc-Andre Lemburg CEO eGenix.com Software GmbH _______________________________________________________________________ eGenix.com -- Makers of the Python mx Extensions: mxDateTime,mxODBC,...
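The bound-parameter advice above can be illustrated with a minimal, runnable sketch. It uses Python's sqlite3 module as a stand-in DB-API backend (not what the poster was using; the table name merely echoes the thread): the driver binds the value itself, so binary data containing quotes, backslashes, NUL, or 0x0D bytes needs no manual escaping.

```python
import sqlite3

# Stand-in backend: sqlite3 speaks the 'qmark' paramstyle.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE datasets (raw_data BLOB)")

# Bytes that would all need escaping if spliced into the SQL by hand.
payload = b"quote ' backslash \\ nul \x00 cr \r"

# Bound parameter: the driver handles quoting, the SQL text stays fixed.
cur.execute("INSERT INTO datasets (raw_data) VALUES (?)", (payload,))
cur.execute("SELECT raw_data FROM datasets")
row = cur.fetchone()
assert row[0] == payload
con.close()
```

The same pattern carries over to any DB-API module; only the marker style (`?`, `%s`, `:1`, ...) changes per driver.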
Python Consulting: http://www.egenix.com/ Python Software: http://www.egenix.com/files/python/ From gerhard.haering@gmx.de Thu Jul 18 22:12:11 2002 From: gerhard.haering@gmx.de (Gerhard Häring) Date: Thu, 18 Jul 2002 23:12:11 +0200 Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them In-Reply-To: References: Message-ID: <20020718211211.GA3266@lilith.my-fqdn.de> * Friedlander Paul [2002-07-18 21:50 +0200]: > Thanks for getting back to me. > > First of all, here is the traceback: > > >>> c.execute("SELECT raw_data FROM datasets LIMIT 1") > >>> a = c.fetchall() > Traceback (most recent call last): > File "<stdin>", line 1, in ? > Warning: ('01004', -2, 'Fetched item was truncated.', 3480) > > As for the setup. I am using ActivePython V2.2 and the latest stable > release of mxODBC. Postgres is running on a Linux box and I am running > the client on Windows 2000. Just FYI: you can also use pyPgSQL (even psycopg has a win32 port now) on win32, which won't go thru the additional ODBC layer. Gerhard -- mail: gerhard bigfoot de registered Linux user #64239 web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930 public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930 reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b'))) From PaulFriedlander@Danfoss.com Fri Jul 19 17:03:47 2002 From: PaulFriedlander@Danfoss.com (Friedlander Paul) Date: Fri, 19 Jul 2002 18:03:47 +0200 Subject: [DB-SIG] mxODBC is truncating BLOBs when reading them Message-ID: Thanks for the info. Ultimately, I want to be DB agnostic and be able to use SQL Server also. -----Original Message----- From: Gerhard Häring [mailto:gerhard.haering@gmx.de] Sent: Thursday, July 18, 2002 5:12 PM To: 'db-sig@python.org' Subject: Re: [DB-SIG] mxODBC is truncating BLOBs when reading them * Friedlander Paul [2002-07-18 21:50 +0200]: > Thanks for getting back to me.
> > First of all, here is the traceback: > > >>> c.execute("SELECT raw_data FROM datasets LIMIT 1") > >>> a = c.fetchall() > Traceback (most recent call last): > File "<stdin>", line 1, in ? > Warning: ('01004', -2, 'Fetched item was truncated.', 3480) > > As for the setup. I am using ActivePython V2.2 and the latest stable > release of mxODBC. Postgres is running on a Linux box and I am running > the client on Windows 2000. Just FYI: you can also use pyPgSQL (even psycopg has a win32 port now) on win32, which won't go thru the additional ODBC layer. Gerhard -- mail: gerhard bigfoot de registered Linux user #64239 web: http://www.cs.fhm.edu/~ifw00065/ OpenPGP public key id AD24C930 public key fingerprint: 3FCC 8700 3012 0A9E B0C9 3667 814B 9CAA AD24 C930 reduce(lambda x,y:x+y,map(lambda x:chr(ord(x)^42),tuple('zS^BED\nX_FOY\x0b'))) From rtheiss@yahoo.com Mon Jul 22 19:38:08 2002 From: rtheiss@yahoo.com (Robert Theiss) Date: Mon, 22 Jul 2002 11:38:08 -0700 (PDT) Subject: [DB-SIG] General DB Error Handling In-Reply-To: <1025113080.2197.14.camel@4.0.0.10.in-addr.arpa> Message-ID: <20020722183808.54219.qmail@web10405.mail.yahoo.com> Andy, Thanks for the prompt reply. With your suggestions, and a few others, I finally got what I was looking for. My code now looks like this: # Connect to DB db = None try: db = informixdb.informixdb(dbInstance, dbUser, dbPasswd) except informixdb.InformixdbError: exc_type, exc_value = sys.exc_info()[:2] ExceptionHandler(exc_type, exc_value, DataFile, LogFile, db) The ExceptionHandler is a homegrown routine I wrote to handle exceptions, logging and to determine criticality of the error so either a warning email or a page will be generated. Bob --- Andy Dustman wrote: > On Wed, 2002-06-26 at 11:37, Robert Theiss wrote: > > > What I would like to do is trap any error returned > > from the database, so I can exit the program > > gracefully. A sample output is shown below, when I > > intentionally generate an SQL error.
> > Problem #1: Your module is compiled against the > wrong version of Python. > It looks like it's compiled for 1.5.2 but you are > using a 2.x version. > > > InformixdbError: Error -522 performing PREPARE: > Table > > (part) not selected in query. > > [bobt@monza:/service/bobt] > > > > > The python code that generated this output is > shown > > below: > ... > > try: > > returncode = > apply(db.execute(SqlSelectStatement), > > args) > > except: > > print "return code is %s : " % args > > You want: > > c = db.cursor() > try: > c.execute(SqlSelectStatement, args) # return > value undefined > rows = c.fetchall() > except InformixdbError, e: > print "return code is %s : " % e > > -- > Andy Dustman PGP: 0x930B8AB6 > @ .net http://dustman.net/andy > "Cogito, ergo sum." -- Rene Descartes > "I yam what I yam and that's all what I yam." -- > Popeye > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig __________________________________________________ Do You Yahoo!? Yahoo! Health - Feel better, live better http://health.yahoo.com From andy47@halfcooked.com Tue Jul 23 18:47:09 2002 From: andy47@halfcooked.com (Andy Todd) Date: Tue, 23 Jul 2002 18:47:09 +0100 Subject: [DB-SIG] Which db mapping tool? References: <3D33DE13.3090004@sundayta.com> <20020716090449.GA3781@lilith.my-fqdn.de> <1026810941.1023.7.camel@momo> <20020716093612.GA3921@lilith.my-fqdn.de> <1026814151.1030.73.camel@momo> <20020716110310.GA5348@lilith.my-fqdn.de> <1026819525.1030.97.camel@momo> <20020716121008.GA5719@lilith.my-fqdn.de> <1026822141.1030.105.camel@momo> <3D342334.3090804@zope.com> Message-ID: <3D3D969D.8020609@halfcooked.com> Matthew T. Kromer wrote: > Federico Di Gregorio wrote: > >> imo, rowid _has_ a different meaning. if you have two serials or more to >> which does rowid refer? and if you don't have *any* serial? rowid >> should identify unambiguously (does such a word even exist in English?)
>> a row and the oid does exactly that. >> >> >> > > The ROWID in Oracle represents an encoded record position in the > database. It is guaranteed to be unique, and also can represent which > database (if the server has several databases) the record was served > out of. The ROWID is synthetic, I think. You can usually either query > the ROWID explicitly, or it is implicitly returned on queries and can be > obtained by an attribute on the statement handle (I think!). > Absolutely correct. The ROWID in Oracle is a pseudo column, just like 'user' and 'sysdate'. They are assembled by the database at query execution and returned just like proper column values. They can be used anywhere you would use an actual column name or (in Oracle 8.0 and above) a function name. > I make DCOracle2 return ROWIDS (SQLT_RDD) as an opaque type, albeit one > that is useful for looping back around and feeding to Oracle. example: > > >>> c.execute('select rowid, name from test where id=43') > 1 > >>> r = c.fetchone() > >>> print r > [, None] > >>> rid = r[0] > >>> c.execute('select name, id from test where rowid=:1', rid) > 1 > >>> r = c.fetchall() > >>> print r > [[None, 43]] > >>> > Absolutely the correct way to handle it. If you want to get boring, the rowid is usually four hex numbers (of machine word length) bundled together. As soon as you try and convert them to anything else things get a little scary, unless ... > Oracle also has dbms utility functions that can decode the rowid. > Mozilla is about to crash on me or I'd show an example (I'll be lucky to > send this mail!) > Never used them myself. As a rule of thumb it's fine to use ROWID values in your SQL but I wouldn't want to have them hanging around anywhere else.
Of course, your application code will always refer to specific rows by their primary key column values, won't it ;-) Regards, Andy -- ---------------------------------------------------------------------- From the desk of Andrew J Todd esq - http://www.halfcooked.com From ramrom@earthling.net Wed Jul 24 17:53:46 2002 From: ramrom@earthling.net (Bob Gailer) Date: Wed, 24 Jul 2002 10:53:46 -0600 Subject: [DB-SIG] Which db mapping tool? Message-ID: <5.1.0.14.0.20020724105226.02941cf0@pop.viawest.net> FWIW here's what Oracle says about rowid: "Rowid values have several important uses: -They are the fastest way to access a single row. -They can show you how a table's rows are stored. -They are unique identifiers for rows in a table. You should not use ROWID as a table's primary key. If you delete and reinsert a row with the Import and Export utilities, for example, then its rowid may change. If you delete a row, then Oracle may reassign its rowid to a new row inserted later." Bob Gailer mailto:ramrom@earthling.net 303 442 2625
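As a footnote to the earlier question about retrieving the id of a freshly inserted row without leaning on ROWID, here is a minimal sketch using sqlite3 as a stand-in backend. The `lastrowid` cursor attribute is an optional DB-API extension that not every driver provides (Oracle users would instead combine a sequence with a RETURNING clause, as in the DCOracle2 example earlier in this thread), so treat this as illustrative rather than portable.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# id is an auto-generating integer primary key in sqlite3.
cur.execute("CREATE TABLE dd (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO dd (name) VALUES (?)", ("first",))

# Fetch the key generated for the row we just inserted.
new_id = cur.lastrowid

cur.execute("SELECT name FROM dd WHERE id = ?", (new_id,))
assert cur.fetchone()[0] == "first"
con.close()
```

Keying on the generated primary key, rather than a backend-specific ROWID, keeps the application code honest about which value actually identifies the row.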