From quadric at primenet.com  Wed Jul  2 11:42:58 2003
From: quadric at primenet.com (quadric@primenet.com)
Date: Wed Jul  2 13:51:12 2003
Subject: [DB-SIG] Running a Python script against an embedded mySQL server.
Message-ID: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus>

An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/db-sig/attachments/20030702/5f184abd/attachment.htm

From chris at cogdon.org  Wed Jul  2 12:00:24 2003
From: chris at cogdon.org (Chris Cogdon)
Date: Wed Jul  2 14:00:31 2003
Subject: [DB-SIG] Running a Python script against an embedded mySQL server.
In-Reply-To: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus>
Message-ID: <0E2DFB40-ACB7-11D7-851D-000393B658A2@cogdon.org>

On Wednesday, Jul 2, 2003, at 10:42 US/Pacific, quadric@primenet.com wrote:

> Hi,
>
> I have an application that has an embedded/extended Python
> interpreter. I need to add
> database capabilities and for reasons too lengthy to explain in this
> email, also require
> an embedded database server. I have tentatively chosen mySQL 4.0 due
> to the apparent
> ease of embedding, along with it having all the necessary database
> functionality I need.
>
> The application executes Python expressions as well as running
> complete Python scripts
> via its embedded interpreter. These scripts require database access
> to do their job. I know
> I can connect them to an external stand-alone mySQL server.
>
> The question is, using mySQLdb, can I connect external Python
> scripts that the application
> executes to the embedded mySQL server?
>
> If so, are there any special considerations in doing this?
>
> I am experienced with relational databases and Python but
> I'm new to mySQL & mySQLdb so this is a bit of a newbie question.
The usual chain of libraries between the Python application and the database goes like this:

[Python App.py] - [Python DB-API.py] - [db-client-wrapper.o] - [db-client.o]

(.o means binary files)

In your case, your application will have a combination of .py and .o as the 'application' in this chain. Now, to do what you need, all you'll need to do is to make sure that the db-client-wrapper is also embedded into your application. When the Python DB-API calls the lower-level interface to do the actual database work, it'll just be executing code inside the application, which should work just fine.

Albeit, I've never heard of a full DBMS being embedded into the application, since the application and DBMS usually talk via an IP or UNIX socket... but... that's immaterial to your question :)

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From mal at lemburg.com  Wed Jul  2 21:33:47 2003
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed Jul  2 14:34:18 2003
Subject: [DB-SIG] Running a Python script against an embedded mySQL server.
In-Reply-To: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus>
References: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus>
Message-ID: <3F03258B.2080900@lemburg.com>

quadric@primenet.com wrote:
> Hi,
>
> I have an application that has an embedded/extended Python interpreter. I need
> to add
> database capabilities and for reasons too lengthy to explain in this email, also
> require
> an embedded database server. I have tentatively chosen mySQL 4.0 due to the
> apparent
> ease of embedding, along with it having all the necessary database functionality
> I need.

If you need an embedded database engine, you should have a look at the more powerful sqlite.sf.net. Embedding MySQL would cause your application to go under GPL control. sqlite comes with a BSD style license. It is also faster, offers more features and is very robust.
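To make the embedded arrangement concrete, here is a minimal sketch of a DB-API session against an in-process engine. It assumes Python's later standard-library sqlite3 module, and the table and data are invented for illustration; the point is that the calls are the same ones you would issue against a networked server:

```python
import sqlite3  # standard-library binding to the embedded SQLite engine

# ":memory:" keeps the whole database inside the process: no server,
# no socket -- exactly the embedded arrangement under discussion.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE people (name TEXT, email TEXT)")
cur.execute("INSERT INTO people VALUES (?, ?)", ("alice", "alice@example.org"))
con.commit()

cur.execute("SELECT name, email FROM people")
rows = cur.fetchall()  # ordinary DB-API call; here, a list of tuples
print(rows)
```

Because the engine is linked into the process, every one of those calls executes inside the application rather than crossing a socket.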
You could use the sqlite ODBC driver together with the mxODBC Zope DA to connect to that database backend. -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 02 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From quadric at primenet.com Wed Jul 2 12:35:38 2003 From: quadric at primenet.com (quadric@primenet.com) Date: Wed Jul 2 14:43:51 2003 Subject: [DB-SIG] mySQL vs Matisse vs PostgreSQL ??? Message-ID: <5.1.1.6.2.20030702111726.00b0f9b8@pop3.norton.antivirus> Hi , I have an application that embeds both Python and mySQL. I chose mySQL for a variety of reasons not least of which were reputation ( good as far as I can tell ) and cost. However, I have not gotten so far down the path that a database change would be impractical. I need to use the database both from within the app (written in C++) and from within Python scripts that the app runs. I would like to avoid a bunch of Python object to table mapping code if possible and found that Matisse ( www.matisse.com ) appears to have achieved that goal quite handily. It has both a C++ and Python API. Although I have experience in C++/Python/RDBMS's, I'm new to mySQL and know nothing about Matisse other than what can be downloaded in their 30-day trial Developer Kit. The licensing can be quite expensive (depending on number of final users) and I didn't ask about source but I don't think it is available for the DB engine or Python extension. Does anybody have any info about or experience with Matisse? Any comparisons between Matisse / mySQL / PostgreSQL that would be helpful? The thought of true Python object persistence with accessibility from C++ without a bunch of intermediate object=>table mapping is quite appealing. 
I've read a little on the gadfly module but it doesn't appear that direct access from C++ code is an option. Correct?

BTW: The app is developed under and runs on Windows. [ No flame please :-) ]

Thanks for your input.

From ianb at colorstudy.com  Wed Jul  2 20:35:52 2003
From: ianb at colorstudy.com (Ian Bicking)
Date: Wed Jul  2 15:35:53 2003
Subject: [DB-SIG] mySQL vs Matisse vs PostgreSQL ???
In-Reply-To: <5.1.1.6.2.20030702111726.00b0f9b8@pop3.norton.antivirus>
References: <5.1.1.6.2.20030702111726.00b0f9b8@pop3.norton.antivirus>
Message-ID: <1057174606.720.159.camel@lothlorien>

If you are looking for an embedded database, you should really look at SQLite. But others have already said that. I certainly wouldn't suggest Gadfly, which I don't believe is being actively maintained.

There are several Python ORMs (mapping tables to Python objects). See http://www.python.org/cgi-bin/moinmoin/HigherLevelDatabaseProgramming

My own, SQLObject, supports SQLite. I can't say much about a C++ API, though -- I don't believe any of them have that. Besides db_row, I think they are all pure Python, and any C++ access would have to be the same way you access any Python object. The advantage, though, would be that your less agile code (written in C++) wouldn't be strongly tied to the persistence mechanism, making future concurrency or networkability easier to achieve.

On Wed, 2003-07-02 at 13:35, quadric@primenet.com wrote:
> Hi ,
> I have an application that embeds both Python and mySQL. I chose mySQL for
> a variety of
> reasons not least of which were reputation ( good as far as I can tell )
> and cost.
>
> However, I have not gotten so far down the path that a database change
> would be impractical.
>
> I need to use the database both from within the app (written in C++) and
> from within Python
> scripts that the app runs.
> I would like to avoid a bunch of Python object
> to table mapping
> code if possible and found that Matisse ( www.matisse.com ) appears to have
> achieved that
> goal quite handily. It has both a C++ and Python API.
>
> Although I have experience in C++/Python/RDBMS's, I'm new to mySQL and know
> nothing
> about Matisse other than what can be downloaded in their 30-day trial
> Developer Kit.
>
> The licensing can be quite expensive (depending on number of final users)
> and I didn't ask about source
> but I don't think it is available for the DB engine or Python extension.
>
> Does anybody have any info about or experience with Matisse?
>
> Any comparisons between Matisse / mySQL / PostgreSQL that would be helpful?
>
> The thought of true Python object persistence with accessibility from C++
> without
> a bunch of intermediate object=>table mapping is quite appealing.
>
> I've read a little on the gadfly module but it doesn't appear that direct
> access from C++ code is an
> option. Correct?
>
> BTW: The app is developed under and runs on Windows. [ No flame please :-) ]
>
> Thanks for your input.
>
> _______________________________________________
> DB-SIG maillist  -  DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

From lists at ghaering.de  Thu Jul  3 00:00:08 2003
From: lists at ghaering.de (Gerhard Häring)
Date: Wed Jul  2 17:00:11 2003
Subject: [DB-SIG] Running a Python script against an embedded mySQL server.
In-Reply-To: <3F03258B.2080900@lemburg.com>
References: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus> <3F03258B.2080900@lemburg.com>
Message-ID: <3F0347D8.4060708@ghaering.de>

M.-A. Lemburg wrote:
> quadric@primenet.com wrote:
>
>> Hi,
>>
>> I have an application that has an embedded/extended Python
>> interpreter. I need to add
>> database capabilities [...]
>
> If you need an embedded database engine, you should have a look
> at the more powerful sqlite.sf.net. [...]
The URLs are really:

http://pysqlite.sourceforge.net/
http://www.sqlite.org/

> Embedding MySQL would cause your application to go under GPL control.
> sqlite comes with a BSD style license. [...]

Actually, PySQLite has the Python license and SQLite itself is Public Domain.

> It is also faster, offers more
> features and is very robust.

Yes to these, but SQLite is typeless, which can be annoying at times. Fortunately, PySQLite is programmed such that you usually won't see the typelessness of SQLite :-)

-- Gerhard

From mal at lemburg.com  Thu Jul  3 00:04:38 2003
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed Jul  2 17:05:10 2003
Subject: [DB-SIG] Running a Python script against an embedded mySQL server.
In-Reply-To: <3F0347D8.4060708@ghaering.de>
References: <5.1.1.6.2.20030702103636.00af55e0@pop3.norton.antivirus> <3F03258B.2080900@lemburg.com> <3F0347D8.4060708@ghaering.de>
Message-ID: <3F0348E6.50708@lemburg.com>

Gerhard Häring wrote:
> M.-A. Lemburg wrote:
>
>> quadric@primenet.com wrote:
>>
>>> Hi,
>>>
>>> I have an application that has an embedded/extended Python
>>> interpreter. I need to add
>>> database capabilities [...]
>>
>> If you need an embedded database engine, you should have a look
>> at the more powerful sqlite.sf.net. [...]
>
> The URLs are really:
>
> http://pysqlite.sourceforge.net/
> http://www.sqlite.org/

Thanks for correcting these.

>> Embedding MySQL would cause your application to go under GPL control.
>> sqlite comes with a BSD style license. [...]
>
> Actually, PySQLite has the Python license and SQLite itself is Public
> Domain.

Even better :-)

>> It is also faster, offers more
>> features and is very robust.
>
> Yes to these, but SQLite is typeless, which can be annoying at times.
> Fortunately, PySQLite is programmed such that you usually won't see the
> typelessness of SQLite :-)

--
Marc-Andre Lemburg
eGenix.com

Professional Python Software directly from the Source (#1, Jul 02 2003)
>>> Python/Zope Products & Consulting ...
http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From harri.pasanen at trema.com Thu Jul 3 10:34:40 2003 From: harri.pasanen at trema.com (Harri Pasanen) Date: Thu Jul 3 03:35:14 2003 Subject: [DB-SIG] mySQL vs Matisse vs PostgreSQL ??? In-Reply-To: <5.1.1.6.2.20030702111726.00b0f9b8@pop3.norton.antivirus> References: <5.1.1.6.2.20030702111726.00b0f9b8@pop3.norton.antivirus> Message-ID: <200307030934.40391.harri.pasanen@trema.com> While you are at it, you may wish to check out Metakit: http://www.equi4.com/metakit/ Hope this helps, -Harri On Wednesday 02 July 2003 20:35, quadric@primenet.com wrote: > Hi , > I have an application that embeds both Python and mySQL. I chose > mySQL for a variety of > reasons not least of which were reputation ( good as far as I can > tell ) and cost. > > However, I have not gotten so far down the path that a database > change would be impractical. > > I need to use the database both from within the app (written in > C++) and from within Python > scripts that the app runs. I would like to avoid a bunch of Python > object to table mapping > code if possible and found that Matisse ( www.matisse.com ) appears > to have achieved that > goal quite handily. It has both a C++ and Python API. > > Although I have experience in C++/Python/RDBMS's, I'm new to mySQL > and know nothing > about Matisse other than what can be downloaded in their 30-day > trial Developer Kit. > > The licensing can be quite expensive (depending on number of final > users) and I didn't ask about source > but I don't think it is available for the DB engine or Python > extension. > > Does anybody have any info about or experience with Matisse? > > Any comparisons between Matisse / mySQL / PostgreSQL that would be > helpful? 
> The thought of true Python object persistence with accessibility
> from C++ without
> a bunch of intermediate object=>table mapping is quite appealing.
>
> I've read a little on the gadfly module but it doesn't appear that
> direct access from C++ code is an
> option. Correct?
>
> BTW: The app is developed under and runs on Windows. [ No flame
> please :-) ]
>
> Thanks for your input.
>
> _______________________________________________
> DB-SIG maillist  -  DB-SIG@python.org
> http://mail.python.org/mailman/listinfo/db-sig

From u.theiss at eurodata.de  Fri Jul  4 14:13:22 2003
From: u.theiss at eurodata.de (Ulla Theiss)
Date: Fri Jul  4 07:14:02 2003
Subject: [DB-SIG] Use of OODB or ORDB with python
Message-ID: <3F056152.62F63505@eurodata.de>

Hello list,

because Python is an object oriented language, it might make sense not to split the objects into tables and to use object oriented or object relational databases.

Do you have any experience with it? Which object oriented or object relational database do you prefer for using with Python?

Thanks in advance,
Ulla.

From magnus at thinkware.se  Sat Jul  5 00:30:41 2003
From: magnus at thinkware.se (Magnus Lyckå)
Date: Fri Jul  4 17:25:42 2003
Subject: [DB-SIG] Use of OODB or ORDB with python
In-Reply-To:
Message-ID: <5.2.1.1.0.20030704232222.01fadbe0@www.thinkware.se>

>From: Ulla Theiss
>Date: Fri, 04 Jul 2003 13:13:22 +0200
>
>because python is an object oriented language, it might make sense not
>to split the objects into tables
>and to use object oriented or object relational databases.
>
>Do you have any experience with it?
>Which object oriented or object relational database do you prefer for
>using with python?

I've used ZODB, the Object Database bundled with Zope. (It's also useful on its own.) It's developed by Zope Corporation, the company that has employed the core Python developers, so for instance Python's creator, Guido van Rossum, is one of the ZODB developers.
http://www.zope.org/Wikis/ZODB/FrontPage

There are also simpler object persistence systems that you might want to look at.

http://www.thinkware.se/cgi-bin/thinki.cgi/PersistenceSystems

Finally, you might want to use an object-relational mapper, to automatically map Python instances to SQL table rows. My favourite in that field is Ian Bicking's SQLObject, but there is a whole bunch.

http://www.thinkware.se/cgi-bin/thinki.cgi/ObjectRelationalMappersForPython

--
Magnus Lycka (It's really Lyckå), magnus@thinkware.se
Thinkware AB, Sweden, www.thinkware.se
I code Python ~ The Agile Programming Language

From ianb at colorstudy.com  Sat Jul  5 05:55:28 2003
From: ianb at colorstudy.com (Ian Bicking)
Date: Sat Jul  5 00:55:28 2003
Subject: [DB-SIG] ANN: SQLObject 0.4
Message-ID: <1057380980.514.40.camel@lothlorien>

SQLObject 0.4: http://sqlobject.org

Changes
=======

* New (cleaner) column definition style, including for foreign keys
* Alternate naming conventions supported
* Subclassing supported

What Is SQLObject?
==================

SQLObject is an object-relational mapper, translating RDBMS tables into classes, rows into instances of those classes, allowing you to manipulate those objects to transparently manipulate the database. SQLObject currently supports Postgres, MySQL, and SQLite.

Links
=====

Download: http://prdownloads.sourceforge.net/sqlobject/SQLObject-0.4.tar.gz?download
Documentation: http://sqlobject.org/docs/SQLObject.html
News: http://sqlobject.org/docs/News.html

--
Ian Bicking  ianb@colorstudy.com  http://colorstudy.com
PGP: gpg --keyserver pgp.mit.edu --recv-keys 0x9B9E28B7

From thierry.michel at xtensive.com  Sat Jul  5 08:40:40 2003
From: thierry.michel at xtensive.com (Thierry MICHEL)
Date: Sat Jul  5 03:40:42 2003
Subject: [DB-SIG] Popy and Pygresql projects are merging
Message-ID: <1057390641.2575.99.camel@haru.xtensive.com>

Hi,

The developers of PyGreSQL and PoPy are pleased to announce that they have decided to merge the two projects.
It was felt that the two projects were alike in many ways but with different strengths, which will allow them to create a more powerful product overall.

The full announcement can be read at: http://www.zope.org/Members/tm

Regards.

--
Thierry MICHEL
Xtensive Consulting & Development
19 rue de l'ail 67000 Strasbourg France
Tél: +33 (0)388 322 101
http://www.xtensive.com | "Free your Mind..."

-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 189 bytes
Desc: This is a digitally signed message part
Url : http://mail.python.org/pipermail/db-sig/attachments/20030705/2b1374eb/attachment.bin

From rbeneyt.akis at nerim.fr  Mon Jul  7 12:44:40 2003
From: rbeneyt.akis at nerim.fr (Richard Béneyt)
Date: Mon Jul  7 05:46:44 2003
Subject: [DB-SIG] DB-API Ingres module ingmod bug.
Message-ID: <3F094108.9050703@nerim.fr>

Hi everybody,

I've found the bug in ingmod.ec which made "tuple index out of range" pop up later in python programs. It was in the sqlda_input_bind(...) function: PySequence_GetItem(seq, num) fails at the end of seq and raises an exception that we must clear, with PyErr_Clear(), before leaving the loop, else it will crop up a little bit later (when a function calls PyErr_Occurred() or something like that). Another way to do it would be to use PySequence_Length(seq) to get the seq's size and work with it.

    if (!(elem = PySequence_GetItem(sequence, num))) {
        PyErr_Clear();
        break;
    }

I sent the source to Holger Meyer (ingmod's original author) for him to review and integrate. As I've currently no means (and no time actually) to set up a place where the package could be downloaded, I attach it to this mail, in case it would be useful to someone.

Regards.

--
Richard Béneyt
rbeneyt.akis@nerim.fr

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ingmod.tar.gz
Type: application/gzip
Size: 19190 bytes
Desc: not available
Url : http://mail.python.org/pipermail/db-sig/attachments/20030707/79d489a5/ingmod.tar.bin

From sorr at rightnow.com  Wed Jul 16 13:36:29 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Wed Jul 16 14:37:02 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

Why the inconsistencies in .fetch* result sets?

Module     fetchall Result Set
---------  -------------------
MySQLdb    tuple of tuples
cx_Oracle  list of tuples
DCOracle2  list of lists

From chris at cogdon.org  Wed Jul 16 12:41:38 2003
From: chris at cogdon.org (Chris Cogdon)
Date: Wed Jul 16 14:41:43 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID: <229AD64A-B7BD-11D7-9C42-000393B658A2@cogdon.org>

On Wednesday, Jul 16, 2003, at 11:36 US/Pacific, Orr, Steve wrote:

> Why the inconsistencies in .fetch* result sets?
>
> Module     fetchall Result Set
> ---------  -------------------
> MySQLdb    tuple of tuples
> cx_Oracle  list of tuples
> DCOracle2  list of lists

The DB-API 2.0 specification specifies that 'sequences' be used. This can mean tuples, lists or any other type that mimics the sequence protocol. For example, pyPgSQL returns a list of PgResultSets, which operate like a sequence, but have other attributes too. Therefore, there is no requirement to specifically use lists or tuples, so the implementer is free to implement how he or she sees fit.

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From sorr at rightnow.com  Wed Jul 16 13:58:53 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Wed Jul 16 14:59:26 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

I know the spec calls for sequences but why? Why be vague on an API "spec?" Intentional inconsistency? So if I'm developing against multiple databases and I want consistent result sets I have to know the behavior of each module and convert types? Seems weird to me.
I'm thinking result sets should always be lists or I should be able to specify how I want the result sets. I know there are lots of differences in database engines and an API can't make all database engines behave the same but it OUGHT to impose SOME consistency.

-----Original Message-----
From: Chris Cogdon [mailto:chris@cogdon.org]
Sent: Wednesday, July 16, 2003 12:42 PM
To: Orr, Steve
Cc: db-sig@python.org
Subject: Re: [DB-SIG] Result Set Inconsistencies

On Wednesday, Jul 16, 2003, at 11:36 US/Pacific, Orr, Steve wrote:

> Why the inconsistencies in .fetch* result sets?
>
> Module     fetchall Result Set
> ---------  -------------------
> MySQLdb    tuple of tuples
> cx_Oracle  list of tuples
> DCOracle2  list of lists

The DB-API 2.0 specification specifies that 'sequences' be used. This can mean tuples, lists or any other type that mimics the sequence protocol. For example, pyPgSQL returns a list of PgResultSets, which operate like a sequence, but have other attributes too. Therefore, there is no requirement to specifically use lists or tuples, so the implementer is free to implement how he or she sees fit.

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From chris at cogdon.org  Wed Jul 16 13:17:14 2003
From: chris at cogdon.org (Chris Cogdon)
Date: Wed Jul 16 15:17:19 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID: <1B5F4BFA-B7C2-11D7-9C42-000393B658A2@cogdon.org>

On Wednesday, Jul 16, 2003, at 11:58 US/Pacific, Orr, Steve wrote:

> I know the spec calls for sequences but why? Why be vague on an API
> "spec?" Intentional inconsistency? So if I'm developing against
> multiple
> databases and I want consistent result sets I have to know the behavior
> of each module and convert types? Seems weird to me.
>
> I'm thinking result sets should always be lists or I should be able
> to specify how I want the result sets.
> I know there are lots of
> differences in database engines and an API can't make all database
> engines behave the same but it OUGHT to impose SOME consistency.

Both lists and tuples follow the sequence specification. Ie, you find out their length, get elements and slices all using the same syntax. For example, where 's' is some sequence:

    length = len(s)
    element_4 = s[4]
    a_slice = s[2:4]
    for i in s:
        print "element", i

All of the preceding work the same regardless of whether 's' is a list, tuple, or a PgResultSet. You don't need to know if they're a list or tuple or anything else in most circumstances.

In other words, it IS consistent, but not the level of consistency you're after :)

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From dyoo at hkn.eecs.berkeley.edu  Wed Jul 16 15:02:40 2003
From: dyoo at hkn.eecs.berkeley.edu (Danny Yoo)
Date: Wed Jul 16 17:03:04 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To: <1B5F4BFA-B7C2-11D7-9C42-000393B658A2@cogdon.org>
Message-ID:

On Wed, 16 Jul 2003, Chris Cogdon wrote:

> > I know the spec calls for sequences but why? Why be vague on an API
> > "spec?" Intentional inconsistency? So if I'm developing against
> > multiple databases and I want consistent result sets I have to know
> > the behavior of each module and convert types? Seems weird to me.
> >
> > I'm thinking result sets should always be lists or I should be able
> > to specify how I want the result sets. I know there are lots of
> > differences in database engines and an API can't make all database
> > engines behave the same but it OUGHT to impose SOME consistency.
>
> Both lists and tuples follow the sequence specification. Ie, you find
> out their length, get elements and slices all using the same syntax.
> For example, where 's' is some sequence.
> length = len(s)
> element_4 = s[4]
> a_slice = s[2:4]
> for i in s:
>     print "element", i
>
> All of the preceding work the same regardless of whether 's' is a list,
> tuple, or a PgResultSet

However, there is a fundamental difference between tuples and lists: they don't compare!

###
>>> (1, 2) == [1, 2]
0
###

In this case, I agree that the standard needs to be specific about what kind of sequence is used to bundle up result sets. It's terrible to think that something innocent like:

###
cursor.execute("select name, email from people")
if ('dyoo', 'dyoo@hkn.eecs.berkeley.edu') in cursor.fetchall():
    print "I'm in there!"
###

won't work in Oracle simply because each result row is a list.

From sorr at rightnow.com  Wed Jul 16 16:50:49 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Wed Jul 16 17:51:26 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

> Both lists and tuples follow the sequence specification.

So do strings...

Lists are mutable so there's more you can do with them.

-----Original Message-----
From: Chris Cogdon [mailto:chris@cogdon.org]
Sent: Wednesday, July 16, 2003 1:17 PM
To: Orr, Steve
Cc: db-sig@python.org
Subject: Re: [DB-SIG] Result Set Inconsistencies

On Wednesday, Jul 16, 2003, at 11:58 US/Pacific, Orr, Steve wrote:

> I know the spec calls for sequences but why? Why be vague on an API
> "spec?" Intentional inconsistency? So if I'm developing against
> multiple databases and I want consistent result sets I have to know
> the behavior of each module and convert types? Seems weird to me.
>
> I'm thinking result sets should always be lists or I should be able
> to specify how I want the result sets. I know there are lots of
> differences in database engines and an API can't make all database
> engines behave the same but it OUGHT to impose SOME consistency.

Both lists and tuples follow the sequence specification. Ie, you find out their length, get elements and slices all using the same syntax.
For example, where 's' is some sequence:

    length = len(s)
    element_4 = s[4]
    a_slice = s[2:4]
    for i in s:
        print "element", i

All of the preceding work the same regardless of whether 's' is a list, tuple, or a PgResultSet. You don't need to know if they're a list or tuple or anything else in most circumstances.

In other words, it IS consistent, but not the level of consistency you're after :)

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From chris at cogdon.org  Wed Jul 16 16:05:48 2003
From: chris at cogdon.org (Chris Cogdon)
Date: Wed Jul 16 18:05:54 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID:

On Wednesday, Jul 16, 2003, at 14:50 US/Pacific, Orr, Steve wrote:

>> Both lists and tuples follow the sequence specification.
> So do strings...
>
> Lists are mutable so there's more you can do with them.

Because the specification doesn't say that the results MUST be lists, there's no guarantee that the results coming from a fetch* are SUPPOSED to be mutable :)

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From chris at cogdon.org  Wed Jul 16 16:08:18 2003
From: chris at cogdon.org (Chris Cogdon)
Date: Wed Jul 16 18:08:23 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID: <01456BF4-B7DA-11D7-9C42-000393B658A2@cogdon.org>

On Wednesday, Jul 16, 2003, at 14:02 US/Pacific, Danny Yoo wrote:

> However, there is a fundamental difference between tuples and lists:
> they don't compare!
>
> ###
> >>> (1, 2) == [1, 2]
> 0
> ###
>
> In this case, I agree that the standard needs to be specific about what
> kind of sequence is used to bundle up result sets. It's terrible to
> think
> that something innocent like:
>
> ###
> cursor.execute("select name, email from people")
> if ('dyoo', 'dyoo@hkn.eecs.berkeley.edu') in cursor.fetchall():
>     print "I'm in there!"
> ###
>
> won't work in Oracle simply because each result row is a list.

I'm sure we're both aware that your example could be MUCH better written with a little more intelligent SQL :) But, yes, I know it's just an example.

For good or bad, implementers have decided to interpret 'sequence' in their own way. pyPgSQL's intelligent 'PgResultSet' is quite flexible (at the sacrifice of speed, as I've pointed out in previous discussions). At this stage of the game, I think forcing everyone to a particular interpretation of 'sequence' is not on the cards, and you should code your applications accordingly. For example:

>>> results = (1,2)   # just for sake of example
>>> results == [1,2]
0
>>> list(results) == [1,2]
1

Ie, if you're doing such direct comparison against lists, make sure you actually have a list you're comparing with. (In this case, converting to tuples might be better.)

--
   ("`-/")_.-'"``-._        Chris Cogdon
    . . `; -._    )-;-,_`)
   (v_,)'  _  )`-.\  ``-'
  _.- _..-_/ /  ((.'
 ((,.-'   ((,/   fL

From sorr at rightnow.com  Wed Jul 16 17:16:34 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Wed Jul 16 18:17:07 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

> However, there is a fundamental difference between tuples and lists

Exactly!!!

> ...the standard needs to be specific about what kind of sequence is
> used to bundle up result sets. It's terrible to think that something
> innocent like:
> ###
> cursor.execute("select name, email from people")
> if ('dyoo', 'dyoo@hkn.eecs.berkeley.edu') in cursor.fetchall():
>     print "I'm in there!"
> ###
> won't work in Oracle simply because each result row is a list.

Here's a quote from the spec: "This API has been defined to encourage similarity between the Python modules that are used to access databases. By doing this, we hope to achieve a consistency leading to more easily understood modules, code that is generally more portable across databases..." Because result sets can either be lists or tuples (or strings?)
the API fails its stated purpose. Retrofitting clarity into the spec and imposing lists or tuples as the only way to handle result sets would be disruptive BUT... Enhancing the spec to require user-configurable result set types could be quite beneficial.

-----Original Message-----
From: Danny Yoo [mailto:dyoo@hkn.eecs.berkeley.edu]
Sent: Wednesday, July 16, 2003 3:03 PM
To: Chris Cogdon
Cc: Orr, Steve; db-sig@python.org
Subject: Re: [DB-SIG] Result Set Inconsistencies

On Wed, 16 Jul 2003, Chris Cogdon wrote:

> > I know the spec calls for sequences but why? Why be vague on an API
> > "spec?" Intentional inconsistency? So if I'm developing against
> > multiple databases and I want consistent result sets I have to know
> > the behavior of each module and convert types? Seems weird to me.
> >
> > I'm thinking result sets should always be lists or I should be
> > able to specify how I want the result sets. I know there are lots of
> > differences in database engines and an API can't make all database
> > engines behave the same but it OUGHT to impose SOME consistency.
>
> Both lists and tuples follow the sequence specification. Ie, you find
> out their length, get elements and slices all using the same syntax.
> For example, where 's' is some sequence.
>
>     length = len(s)
>     element_4 = s[4]
>     a_slice = s[2:4]
>     for i in s:
>         print "element", i
>
> All of the preceding work the same regardless of whether 's' is a
> list, tuple, or a PgResultSet

However, there is a fundamental difference between tuples and lists: they don't compare!

###
>>> (1, 2) == [1, 2]
0
###

In this case, I agree that the standard needs to be specific about what kind of sequence is used to bundle up result sets. It's terrible to think that something innocent like:

###
cursor.execute("select name, email from people")
if ('dyoo', 'dyoo@hkn.eecs.berkeley.edu') in cursor.fetchall():
    print "I'm in there!"
###

won't work in Oracle simply because each result row is a list.
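Short of changing the spec, the inconsistency can be papered over at the application boundary. A small sketch (the helper name and the stand-in cursor are invented for illustration) that makes the membership test above work against any driver:

```python
def normalized_fetchall(cursor):
    """Return the cursor's rows as a plain list of tuples,
    whatever sequence types the DB-API driver chose to return."""
    return [tuple(row) for row in cursor.fetchall()]


# A stand-in for a driver that returns a list of lists (as DCOracle2 does):
class ListRowCursor:
    def fetchall(self):
        return [["dyoo", "dyoo@hkn.eecs.berkeley.edu"]]


rows = normalized_fetchall(ListRowCursor())
print(("dyoo", "dyoo@hkn.eecs.berkeley.edu") in rows)  # True, despite list rows
```

The same wrapper works unchanged for drivers returning tuples of tuples, since tuple(row) is a no-op on a tuple.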
From jacobs at penguin.theopalgroup.com Wed Jul 16 19:52:14 2003
From: jacobs at penguin.theopalgroup.com (Kevin Jacobs)
Date: Wed Jul 16 18:52:50 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID:

On Wed, 16 Jul 2003, Orr, Steve wrote:
> Because result sets can either be lists or tuples (or strings?) the API
> fails its stated purpose.

You don't see the major users of DB-API drivers resonating with this argument, so this should be your hint that your use-case is not as compelling as you think. DB-API is there to enable database access -- not hold your hand or write your applications for you. If you want lists, and demand lists, then add the six extra characters and write:

    rows = list(cursor.fetchall())

If this is too much effort, then I can think of several other ways to make this easier -- and none of them make life more difficult for DB-API driver authors. In this case -- less is more.

-Kevin

--
Kevin Jacobs
The OPAL Group - Enterprise Systems Architect
Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com
Fax: (216) 986-0714 WWW: http://www.theopalgroup.com

From kevin_cazabon at hotmail.com Wed Jul 16 17:01:43 2003
From: kevin_cazabon at hotmail.com (kevin_cazabon@hotmail.com)
Date: Wed Jul 16 19:05:06 2003
Subject: [DB-SIG] Result Set Inconsistencies
References:
Message-ID:

As a lurker here, I whole-heartedly agree that the spec should not be ambiguous here. While it's not a lot of work to add a list() wrapper to calls, it's another potential for bugs, difficulties, and confusion. It also adds overhead to the transaction.

Is it not too much to ask that the next rev. of the spec clarify this point, with a MINIMUM of having a preferred solution (so as to not break older modules, but provide a framework for new ones and updates)? At least that way when people implement an interface they know whether to choose the red pill or the blue one.

Kevin.
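If the `list()` call is considered too easy to forget at each call site, it can be applied once in a thin wrapper around the cursor. The class below is a hypothetical sketch, not anything from the thread, and it uses the modern sqlite3 module (which postdates this discussion) as a stand-in DB-API driver:

```python
import sqlite3

class NormalizingCursor:
    """Hypothetical wrapper: delegates to a real DB-API cursor but
    guarantees fetchall() returns a list of tuples."""
    def __init__(self, cursor):
        self._cursor = cursor

    def __getattr__(self, name):
        # Everything not overridden passes through to the real cursor.
        return getattr(self._cursor, name)

    def fetchall(self):
        return [tuple(row) for row in self._cursor.fetchall()]

conn = sqlite3.connect(":memory:")
cur = NormalizingCursor(conn.cursor())
cur.execute("CREATE TABLE people (name TEXT, email TEXT)")
cur.execute("INSERT INTO people VALUES ('dyoo', 'dyoo@hkn.eecs.berkeley.edu')")
cur.execute("SELECT name, email FROM people")
print(('dyoo', 'dyoo@hkn.eecs.berkeley.edu') in cur.fetchall())  # True
```

The design point is that the normalization cost is paid in exactly one place, which is roughly what the "user-configurable result set types" proposal would standardize.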
----- Original Message ----- From: "Kevin Jacobs" To: "Orr, Steve" Cc: Sent: Wednesday, July 16, 2003 3:52 PM Subject: RE: [DB-SIG] Result Set Inconsistencies > On Wed, 16 Jul 2003, Orr, Steve wrote: > > Because results sets can either be lists or tuples (or strings?) the API > > fails its stated purpose. > > You don't see the major users of DB-API drivers resonating with this > argument, so this should be your hint that your use-case is not as > compelling as you think. DB-API is there to enable database access -- not > hold your hand or write your applications for you. If you want lists, and > demand lists, then add the six extra characters and write: > > rows = list(cursor.fetchall()) > > If this is too much effort, then I can think of a several other ways to make > this easier -- and none of them make life more difficult for DB-API driver > authors. In this case -- less is more. > > -Kevin > > -- > -- > Kevin Jacobs > The OPAL Group - Enterprise Systems Architect > Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com > Fax: (216) 986-0714 WWW: http://www.theopalgroup.com > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig > > From chris at cogdon.org Wed Jul 16 17:16:05 2003 From: chris at cogdon.org (Chris Cogdon) Date: Wed Jul 16 19:16:12 2003 Subject: [DB-SIG] Result Set Inconsistencies In-Reply-To: Message-ID: <798B43DA-B7E3-11D7-9C42-000393B658A2@cogdon.org> On Wednesday, Jul 16, 2003, at 16:01 US/Pacific, wrote: > As a lurker here, I whole-heartedly agree that the spec should not be > ambiguous here. While its not a lot of work to add a list() wrapper to > calls, it's another potential for bugs, difficulties, and confusion. > It > also adds overhead to the transaction. I think you've just hit on a key issue with regards to performance. If the interface writer had to return a particular format, that in itself may add overhead to the transaction. 
It may be better to say that the interface writer can choose the format that gives the best performance because, in most cases[1], the application writer doesn't CARE if lists or tuples or MagicDoohickeySets are being returned.

[1] - As evidenced by this issue not cropping up until now.

-- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL

From sorr at rightnow.com Wed Jul 16 20:01:52 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Wed Jul 16 21:02:25 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

> DB-API is there to enable database access -- not hold your hand or
> write your applications for you.

Oh c'mon. Nobody said anything about writing apps for anyone. I can write wrappers to overcome the inconsistencies of the DB API implementations but the point is that I should not have to and the spec itself agrees with me in its second sentence!!! See Danny Yoo's response! See Kevin Cazabon's response!

Why be timid about the spec? Improvement was obviously needed between 1.0 and 2.0... Are you saying the spec is an immutable type and is perfect as it stands? :-)

Developing on one database engine is easy but when you're developing apps to run on multiple database engines then you need consistency in the API. Perhaps there's a complacency about this because there's not much multi-database development going on. Picture a LARGE app (many lines of code) written to support Oracle, Informix, DB2, SAPDB, PostgreSQL, MySQL InnoDB, etc. and picture having to learn ALL the idiosyncrasies of all the DB API implementations and wrap something around them to minimize database-specific code. Now picture not having to worry about it because the API spec was tighter in the first place.

> ...life more difficult for DB-API driver authors...

This focus is wrong. The USERS of the API are important and should drive the development of the spec to meet their collective needs.
Whether fetches return strings, tuples, or lists shouldn't impact API authors' ability to deliver Python modules that perform well. The first two sentences of the spec need greater emphasis!! See Danny Yoo's response! See Kevin Cazabon's response! Eschew complacency. Continued improvement is the way to go. Or just be timid, accept the status quo, and get busy with kludge work-arounds. ;-) -----Original Message----- From: Kevin Jacobs [mailto:jacobs@penguin.theopalgroup.com] Sent: Wednesday, July 16, 2003 4:52 PM To: Orr, Steve Cc: db-sig@python.org Subject: RE: [DB-SIG] Result Set Inconsistencies On Wed, 16 Jul 2003, Orr, Steve wrote: > Because results sets can either be lists or tuples (or strings?) the > API fails its stated purpose. You don't see the major users of DB-API drivers resonating with this argument, so this should be your hint that your use-case is not as compelling as you think. DB-API is there to enable database access -- not hold your hand or write your applications for you. If you want lists, and demand lists, then add the six extra characters and write: rows = list(cursor.fetchall()) If this is too much effort, then I can think of a several other ways to make this easier -- and none of them make life more difficult for DB-API driver authors. In this case -- less is more. -Kevin -- -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From nathan-db-sig at geerbox.com Wed Jul 16 19:17:59 2003 From: nathan-db-sig at geerbox.com (Nathan Clegg) Date: Wed Jul 16 21:17:52 2003 Subject: [DB-SIG] Result Set Inconsistencies In-Reply-To: References: Message-ID: <16149.63815.539584.603784@jin.int.geerbox.com> Specifications such as the DB API cover a wide variety of applications, engines, and adapters. 
Specifying things in terms of high-level interfaces offers great flexibility to authors at either end of the API while allowing all apps to communicate with all engines at some level. I don't think equality checks, something I have certainly never desired to do in this context, are a very strong argument to restrict adapter authors in their implementations.

>>>>> "Steve" == Orr, Steve writes:

>> DB-API is there to enable database access -- not hold your hand
>> or write your applications for you.

Steve> Oh c'mon. Nobody said anything about writing apps for
Steve> anyone. I can write wrappers to overcome the
Steve> inconsistencies of the DB API implementations but the point
Steve> is that I should not have to and the spec itself agrees
Steve> with me in its second sentence!!!

--
Nathan Clegg

From jacobs at penguin.theopalgroup.com Wed Jul 16 23:59:40 2003
From: jacobs at penguin.theopalgroup.com (Kevin Jacobs)
Date: Wed Jul 16 23:00:14 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID:

On Wed, 16 Jul 2003, Orr, Steve wrote:
> > DB-API is there to enable database access -- not hold your hand or
> > write your applications for you.
>
> Oh c'mon. Nobody said anything about writing apps for anyone. I can
> write wrappers to overcome the inconsistencies of the DB API
> implementations but the point is that I should not have to and the spec
> itself agrees with me in its second sentence!!!

The API authors were very careful about _not_ specifying what kind of sequence type is used. The fact that you want to fill in some extra details implies that your needs differ from those that the authors intended. This is not to say that the spec is perfect, but this flexibility has been appreciated by driver authors and has placed no undue burden on application writers.

> Why be timid about the spec? Improvement was obviously needed between
> 1.0 and 2.0... Are you saying the spec is an immutable type and is
> perfect as it stands? :-)

Never!
I'm deeply unhappy about certain parts of the spec, and have spent a great deal of time thinking about how to improve it meaningfully.

> Developing on one database engine is easy but when you're developing
> apps to run on multiple database engines then you need consistency in
> the API.

My middleware framework currently supports 15 distinct DB-API drivers, and none of them have the precise semantics that I want without glue logic.

> Perhaps there's a complacency about this because there's not
> much multi-database development going on. Picture a LARGE app (many
> lines of code) written to support Oracle, Informix, DB2, SAPDB,
> PostgreSQL, MySQL InnoDB, etc. and picture having to learn ALL the
> idiosyncrasies of all the DB API implementations and wrap something
> around them to minimize database-specific code. Now picture not having
> to worry about it because the API spec was tighter in the first place.

Sounds like every day for me. I maintain over 500k lines of code in several large financial applications that connect to many of the databases you mention above, plus quite a few more. And yes, I do have to worry about driver idiosyncrasies, and frankly, the sequence type of the result set is the least of my worries.

Now if you want to talk about more precise type specifiers, or sensible semantics for bound query arguments, or a uniform type mapping infrastructure, or anything else that really does impact complex and heterogeneous database environments, then you'll find I'm much more interested.
-Kevin -- -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From anthony at interlink.com.au Thu Jul 17 19:36:28 2003 From: anthony at interlink.com.au (Anthony Baxter) Date: Thu Jul 17 04:37:12 2003 Subject: [DB-SIG] Result Set Inconsistencies In-Reply-To: Message-ID: <200307170836.h6H8aSve019460@localhost.localdomain> >>> "Orr, Steve" wrote > I'm thinking results sets should always be a lists or I should be able > to specify how I want the result sets. I know there are lots of > differences in database engines and an API can't make all database > engines behave the same but it OUGHT to impose SOME consistency. What about if someone wanted to make the result set a generator? That way you're not going to be pulling down all the data unless you actually need it... Anthony -- Anthony Baxter It's never too late to have a happy childhood. From msanchez at grupoburke.com Thu Jul 17 12:07:03 2003 From: msanchez at grupoburke.com (=?ISO-8859-1?Q?Marcos_S=E1nchez_Provencio?=) Date: Thu Jul 17 05:07:40 2003 Subject: [DB-SIG] Result Set Inconsistencies In-Reply-To: References: Message-ID: <3F166737.8010403@grupoburke.com> Kevin Jacobs wrote: > On Wed, 16 Jul 2003, Orr, Steve wrote: > >>>DB-API is there to enable database access -- not hold your hand or >>>write your applications for you. >> >>Oh c'mon. Nobody said anything about writing apps for anyone. I can >>write wrappers to overcome the inconsistencies of the DB API >>implementations but the point is that I should not have to and the spec >>itself agrees with me in its second sentence!!! > > > The API authors were very careful about _not_ specifying what kinds of > sequence type. The fact that you want to fill in some extra details implies > that your needs differ from those that the authors intended. 
This is not to
> say that the spec is perfect, but this flexibility has been appreciated by
> driver authors and has placed no undue burden on application writers.

Maybe all we need is to narrow the spec about this point. We could say the resultset is list-compatible and offer suggested improvements (such as named columns, iterators, etc.). Plus metadata to know what level of improvements we have (a-la-paramstyle).

¿Everybody happy about that?

From gh at ghaering.de Thu Jul 17 12:44:33 2003
From: gh at ghaering.de (=?ISO-8859-1?Q?Gerhard_H=E4ring?=)
Date: Thu Jul 17 05:44:38 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To: <200307170836.h6H8aSve019460@localhost.localdomain>
References: <200307170836.h6H8aSve019460@localhost.localdomain>
Message-ID: <3F167001.4080401@ghaering.de>

Anthony Baxter wrote:
>>>> "Orr, Steve" wrote
>>
>> I'm thinking result sets should always be lists or I should be able
>> to specify how I want the result sets. I know there are lots of
>> differences in database engines and an API can't make all database
>> engines behave the same but it OUGHT to impose SOME consistency.
>
> What about if someone wanted to make the result set a generator? That
> way you're not going to be pulling down all the data unless you actually
> need it...

We have that already. Well, almost. The form

    c.execute("select ...")
    for row in c:
        ...

is already an optional DB-API extension according to the spec.
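Anthony's generator idea can also be built on top of any DB-API cursor without driver support, by batching through fetchmany(). The helper name and batch size below are invented for illustration; sqlite3 (a modern module, used here only as a stand-in driver) serves as the demo backend:

```python
import sqlite3

def iter_rows(cursor, arraysize=100):
    # Generator over a cursor: rows are pulled in fetchmany() batches,
    # so nothing beyond the current batch is materialized in memory.
    while True:
        batch = cursor.fetchmany(arraysize)
        if not batch:
            return
        for row in batch:
            yield row

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (n INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(250)])
cur.execute("SELECT n FROM t ORDER BY n")
total = sum(row[0] for row in iter_rows(cur, arraysize=100))
print(total)  # 31125, i.e. sum(range(250))
```

Note that sqlite3's own cursor already implements the optional iteration extension, so `for row in cur` works there directly; the generator is only needed for drivers that predate it.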
Just implement the __iter__ method in the cursor class :)

-- Gerhard

From mal at lemburg.com Thu Jul 17 12:50:04 2003
From: mal at lemburg.com (Marc-Andre Lemburg)
Date: Thu Jul 17 05:50:40 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To: <3F166737.8010403@grupoburke.com>
References: <3F166737.8010403@grupoburke.com>
Message-ID: <3F16714C.6060802@lemburg.com>

Marcos Sánchez Provencio wrote:
> Kevin Jacobs wrote:
>
>> On Wed, 16 Jul 2003, Orr, Steve wrote:
>> The API authors were very careful about _not_ specifying what kinds of
>> sequence type. The fact that you want to fill in some extra details implies
>> that your needs differ from those that the authors intended. This is not to
>> say that the spec is perfect, but this flexibility has been appreciated by
>> driver authors and has placed no undue burden on application writers.
>
> Maybe all we need is to narrow the spec about this point. We could say
> the resultset is list-compatible and offer suggested improvements (such
> as named columns, iterators, etc.). Plus metadata to know what level of
> improvements we have (a-la-paramstyle).
>
> ¿Everybody happy about that?

No need to complicate things: It's easy enough to convert any Python sequence type into any other type you may want. In reality, you rarely care whether the return value from .fetchall() or .fetchmany() is a list, tuple, user-defined sequence type, etc. because you're usually processing the data using iteration and/or indexing.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Software directly from the Source (#1, Jul 17 2003)
>>> Python/Zope Products & Consulting ... http://www.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ...
http://python.egenix.com/
________________________________________________________________________
2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1

From jacobs at penguin.theopalgroup.com Thu Jul 17 07:24:39 2003
From: jacobs at penguin.theopalgroup.com (Kevin Jacobs)
Date: Thu Jul 17 06:25:18 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To: <3F166737.8010403@grupoburke.com>
Message-ID:

On Thu, 17 Jul 2003, Marcos Sánchez Provencio wrote:
> Maybe all we need is to narrow the spec about this point. We could say
> the resultset is list-compatible and offer suggested improvements (such
> as named columns, iterators, etc.). Plus metadata to know what level of
> improvements we have (a-la-paramstyle).
>
> ¿Everybody happy about that?

Take a look at my db_row package -- it provides many of the improvements you mention, but without the need to change the DB-API and also doesn't specify (or care) what sequence type is used. See:

    http://opensource.theopalgroup.com/

-Kevin

--
Kevin Jacobs
The OPAL Group - Enterprise Systems Architect
Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com
Fax: (216) 986-0714 WWW: http://www.theopalgroup.com

From magnus at thinkware.se Thu Jul 17 22:06:21 2003
From: magnus at thinkware.se (Magnus =?iso-8859-1?Q?Lyck=E5?=)
Date: Thu Jul 17 15:00:22 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
Message-ID: <5.2.1.1.0.20030717204031.020cac98@www.thinkware.se>

At Wed, 16 Jul 2003 12:36:29 -0600, "Orr, Steve" wrote:
>Why the inconsistencies in .fetch* result sets?
>
>Module     fetchall Result Set
>---------  -------------------
>MySQLdb    tuple of tuples
>cx_Oracle  list of tuples
>DCOracle2  list of lists

The relaxed API spec is a feature, not a bug, in my opinion. Personally I think a list of tuples (like cx_Oracle uses) embodies the meaning of these types in Python best, but there might be all sorts of implementation aspects that lead interface developers to do it differently.
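A much cruder cousin of the named-column access that db_row provides can be built from the standard cursor.description alone, whose entries the DB-API spec defines as 7-tuples (name, type_code, display_size, internal_size, precision, scale, null_ok). This sketch is not db_row's actual API, just an illustration of the idea, again using the modern sqlite3 module as a stand-in driver:

```python
import sqlite3

def dict_rows(cursor):
    # Zip each fetched row with the column names taken from the
    # standard cursor.description 7-tuples. Trades speed and memory
    # for readability, much as the thread notes for PgResultSet;
    # db_row avoids most of that cost, this sketch does not.
    names = [d[0] for d in cursor.description]
    return [dict(zip(names, row)) for row in cursor.fetchall()]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("SELECT 'dyoo' AS name, 'dyoo@hkn.eecs.berkeley.edu' AS email")
rows = dict_rows(cur)
print(rows[0]["name"])  # dyoo
```

Incidentally, sqlite3 fills only the name field of each description tuple and leaves the other six as None, which is an extreme case of the coarse type information complained about elsewhere in this thread.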
It's also typical for Python to have such relaxed APIs. The late binding and signature-based polymorphism of Python is generally considered a strength, not a liability, and the importance of this flexibility is one reason why it's hard to implement interface specifications for Python. A key to successful Python programming is to not assume more about data than you need and can... So don't overspecify interfaces. It will just lead to reduced flexibility for no good reason. APIs in most other languages are more strict because most other languages are unable to handle the kind of flexibility that Python can handle.

Since the spec only specifies a sequence, you shouldn't make assumptions like Danny Yoo did in his example. Just because a program happens to run in a certain context, we can't assume that the program is correct.

--
Magnus Lycka (It's really Lyckå), magnus@thinkware.se
Thinkware AB, Sweden, www.thinkware.se
I code Python ~ The Agile Programming Language

From sorr at rightnow.com Fri Jul 18 12:09:06 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Fri Jul 18 13:09:38 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

Regarding:
> The relaxed API spec is a feature

Well... I genuinely appreciate the "relaxed robustness" of Python but I don't think this means we should be relaxed about API specs, which should be more exacting. When standards are relaxed they become confusing and degenerate into many colored flavors of interpretation. Remember the beginnings of ODBC and how it had to be tightened up? SQL is a great standard that has had to be tightened up over time and it would have been so much easier if it were more strict and comprehensive from the beginning.

-----Original Message-----
From: Magnus Lyckå
[mailto:magnus@thinkware.se] Sent: Thursday, July 17, 2003 1:06 PM To: db-sig@python.org; db-sig@python.org Subject: Re: [DB-SIG] Result Set Inconsistencies At Wed, 16 Jul 2003 12:36:29 -0600, "Orr, Steve" wrote: >Why the inconsistencies in .fetch* result sets? > >Module fetchall Result Set >--------- ------------------- >MySQLdb tuple of tuples >cx_Oracle list of tuples >DCOracle2 list of lists The relaxed API spec is a feature, not a bug, in my opinion. Personally I think a list of tuples (like cx_Oracle uses) embodies the meaning of these types in Python best, but there might be all sorts of implementation aspects that leads interface developers to do it differently. It's also typical for Python to have such relaxed APIs. The late binding and signature based polymorphism of Python is generally considered a strength, not a liability, and the importance of this flexibility is one reason why it's hard to implement interface specifications for Python. A key in successful Python programming is to not assume more about data than you need and can... So don't overspecify interfaces. It will just lead to reduced flexibility for no good reason. APIs in most other languages are more strict because most other languages are unable to handle the kind of flexibility that Python can handle. Since the spec only specifies a sequence, you shouldn't make assumptions like Danny Yoo did in his example. Just becase a program happens to run in a certain context, we can't assume that the program is correct. 
--
Magnus Lycka (It's really Lyckå), magnus@thinkware.se
Thinkware AB, Sweden, www.thinkware.se
I code Python ~ The Agile Programming Language

_______________________________________________
DB-SIG maillist - DB-SIG@python.org
http://mail.python.org/mailman/listinfo/db-sig

From sorr at rightnow.com Fri Jul 18 12:09:19 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Fri Jul 18 13:09:42 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

Regarding:
> No need to complicate things: It's easy enough to convert any Python
> sequence type into any other type you may want.

Hmmm... Well that's my point. If an API consistently returns the same sequence type then it's less complicated. I guess "complication" depends on your perspective. ;-)

-----Original Message-----
From: Marc-Andre Lemburg [mailto:mal@lemburg.com]
Sent: Thursday, July 17, 2003 3:50 AM
To: db-sig@python.org
Subject: Re: [DB-SIG] Result Set Inconsistencies

Marcos Sánchez Provencio wrote:
> Kevin Jacobs wrote:
>
>> On Wed, 16 Jul 2003, Orr, Steve wrote:
>> The API authors were very careful about _not_ specifying what kinds
>> of sequence type. The fact that you want to fill in some extra
>> details implies that your needs differ from those that the authors
>> intended. This is not to say that the spec is perfect, but this
>> flexibility has been appreciated by driver authors and has placed no
>> undue burden on application writers.
>
> Maybe all we need is to narrow the spec about this point. We could say
> the resultset is list-compatible and offer suggested improvements (such
> as named columns, iterators, etc.). Plus metadata to know what level of
> improvements we have (a-la-paramstyle).
>
> ¿Everybody happy about that?

No need to complicate things: It's easy enough to convert any Python sequence type into any other type you may want. In reality, you rarely care whether the return value from .fetchall() or .fetchmany() is a list, tuple, user-defined sequence type, etc.
because you're usually processing the data using iteration and/or indexing.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Software directly from the Source (#1, Jul 17 2003)
>>> Python/Zope Products & Consulting ... http://www.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________
2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1

_______________________________________________
DB-SIG maillist - DB-SIG@python.org
http://mail.python.org/mailman/listinfo/db-sig

From sorr at rightnow.com Fri Jul 18 12:09:24 2003
From: sorr at rightnow.com (Orr, Steve)
Date: Fri Jul 18 13:09:44 2003
Subject: [DB-SIG] Result Set Inconsistencies
Message-ID:

Thanks for your reply. Whilst I agree that the sequence type of a result set is a relatively small point and is easily overcome, I'm still quizzical about the "philosophy" behind a "flexible" API spec. See my response to the "relaxed API spec is a feature" post.

Regarding the "burden" on the API authors... I don't see it. I looked at some of their Python code and it looked like changing the result set return type would be easy, but I could be wrong.

In a similar vein as regards cx_Oracle vs DCOracle2 result sets... When I select a database column of datatype number, DCOracle2 returns '1' where cx_Oracle returns '1.0' ... What's up with that? Is this a failing of the spec or a failing of the module?

What about cursor.description? It seems like the second element could be more consistent.

OK... It looks like I'm getting ready to start down the road to developing my own "middleware" framework. Any recommendations? (I guess I'll take another look at your db_row package.) As I go down this road I don't want to find myself saying, "Why oh why didn't I take the blue pill?"
;-) -----Original Message----- From: Kevin Jacobs [mailto:jacobs@penguin.theopalgroup.com] Sent: Wednesday, July 16, 2003 9:00 PM To: Orr, Steve Cc: db-sig@python.org Subject: RE: [DB-SIG] Result Set Inconsistencies On Wed, 16 Jul 2003, Orr, Steve wrote: > > DB-API is there to enable database access -- not hold your hand or > > write your applications for you. > > Oh c'mon. Nobody said anything about writing apps for anyone. I can > write wrappers to overcome the inconsistencies of the DB API > implementations but the point is that I should not have to and the > spec itself agrees with me in its second sentence!!! The API authors were very careful about _not_ specifying what kinds of sequence type. The fact that you want to fill in some extra details implies that your needs differ from those that the authors intended. This is not to say that the spec is perfect, but this flexibility has been appreciated by driver authors and has placed no undue burden on application writers. > Why be timid about the spec? Improvement was obviously needed between > 1.0 and 2.0... Are you saying the spec is an immutable type and is > perfect as it stands? :-) Never! I'm deeply unhappy about certain parts of the spec, and have spent a great deal of time thinking about how to improve it meaningfully. > Developing on one database engine is easy but when you're developing > apps to run on multiple database engines then you need consistency in > the API. My middleware framework currently supports 15 distinct DB-API drivers, and none of them have the precise semantics that I want without glue logic. > Perhaps there's a complacency about this because there's not much > multi-database development going on. Picture a LARGE app (many lines > of code) written to support Oracle, Informix, DB2, SAPDB, PostGreSQL, > MySQL InnoDB, etc. and picture having to learn ALL the idiosyncracies > of all the DB API implementations and wrap something around them to > minimize database specific code. 
Now picture not having to worry about > it because the API spec was tighter in the first place. Sounds like every day for me. I maintain over 500k lines of code in several large financial applications that connect to many of the databases you mention above, plus quite a few more. And yes, I do have to worry about driver idiosyncrasies, and frankly, the sequence type of the result set is the least of my worries. Now if you want to talk about more precise type specifiers, or sensible semantics for bound query arguments, or a uniform type mapping infrastructure, or anything else that really does impact complex and heterogenious database enviornments, then you'll find I'm much more interested. -Kevin -- -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From jacobs at penguin.theopalgroup.com Fri Jul 18 14:35:05 2003 From: jacobs at penguin.theopalgroup.com (Kevin Jacobs) Date: Fri Jul 18 13:35:39 2003 Subject: [DB-SIG] Result Set Inconsistencies In-Reply-To: Message-ID: On Fri, 18 Jul 2003, Orr, Steve wrote: > On a similar vein as regards cx_Oracle vs DCOracle2 result sets... > When I select a database column of datatype number, DCOracle2 returns > '1' where cx_Oracle returns '1.0' ... What's up with that? Is this a > failing of the spec or a failing of the module? It could be a failing of the module, because the spec doesn't dictate what to do with data. The spec was written to be almost totally agnostic about the data and type capabilities of the backend database. > What about cursor.description? It seems like the second element could be > more consistent. >From my perspective, the type code is not usually the problem -- it is the DB-API type objects that are too coarse grained. It is important to know if a column is a NUMERIC or a FLOAT type, and to distinguish DATE, TIME, or DATETIME and INTERVAL types. > OK... 
It looks like I'm getting ready to start down the road to
> developing my own "middleware" framework. Any recommendations? (I guess
> I'll take another look at your db_row package.) As I go down this road I
> don't want to find myself saying, "Why oh why didn't I take the blue
> pill?" ;-)

It is very much worth the effort. You get to do things your way, without having to provide general solutions for every corner case. db_row is a good start, because it is orthogonal to many other extensions you may want to add. The only restriction is that it requires the use of at least Python 2.2.3.

I'm also in the process of legally disentangling and open-sourcing large portions of my own database abstraction framework. Hopefully that will also help, even if only by example.

-Kevin

--
Kevin Jacobs
The OPAL Group - Enterprise Systems Architect
Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com
Fax: (216) 986-0714 WWW: http://www.theopalgroup.com

From mcfletch at rogers.com Fri Jul 18 14:57:50 2003
From: mcfletch at rogers.com (Mike C. Fletcher)
Date: Fri Jul 18 13:59:00 2003
Subject: [DB-SIG] Result Set Inconsistencies
In-Reply-To:
References:
Message-ID: <3F18351E.6010301@rogers.com>

Kevin Jacobs wrote:
...
>>OK... It looks like I'm getting ready to start down the road to
>>developing my own "middleware" framework. Any recommendations? (I guess
>>I'll take another look at your db_row package.) As I go down this road I
>>don't want to find myself saying, "Why oh why didn't I take the blue
>>pill?" ;-)
>
>It is very much worth the effort. You get to do things your way, without
>having to provide general solutions for every corner case. db_row is a good
>start, because it is orthogonal to many other extensions you may want to
>add. The only restriction is that it requires the use of at least Python
>2.2.3.
> I've been looking into db_row today, very similar to the DBRow class in wxpytable (my own database abstraction framework), though that's a read-write system, and has no effort made to reduce memory overhead for individual rows. Haven't yet decided whether to adopt db_row, or just transfer a few of the ideas. Does look very nicely designed, anyway. May wind up taking the idea of making result-sets classes (they're stand-alone objects in my systems) at least. >I'm also in the process of legally disentangling and open-sourcing large >portions of my own database abstraction framework. Hopefully that will also >help, even if only by example. > Might help me. I'm currently working on the object-relational part of my system to support my day-job (the company uses my library, but the library is open-sourced, so have to work on it off-hours). Most of what the system provides at the moment is reverse engineering of database schemas from live databases on MySQL or PostgreSQL (i.e. get databases, tables, fields, constraints and indices), a "thick" API level that provides missing features for given database adapters (e.g. cursor.connection), and a fairly extensive set of objects for modelling schemas (either reverse engineered or generated directly, with SQL-generation support). It's also got a few convenience elements here and there. How many middleware systems do we have? Is it possible that we need a Database-API on top of/beside DB-API to start reducing all the duplication, or is there really no common set of functions? Enjoy, Mike _______________________________________ Mike C. 
Fletcher Designer, VR Plumber, Coder http://members.rogers.com/mcfletch/ From fcoutant at freesurf.fr Fri Jul 18 22:26:42 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Fri Jul 18 18:16:21 2003 Subject: [DB-SIG] Getting schemas and other niceties (was: Result Set Inconsistencies) In-Reply-To: <3F18351E.6010301@rogers.com> References: <3F18351E.6010301@rogers.com> Message-ID: <20030718192642.GA4320@harris> Hi Mike, Hi everyone, On Friday, 18 July 2003, you (Mike C. Fletcher) wrote: [...] > library is open-sourced, so have to work on it off-hours). Most of what > the system provides at the moment is reverse engineering of database > schemas from live databases on MySQL or PostgreSQL (i.e. get databases, > tables, fields, constraints and indices), a "thick" API level that > provides missing features for given database adapters (e.g. > cursor.connection), and a fairly extensive set of objects for modelling > schemas (either reverse engineered or generated directly, with > SQL-generation support). It's also got a few convenience elements here > and there. > > How many middleware systems do we have? Is it possible that we need a > Database-API on top of/beside DB-API to start reducing all the > duplication, or is there really no common set of functions? I tried about a year ago to make things move on this subject: I suggested an API extension to obtain and represent schema information. With no luck. At that time I explained that more and more tools want to work on the meta-level, but for lack of a standard API each one implements its own schema-getting methods, incompatible with each other and each supporting a different, limited set of DBMS. You show this is still true today. My general idea is that standard things should be usable in standard ways. This is what an API specification is for: it defines a common way to do standard things, and can leave some doors open to implement specific things.
This idea is implemented in Java and is a big part of its success (this is also why Sun outputs so many API specifications, as new domains emerge). This idea can be applied to several subjects in our DBAPI context:
- all DBMS communicate on behalf of a connection, so we have connection objects.
- there is a SQL standard, so there is a method to execute SQL commands and retrieve results.
- all DBMS allow schema introspection, so we should set an API to do it. (that's what I had tried)
- SQL enforces quoting rules, so we should have a standard-quoting method. Unfortunately, there are some type-specific rules and types here and there, so the quoting method should attach to a particular DB driver / connection. This is actually implemented by the execute* cursor methods; some problems still reside in unclear parameter passing/typing aspects (more in list archives for the past few months, I don't want to re-open the debate here)
- all DBMS have some form of URL string as a single connection argument (think of JDBC's jdbc::... URL scheme); additionally the same parts are more or less always encountered in these URLs: host, port, base, schema, user and password. Maybe we should set a common syntax for the connect method argument(s) across all drivers...
- SQL describes a minimal common set of types, so we should set a common naming for type names/objects in column types of result sets and schemas.
Etc... (anyone want to add ?)
All this would probably imply a new (major?) revision of the DBAPI spec, and would in turn imply updating DB drivers. But not necessarily, if the biggest changes are made optional. As always, some people (like me) are expecting changes, and some don't. As you saw, nothing moved up to now ;-) Conclusion: I also post this to test people's feeling after a year and several debates. Is it time for DBAPI3 (or should it be 2.5) ? Personally I think DBAPI2 is not enough as it is (as you guessed :), as it lacks standardization in several common domains.
NB: for your result-set row type problem, I think specifying a "sequence" in DBAPI is enough. For your comparison you should try to convert the searched element to the row type like this:

    search = ["foo", "bar"]
    rs = cursor.execute('...some SQL...')
    # convert here, if there is at least one row to test its type
    if len(rs) > 0:
        rowSeqType = type(rs[0])
        search = rowSeqType(search)
        # might be tricky (who says doesn't work ?!) on classes that don't
        # have the right constructor
    if search in rs:
        ...

or convert the result set rows:

    rs = map(type(search), rs)

-- Hope this helps, Fabien. From mal at lemburg.com Sat Jul 19 01:37:12 2003 From: mal at lemburg.com (M.-A. Lemburg) Date: Fri Jul 18 18:37:49 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <20030718192642.GA4320@harris> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> Message-ID: <3F187698.1060309@lemburg.com> Fabien COUTANT wrote: >>How many middleware systems do we have? Is it possible that we need a >>Database-API on top of/beside DB-API to start reducing all the >>duplication, or is there really no common set of functions? > > I tried about a year ago to make things move on this subject: I suggested > an API extension to obtain and represent schema information. > With no luck. Perhaps that's because people usually write application specific database abstractions ?! In real life, you only support n different database backends (with n <= 3 in most cases). Writing an application abstraction then boils down to writing a class with methods using DB-API calls and one defining the SQL to be used for each backend. That's not much work and easier to customize/understand/debug/etc than trying to wrap your head around complex overgeneralized object-relational database mapping interfaces. "Practicality beats purity." W/r to the subject line, I think the best workable approach that the industry has come up with is the ODBC approach to schema inspection.
But that's really a DB-API extension (which is only needed by a few application types), so does not have a place in the specification itself. -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 19 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From fcoutant at freesurf.fr Sat Jul 19 14:52:04 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Sat Jul 19 08:10:35 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F187698.1060309@lemburg.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> Message-ID: <20030719115204.GA1296@harris> > >I tried about a year ago to make things move on this subject: I suggested > >an API extension to obtain and represent schema information. > >With no luck. > > Perhaps that's because people usually write application specific > database abstractions ?! ...and they surely do it this way partly because they have specific needs, and partly because they lack support for some standard thing in the API. > In real life, you only support n different database backends > (with n <= 3 in most cases). Writing an application abstraction > then boils down to writing a class with methods using DB-API calls > and one defining the SQL to be used for each backend. I only agree about the end-user class of applications, though in most of my work projects I've seen that an additional layer is not necessary most of the time (only connect/execute/fetch methods are). > That's not much work and easier to customize/understand/debug/etc > than trying to wrap your head around complex overgeneralized > object-relational database mapping interfaces. I only said *standard* *SQL* *related* things should be translated to an API.
OR mappers are a specific thing that depends on target language, object design and underlying data model, and I fully agree there should be nothing about them in a DB access API: they deserve a layer/software of their own. However the data model is where a DB access API is concerned. There is a whole class of software that deserves a schema access API: OR mappers, DB designers / reverse-engineering tools, automatic code and web/GUI form generators, and there are surely others. This class of software may be bigger than you think, and we, as application developers, would have greater choice and benefit if those tools were not limited each to a few DBMS, but instead worked using a common API. Why are there so many Java tools about DB design, OR mapping, code or form generation ? Because JDBC has standardized a schema introspection API ! So why not do the same with Python ? Such an extension would be optional, written only once in a given driver, and would benefit a whole set of tools, so this would actually be easier to do than duplicating/understanding/debugging/etc code in each of those tools. > "Practicality beats purity." As a project leader at work, I totally agree with this. But this does not apply here: I find it absolutely impractical that each meta-level DB tool that wants to support a DB has to write specific code for it, when the feature it needs is standard/ubiquitous in the first place. > W/r to the subject line, I think the best workable approach that > the industry has come up with is the ODBC approach to schema Don't know ODBC, only got my hands on JDBC. I suppose they are similar wrt schema inspection. > inspection. But that's really a DB-API extensions (which is only > needed by a few application types), so does not have a place > in the specification itself. I am indeed speaking of extensions, which are needed by a few software *types*. But it's worth standardizing the APIs for those extensions, and the best place is DB-API.
Make them optional, so that current drivers still conform to the new spec, but at least drivers that want to implement the extensions will have an API to conform to. -- Hope this helps, Fabien. From mal at lemburg.com Sat Jul 19 16:16:56 2003 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat Jul 19 09:17:31 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <20030719115204.GA1296@harris> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> Message-ID: <3F1944C8.7020005@lemburg.com> Fabien COUTANT wrote: > OR mappers are a specific thing that depend on target language, object > design and underlying data model, and I fully agree there should be nothing > about them in a DB access API: they deserve an layer/software on their > own. > > Howver the data model is where a DB access API is concerned. There is a > whole class of software that deserves a schema access API: OR mappers, DB > designers / reverse-engineering tools, automatic code and web/GUI forms > generator, and there are surely others. > > This class of software may be bigger than you think, and we, as application > developers, would have greater choice and benefit if those tools were not > limited each to a few DMBS, but instead worked using a common API. Why are > there so many Java tools about DB design, OR mapping, code or form > generation ? Because JDBC has standardized a schema introspection API ! > So why not doing the same with Python ? Uhm, we already have such an interface: mxODBC provides these interfaces for CPython and zxJDBC for Jython. > Such an extension would be optional, written only once in a given driver, > and would benefit a whole set of tools, so this would actually be easier > to do than duplicating/understanding/debugging/etc code in each of those > tools. > ... 
>>W/r to the subject line, I think the best workable approach that >>the industry has come up with is the ODBC approach to schema > > Don't know ODBC, only got my hands on JDBC. I suppose they are similar wrt > schema inspection. Have a look at http://www.egenix.com/files/python/mxODBC.pdf for a list of cursor level APIs for DB introspection. ODBC has had these for a long time and they have proven to provide everything you need for the class of software you describe above. zxJDBC provides the same set of APIs on top of JDBC. If we add optional DB-API extensions, then I'd suggest to simply go with the set defined in the mxODBC docs. -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 19 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From fog at initd.org Sat Jul 19 15:25:37 2003 From: fog at initd.org (Federico Di Gregorio) Date: Sat Jul 19 10:25:39 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F1944C8.7020005@lemburg.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> Message-ID: <1058627791.3939.21.camel@localhost> On Sat, 2003-07-19 at 15:16, M.-A. Lemburg wrote: [snip] > > generation ? Because JDBC has standardized a schema introspection API ! > > So why not doing the same with Python ? > > Uhm, we already have such an interface: mxODBC provides these > interfaces for CPython and zxJDBC for Jython. sorry, no. mxODBC != dbapi. the fact that mxODBC provides some extensions to the dbapi does not mean we have such an interface.
-- Federico Di Gregorio Debian GNU/Linux Developer fog@debian.org INIT.D Developer fog@initd.org Human beings are sometimes destined, by the mere fact of existing, to hurt someone. -- Haruki Murakami -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This part of the message is signed Url : http://mail.python.org/pipermail/db-sig/attachments/20030719/e38abbc0/attachment.bin From fcoutant at freesurf.fr Sat Jul 19 18:58:46 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Sat Jul 19 12:03:15 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F1944C8.7020005@lemburg.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> Message-ID: <20030719155846.GA1565@harris> On Saturday, 19 July 2003, you (M.-A. Lemburg) wrote: [...] > Have a look at http://www.egenix.com/files/python/mxODBC.pdf > for a list of cursor level APIs for DB introspection. ODBC has > had these for a long time and they have proven to provide > everything you need for the class of software you describe > above. zxJDBC provides the same set of APIs on top of JDBC. I have just finished reading it. Interesting reading. I don't agree on everything written in it, but that's another story. A few remarks: - I read that mxODBC is not 100% DB-API compliant, as some meta-information (column sizes) are not available; I consider this a failure of ODBC (not your package) since the info is present and fetchable from databases. In JDBC column sizes *are* available. - your doc is not complete by itself, because it refers to MS's ODBC docs where constants and meta-info are used or returned ("getinfo" connection's method). > If we add optional DB-API extensions, then I'd suggest to > simply go with the set defined in the mxODBC docs.
As I understand you *don't* suggest that db-api is useless and we should all use mxODBC, but rather that we should mimic mxODBC's introspecting API into DB-API. Why not... I like the idea of your cursor's "catalog" methods. I summarize for others:
- connection objects have some meta-data as r/o attributes: DBMS name and version, driver name and version (plus others more specific to ODBC)
- cursor objects have a set of methods that return result sets (i.e. sequences of sequences in DB-API interpretation) for meta-information about DB structure and access rights (I intentionally omit arguments for clarity):
  - tables()
  - tableprivileges()
  - columns()
  - columnprivileges()
  - primarykeys()
  - foreignkeys()
  - procedures()
  - procedurecolumns()
I'm ok for the concept, but I see a few things that should be taken into account to accomplish integration into DB-API:
- ODBC, its API and your document are copyrighted material (respectively by MS and egenix), so we must invent DB-API's own naming and representation of meta-data.
- SQL-level representation should be returned (such as the already declared type codes used in the cursor's description attribute) instead of byte streams or DB/ODBC specific codes. This does not prevent specific types to be added, I just mean standard "codes" must be used for standard types.
- columns in result sets corresponding to features of standard SQL (column name, type, size, unique, nullable, ...) should be made first and mandatory (but would allow for None values in some specified columns, as a provision for DBMS that don't support the feature). We have to carefully select the columns that fall into this category.
- columns returned in such result sets would not be bound by the specification, but could be extended to include other driver/DB specific infos, as long as mandatory infos are here in the first columns.
[I stop here for now, want to see others' reactions] -- Hope this helps, Fabien.
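[Editorial note: the cursor-level catalog methods summarized in the message above can be sketched in a few lines of Python. This is a hypothetical mock for illustration only — the method names follow the mxODBC/ODBC list quoted in the thread, but the column layout of each result set and the toy schema are assumptions, not part of any published spec.]

```python
# Hypothetical sketch of the proposed catalog-method extension.
# Method names (tables, columns, primarykeys) follow the list in the
# message above; every result-set layout below is an assumption.

class CatalogCursor:
    """Mock cursor exposing ODBC-style catalog methods as DB-API result sets."""

    # Toy schema: {table: [(column_name, type_code, nullable), ...]}
    _schema = {
        "person": [("id", "NUMBER", False), ("name", "STRING", True)],
        "address": [("id", "NUMBER", False), ("city", "STRING", True)],
    }

    def tables(self):
        # One row per table: (catalog, schema, table_name, table_type)
        return [(None, None, name, "TABLE") for name in sorted(self._schema)]

    def columns(self, table=None):
        # One row per column: (table_name, column_name, type_code, nullable)
        names = [table] if table else sorted(self._schema)
        return [(t, col, code, nullable)
                for t in names
                for (col, code, nullable) in self._schema[t]]

    def primarykeys(self, table):
        # One row per key column: (table_name, column_name, key_seq)
        return [(table, col, i + 1)
                for i, (col, code, nullable) in enumerate(self._schema[table])
                if not nullable]

cur = CatalogCursor()
print([row[2] for row in cur.tables()])
print(cur.columns("person"))
print(cur.primarykeys("person"))
```

The point of the sketch is that each catalog method returns an ordinary DB-API result set (a sequence of sequences), so existing fetch-style consumers need no new machinery.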
From mcfletch at rogers.com Sat Jul 19 13:37:58 2003 From: mcfletch at rogers.com (Mike C. Fletcher) Date: Sat Jul 19 12:39:16 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F1944C8.7020005@lemburg.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> Message-ID: <3F1973E6.3080907@rogers.com> M.-A. Lemburg wrote: > Fabien COUTANT wrote: ... >> Howver the data model is where a DB access API is concerned. There is a >> whole class of software that deserves a schema access API: OR >> mappers, DB >> designers / reverse-engineering tools, automatic code and web/GUI forms >> generator, and there are surely others. > ... > Uhm, we already have such an interface: mxODBC provides these > interfaces for CPython and zxJDBC for Jython. Well, technically, *you*, and your clients, of course, have it just now ;) . I'm gathering you're offering that definition/doc as a reference "spec" for a standardisation effort? >> Such an extension would be optional, written only once in a given >> driver, >> and would benefit a whole set of tools, so this would actually be easier >> to do than duplicating/understanding/debugging/etc code in each of those >> tools. > > > ... > >>> W/r to the subject line, I think the best workable approach that >>> the industry has come up with is the ODBC approach to schema >> >> >> Don't know ODBC, only got my hands on JDBC. I suppose they are >> similar wrt >> schema inspection. > Very similar indeed, basically the same tables getting returned AFAICS. >> > > Have a look at http://www.egenix.com/files/python/mxODBC.pdf > for a list of cursor level APIs for DB introspection. ODBC has > had these for a long time and they have proven to provide > everything you need for the class of software you describe > above. zxJDBC provides the same set of APIs on top of JDBC. 
The APIs seem adequate, though it's not clear from that document, for instance, how multi-field foreign-key references work (I assume they're supposed to create two rows in the foreign-key table). BTW, all of the links to the ODBC docs on MSDN that I tried failed. Questions: What is specialcolumns supposed to do? It appears to get primary keys which may be "special" in some way. Why is the index-retrieval method called statistics? Yes, it appears to include statistics, but that seems secondary to me. I don't currently have any code for describing procedures/functions from the database; that would need to be created. Similarly I don't currently pull the type descriptions out of the databases. > If we add optional DB-API extensions, then I'd suggest to > simply go with the set defined in the mxODBC docs. Well, most of the system-catalog query stuff seems doable from PostgreSQL and MySQL (it's basically the same information as I'm reverse-engineering into an object-based schema description, though the mxODBC stuff has a considerable number of fields I don't yet extract). Out of curiosity, do people actually use the table-based formats for real work? Or do they just parse the tables to create objects describing the schemas? DB-Catalog-API anyone? Mike _______________________________________ Mike C. Fletcher Designer, VR Plumber, Coder http://members.rogers.com/mcfletch/ From jacobs at penguin.theopalgroup.com Sat Jul 19 15:09:47 2003 From: jacobs at penguin.theopalgroup.com (Kevin Jacobs) Date: Sat Jul 19 14:10:36 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <20030719155846.GA1565@harris> Message-ID: On Sat, 19 Jul 2003, Fabien COUTANT wrote: > I'm ok for the concept, but I see a few things that should be taken into > account to accomplish integration into DB-API: > > - ODBC, its API and your document are copyrighted material (respectively by > MS and egenix), so we must invent DB-API's own naming and > representation of meta-data.
No -- just our own documentation that may also point at mxODBC and MS ODBC. However, we may choose to deviate from these anyway, since Python is more expressive about certain concepts than the C ODBC and the Java JDBC APIs. > - SQL-level representation should be returned (such as the already declared > type codes used in cursors description attribute) instead of byte > streams or DB/ODBC specific codes. > This does not prevent specific types to be added, I just mean standard > "codes" must be used for standard types. This is a dangerously ambiguous statement. What I _think_ you mean is that you want canonical representations (not SQL representations) instead of RDBMS-native binary values. What you do not want are SQL literal representations, at least not by default. > - columns in result sets corresponding to features of standard SQL (column > name, type, size, unique, nullable, ...) should be made first and > mandatory (but would allow for None values in some specified columns, > as a provision for DBMS that don't support the feature). We have to > carefully select the columns that fall into this category. First and mandatory? Why enforce an ordinal relationship among attributes of a given column? The existing description tuple concept is simply outdated and needs to be replaced, not kludged with extensions. > - columns returned in such result sets would not be bound by the > specification, but could be extended to include other driver/DB > specific infos, as long as mandatory infos are here in the first > columns. Sure -- this is a complex way of saying that some attributes will be mandatory, but many extensions are possible and should be naturally accommodated. Again, a good reason to not pass back an ordinal description object. My wish list includes a structured schema introspection API that does not intrude with the interface of cursor and connection objects. 
i.e., I'd like to see a connection.schema property or method, but not a multitude of new methods added to the connection and cursor interfaces. -Kevin -- -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From mal at lemburg.com Sun Jul 20 00:09:59 2003 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat Jul 19 15:59:49 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F1973E6.3080907@rogers.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> <3F1973E6.3080907@rogers.com> Message-ID: <3F19B3A7.30504@lemburg.com> Mike C. Fletcher wrote: > M.-A. Lemburg wrote: >> Uhm, we already have such an interface: mxODBC provides these >> interfaces for CPython and zxJDBC for Jython. > > Well, technically, *you*, and your clients, of course, have it just now > ;). I'm gathering you're offering that definition/doc as a reference > "spec" for a standardisation effort? Yes. I'm proposing to go the ODBC way here, because folks have already put a lot of work into this, so reinventing the wheel can effectively be prevented :-) >> Have a look at http://www.egenix.com/files/python/mxODBC.pdf >> for a list of cursor level APIs for DB introspection. ODBC has >> had these for a long time and they have proven to provide >> everything you need for the class of software you describe >> above. zxJDBC provides the same set of APIs on top of JDBC. > > The APIs seem adequate, though it's not clear from that document, for > instance, how multi-field foreign-key references work (I assume they're > supposed to create two rows in the foreign-key table). BTW, all of the > links to the ODBC docs on MSDN that I tried failed. Oh well... so they changed the URLs again :-/ MS has the tendency to change the web-site structure every few months.
It's hard to keep the links up to date. > Questions: > > What is specialcolumns supposed to do? It appears to get primary > keys which may be "special" in some way. These special columns can be used to uniquely identify a row within the table, i.e. the row id column name or a primary key column if nothing else is available. It may even return multiple columns, which you'd then have to query together in order to identify a row in the table if the underlying data source does not have the concept of a row id. > Why is the index-retrieval method called statistics? Yes, it appears > to include statistics, but that seems secondary to me. Ask MS :-) > I don't currently have any code for describing procedures/functions from > the database, that would need to be created. Similarly I don't > currently pull the type descriptions out of the databases. > >> If we add optional DB-API extensions, then I'd suggest to >> simply go with the set defined in the mxODBC docs. > > Well, most of the system-catalog query stuff seems doable from > PostgreSQL and MySQL (it's basically the same information as I'm > reverse-engineering into an object-based schema description, though the > mxODBC stuff has a considerable number of fields I don't yet extract). > Out of curiousity, do people actually use the table-based formats for > real work? Or do they just parse the tables to create objects > describing the schemas? Most of the time, the schema inspection is a one time operation which then sets up some internal data structure for use in the abstraction layers. -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 19 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From mal at lemburg.com Sun Jul 20 00:19:55 2003 From: mal at lemburg.com (M.-A. Lemburg) Date: Sat Jul 19 16:09:45 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <20030719155846.GA1565@harris> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> <20030719155846.GA1565@harris> Message-ID: <3F19B5FB.5000606@lemburg.com> Fabien COUTANT wrote: > On Saturday, 19 July 2003, you (M.-A. Lemburg) wrote: > [...] > >>Have a look at http://www.egenix.com/files/python/mxODBC.pdf >>for a list of cursor level APIs for DB introspection. ODBC has >>had these for a long time and they have proven to provide >>everything you need for the class of software you describe >>above. zxJDBC provides the same set of APIs on top of JDBC. > > I have just finished reading it. Interesting reading. I don't agree on > everything written in it, but that's another story. > > A few remarks: > - I read that mxODBC is not 100% DB-API compliant, as some meta-information > (column sizes) are not available; I consider this is a failure of ODBC > (not your package) since the info is present and fetchable from > databases. In JDBC column sizes *are* available. mxODBC 2.0 is 100% DB-API compliant. If you read the spec carefully, you'll find that things like column sizes (which were really only useful in the days of mainframe text based UIs) are optional. Rather than spending time to query this information from the database for each and every query, I chose to leave it to the user to call the appropriate catalog methods instead. In practice you'll also find that the information from the catalog methods is more reliable than the data available at query time.
> - your doc is not complete by itself, because it refers to MS's ODBC > docs where constants and meta-info are used or returned ("getinfo" > connection's method). True, people using these methods are usually experts and don't have trouble looking up the data in the MS docs. After all, we didn't want to reauthor the entire ODBC spec :-) >>If we add optional DB-API extensions, then I'd suggest to >>simply go with the set defined in the mxODBC docs. > > As I understand you *don't* suggest that db-api is useless and we should > all use mxODBC, but rather that we should mimic mxODBC's introspecting API > into DB-API. Both actually :-) No, seriously, I'm only suggesting that if we consider adding methods for introspection, then we should follow the existing sets in mxODBC and zxJDBC. > Why not... I like the idea of your cursor's "catalog" methods. I > synthetize for others: > - connection objects have some meta-data as r/o attributes: DBMS name and > version, driver name and version (plus others more specific to ODBC) > - cursor objects have a set of methods that return result sets (i.e. > sequences of sequences in DB-API interpretation) for meta-information > about DB structure and access rights (I intentionally omit arguments > for clarity): > - tables() > - tableprivileges() > - columns() > - columnprivileges() > - primarykeys() > - foreignkeys() > - procedures() > - procedurecolumns() > > I'm ok for the concept, but I see a few things that should be taken into > account to accomplish integration into DB-API: > > - ODBC, its API and your document are copyrighted material (respectively by > MS and egenix), so we must invent DB-API's own naming and > representation of meta-data. I'm the editor of the DB API spec and don't have a problem with putting some of our docs in the public domain. > - SQL-level representation should be returned (such as the already declared > type codes used in cursors description attribute) instead of byte > streams or DB/ODBC specific codes. 
> This does not prevent specific types to be added, I just mean standard > "codes" must be used for standard types. No problem with that as long as we define name-based codes rather than hard-code the values into the spec. > - columns in result sets corresponding to features of standard SQL (column > name, type, size, unique, nullable, ...) should be made first and > mandatory (but would allow for None values in some specified columns, > as a provision for DBMS that don't support the feature). We have to > carefully select the columns that fall into this category. I'd rather not change the layout of the result sets. Adding new columns is OK though (ODBC allows this too). > - columns returned in such result sets would not be bound by the > specification, but could be extended to include other driver/DB > specific infos, as long as mandatory infos are here in the first > columns. Right. -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 19 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From fcoutant at freesurf.fr Sun Jul 20 10:28:09 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Sun Jul 20 03:28:52 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: References: <20030719155846.GA1565@harris> Message-ID: <20030720072809.GB660@harris> On Saturday, 19 July 2003, you (Kevin Jacobs) wrote: > > - ODBC, its API and your document are copyrighted material (respectively by > > MS and egenix), so we must invent DB-API's own naming and > > representation of meta-data. > No -- just our own documentation that may also point at mxODBC and MS ODBC. 
> However, we may choose to deviate from these anyway, since Python is more > expressive about certain concepts than the C ODBC and the Java JDBC APIs. Why link/attach to an external specification source that is commercial and independent of the DB-SIG? If we are to do that, I would prefer we refer to some JDBC version, which would be more stable (in location, notably ;-). In fact it may even be better to attach the referenced spec document to the DB-API spec so that no location problems are possible. > > - SQL-level representation should be returned (such as the already declared > > type codes used in cursors description attribute) instead of byte > This is a dangerously ambiguous statement. What I _think_ you mean is that > you want canonical representations (not SQL representations) instead of > RDBMS-native binary values. What you do not want are SQL literal > representations, at least not by default. Yes, exactly. I want types returned as driver.STRING, driver.BINARY, driver.NUMBER etc. > > - columns in result sets corresponding to features of standard SQL (column > > name, type, size, unique, nullable, ...) should be made first and > First and mandatory? Why enforce an ordinal relationship among attributes > of a given column? The existing description tuple concept is simply > outdated and needs to be replaced, not kludged with extensions. To be useful there must be a way to locate specific columns (e.g. by column name). I simply suggested an idea that was uniform with the existing DB-API cursor description attribute. Alternatively, since we are speaking of an extension that does not collide with the existing spec, we could choose a new mechanism to identify columns in result sets, such as: - fixing once and for all mandatory column names, e.g. "COLUMN_NAME" when retrieving the list of columns, or - setting a member attribute of the driver/connection with the position or name of the column in the result set. E.g.
db.COLUMN_NAME == 3 (this is an example; of course we should not leave the choice open as to whether these are names or positions) > My wish list includes a structured schema introspection API that does not > intrude on the interface of cursor and connection objects. i.e., I'd like > to see a connection.schema property or method, but not a multitude of new > methods added to the connection and cursor interfaces. Agreed. Again this is going towards JDBC's API, whereby there is a getMetaData() method on the connection, which returns a DatabaseMetaData object that holds all the other meta-info methods and attributes. -- Hope this helps, Fabien. From fcoutant at freesurf.fr Sun Jul 20 10:23:50 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Sun Jul 20 04:01:51 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <3F19B5FB.5000606@lemburg.com> References: <3F18351E.6010301@rogers.com> <20030718192642.GA4320@harris> <3F187698.1060309@lemburg.com> <20030719115204.GA1296@harris> <3F1944C8.7020005@lemburg.com> <20030719155846.GA1565@harris> <3F19B5FB.5000606@lemburg.com> Message-ID: <20030720072349.GA660@harris> On Saturday, 19 July 2003, you (M.-A. Lemburg) wrote: > mxODBC 2.0 is 100% DB-API compliant. If you read the spec carefully, > you'll find that things like column sizes (which were really only > useful in the days of mainframe text based UIs) are optional. Right, I overlooked this! The description attribute enforces a structure of 7-item sequences, but says nothing about the type and meaning of the elements (or should we consider element names enough for experts? :-), except for type_code. > have trouble looking up the data in the MS docs. After all, we > didn't want to reauthor the entire ODBC spec :-) If we refer to an external spec, we might want to Pythonize it, or minimize the set of mandatory attributes/methods to simplify driver writers' jobs. WRT meaning, I'm OK with external references.
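The connection.schema / getMetaData() idea discussed above might look roughly like this in Python. This is a purely hypothetical sketch: the names SchemaInfo, tables() and columns(), and the JDBC-style result keys, are invented for illustration and are not part of the DB-API spec or any shipping driver.

```python
class SchemaInfo:
    """Hypothetical container for introspection results, kept off the
    cursor/connection interfaces as suggested in the thread."""
    def __init__(self, catalog):
        # catalog: {table_name: [(column_name, type_name), ...]}
        self._catalog = catalog

    def tables(self):
        return sorted(self._catalog)

    def columns(self, table):
        # Fixed, documented keys, JDBC-style (TABLE_NAME, COLUMN_NAME, ...)
        return [{'TABLE_NAME': table, 'COLUMN_NAME': name, 'TYPE_NAME': tname}
                for name, tname in self._catalog[table]]


class Connection:
    """Stands in for a DB-API connection object; only .schema is shown."""
    def __init__(self, catalog):
        self._schema = SchemaInfo(catalog)

    @property
    def schema(self):
        return self._schema


conn = Connection({'invoice': [('id', 'INT4'), ('total', 'NUMERIC')]})
print(conn.schema.tables())                              # ['invoice']
print(conn.schema.columns('invoice')[0]['COLUMN_NAME'])  # id
```

Addressing result columns by fixed key names, rather than by position, is exactly the alternative to extending the ordinal description tuple.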
> >- ODBC, its API and your document are copyrighted material (respectively by > I'm the editor of the DB API spec and don't have a problem with > putting some of our docs in the public domain. Glad to hear it. > > >- SQL-level representation should be returned (such as the already declared > > type codes used in cursors description attribute) instead of byte > No problem with that as long as we define name-based codes rather > than hard-code the values into the spec. Agreed. I never intended to hard-code anything in the spec :) > >- columns in result sets corresponding to features of standard SQL (column > > name, type, size, unique, nullable, ...) should be made first and > I'd rather not change the layout of the result sets. Adding new > columns is OK though (ODBC allows this too). This is being debated in another post... -- Hope this helps, Fabien. From jacobs at penguin.theopalgroup.com Sun Jul 20 10:41:17 2003 From: jacobs at penguin.theopalgroup.com (Kevin Jacobs) Date: Sun Jul 20 09:41:57 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: <20030720072809.GB660@harris> Message-ID: On Sun, 20 Jul 2003, Fabien COUTANT wrote: > On Saturday, 19 July 2003, you (Kevin Jacobs) wrote: > > > - SQL-level representation should be returned (such as the already declared > > > type codes used in cursors description attribute) instead of byte > > This is a dangerously ambiguous statement. What I _think_ you mean is that > > you want canonical representations (not SQL representations) instead of > > RDBMS-native binary values. What you do not want are SQL literal > > representations, at least not by default. > > Yes, exactly. I want types returned as driver.STRING, driver.BINARY, > driver.NUMBER etc. This causes tremendous information loss with the current DB-API type objects. I _need_ to know if the driver.NUMBER is an SQL NUMERIC or a FLOAT, or an INT8 or INT32 or INT64.
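The information loss being described follows from the standard DB-API "type object" construction, in which one coarse module-level object compares equal to many native type codes (PEP 249 sketches exactly this class). The native codes below are invented for illustration; no particular driver is being quoted.

```python
class DBAPITypeObject:
    """One coarse type object that compares equal to several
    backend-native type codes."""
    def __init__(self, *type_codes):
        self.type_codes = frozenset(type_codes)

    def __eq__(self, other):
        return other in self.type_codes

    def __hash__(self):
        return hash(self.type_codes)


# Hypothetical native type codes for some driver:
NUMBER = DBAPITypeObject('int2', 'int4', 'int8', 'float4', 'numeric')
STRING = DBAPITypeObject('char', 'varchar', 'text')

print(NUMBER == 'int8')     # True
print(NUMBER == 'numeric')  # True -- the INT8/NUMERIC distinction is gone
print(STRING == 'int8')     # False
```

The equality trick is what makes `type_code == NUMBER` convenient in application code, and also what collapses NUMERIC, FLOAT and the integer widths into one bucket.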
> > > - columns in result sets corresponding to features of standard SQL (column > > > name, type, size, unique, nullable, ...) should be made first and > > First and mandatory? Why enforce an ordinal relationship among attributes > > of a given column? The existing description tuple concept is simply > > outdated and needs to be replaced, not kludged with extensions. > > To be useful there must be a way to find back precise columns (e.g. column > name). I'm not suggesting the removal of ordinal addressing of columns -- just the ordinal relationship between column schemas/descriptions. i.e., extending the existing 7 element tuple is a silly thing to do. Thus there will still be column values 0..n-1 for each tuple, since this is at the heart of the relational model. Column names should still be optional, since many RDBMS will return unpredictable names (or none at all) for many types of expressions. -Kevin -- -- Kevin Jacobs The OPAL Group - Enterprise Systems Architect Voice: (216) 986-0710 x 19 E-mail: jacobs@theopalgroup.com Fax: (216) 986-0714 WWW: http://www.theopalgroup.com From fcoutant at freesurf.fr Mon Jul 21 10:31:11 2003 From: fcoutant at freesurf.fr (Fabien COUTANT) Date: Mon Jul 21 03:10:54 2003 Subject: [DB-SIG] Getting schemas and other niceties In-Reply-To: References: Message-ID: <30586.217.167.52.114.1058772671.squirrel@arlette.freesurf.fr> > On Sun, 20 Jul 2003, Fabien COUTANT wrote: >> Yes, exactly. I want types returned as driver.STRING, driver.BINARY, >> driver.NUMBER etc. > This causes tremendous information loss with the current DB-API type > objects. I _need_ to know if the driver.NUMBER is an SQL NUMERIC or a > FLOAT, or an INT8 or INT32 or INT64. Returning the exact type is possible too. In fact the current mxODBC package returns both a type code and a type name. I was speaking of the type code, you are speaking of the type name, so this is compatible. In fact I agree that we need the exact DB-specific type name too.
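The "both a type code and a type name" compromise could surface roughly as follows. This is a sketch only: the field names follow the standard DB-API 7-item description sequence, but the named tuple and the separate native-name channel are invented for illustration, not taken from any driver.

```python
from collections import namedtuple

# The standard DB-API description 7-tuple, given names for readability:
Column = namedtuple('Column',
                    'name type_code display_size internal_size '
                    'precision scale null_ok')

desc = Column(name='total', type_code='NUMBER', display_size=None,
              internal_size=8, precision=12, scale=2, null_ok=True)

# The coarse DB-API code and the exact backend type travel together;
# the mapping below is an invented channel for the native name:
native_type = {'total': 'NUMERIC(12,2)'}
print(desc.type_code, native_type[desc.name])   # NUMBER NUMERIC(12,2)
```

Application code that only needs "is this a number?" keeps using the coarse code, while schema tools can reach for the exact backend name.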
> I'm not suggesting the removal of ordinal addressing of columns -- just > the ordinal relationship between column schemas/descriptions. i.e., > extending the existing 7 element tuple is a silly thing to do. Thus > there will still be column values 0..n-1 for each tuple, since this > is at the heart of the relational model. Column names should still be > optional, since many RDBMS will return unpredictable names (or none at > all) for many types of expressions. There must be some misunderstanding here... We were speaking of the columns returned in result sets by schema introspection methods, not of cursors' description attribute in general. Those columns have to be defined in some way; for example, JDBC defines fixed names and meanings such as TABLE_CAT, TABLE_NAME, COLUMN_NAME, etc. More generally I think we shouldn't touch the existing API, only add an extension (I guess everyone already agrees on this). -- Hope this helps, Fabien. From romerchat at hotmail.com Wed Jul 23 17:27:44 2003 From: romerchat at hotmail.com (Reuven Abliyev) Date: Thu Jul 24 05:11:46 2003 Subject: [DB-SIG] UNICODE Message-ID: I'm using ACCESS & ADO. The database contains some data in Hebrew. Connect to DB - OK. Getting a Recordset - OK. BUT when I'm trying to load the data I've got into a COMBOBOX I'm getting this message: UnicodeError: ASCII encoding error: ordinal not in range(128) How can I solve this? From mal at lemburg.com Thu Jul 24 12:37:51 2003 From: mal at lemburg.com (M.-A.
Lemburg) Date: Thu Jul 24 05:38:25 2003 Subject: [DB-SIG] UNICODE In-Reply-To: References: Message-ID: <3F1FA8EF.6000105@lemburg.com> Reuven Abliyev wrote: > I using ACCESS & ADO > Database contains some data in Hebrew > connect to DB - OK > getting Recordset - OK > BUT > when I'm trying to load data I've got into COMBOBOX > I'm getting this message: > UnicodeError: ASCII encoding error: ordinal not in range(128) > > How can I solve this Not sure how you can solve this with ADO, but mxODBC can be set up to read Unicode directly from the database: To have mxODBC connections run in native Unicode mode you have to set them up using the .stringformat attribute: connection = mx.ODBC.Windows.DriverConnect(...) connection.stringformat = mx.ODBC.Windows.NATIVE_UNICODE_STRINGFORMAT # or connection.stringformat = mx.ODBC.Windows.MIXED_STRINGFORMAT For more information, have a look at the data types section and the connection.stringformat documentation: http://www.egenix.com/files/python/mxODBC.html#Datatypes -- Marc-Andre Lemburg eGenix.com Professional Python Software directly from the Source (#1, Jul 24 2003) >>> Python/Zope Products & Consulting ... http://www.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 From romerchat at hotmail.com Thu Jul 24 14:05:36 2003 From: romerchat at hotmail.com (Reuven Abliyev) Date: Thu Jul 24 06:07:01 2003 Subject: [DB-SIG] Re: UNICODE References: <3F1FA8EF.6000105@lemburg.com> Message-ID: We are using mxODBC in our project now, but we want to convert our project to ADO. "M.-A. Lemburg" wrote in message news:3F1FA8EF.6000105@lemburg.com...
> Reuven Abliyev wrote: > > I using ACCESS & ADO > > Database contains some data in Hebrew > > connect to DB - OK > > getting Recordset - OK > > BUT > > when I'm trying to load data I've got into COMBOBOX > > I'm getting this message: > > UnicodeError: ASCII encoding error: ordinal not in range(128) > > > > How can I solve this > > Not sure how you can solve this with ADO, but mxODBC can > be setup to read Unicode directly from the database: > > To have mxODBC connections run > in native Unicode mode you have to set them up using the > .stringformat attribute: > > connection = mx.ODBC.Windows.DriverConnect(...) > connection.stringformat = mx.ODBC.Windows.NATIVE_UNICODE_STRINGFORMAT > # or > connection.stringformat = mx.ODBC.Windows.MIXED_STRINGFORMAT > > For more information, have a look at the data types section > and the connection.stringformat documentation: > > http://www.egenix.com/files/python/mxODBC.html#Datatypes > > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Software directly from the Source (#1, Jul 24 2003) > >>> Python/Zope Products & Consulting ... http://www.egenix.com/ > >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > 2003-07-01: Released mxODBC.Zope.DA for FreeBSD 1.0.6 beta 1 > > > _______________________________________________ > DB-SIG maillist - DB-SIG@python.org > http://mail.python.org/mailman/listinfo/db-sig > From chris at cogdon.org Thu Jul 24 08:57:58 2003 From: chris at cogdon.org (Chris Cogdon) Date: Thu Jul 24 10:58:02 2003 Subject: [DB-SIG] Re: UNICODE In-Reply-To: Message-ID: <36CA634A-BDE7-11D7-91DC-000393B658A2@cogdon.org> On Thursday, Jul 24, 2003, at 04:05 US/Pacific, Reuven Abliyev wrote: > We are using mxODBC in our project now, > but we want to convert our project to ADO It sounds like there isn't a problem with the database, but rather a problem with the GUI that you're using to display the data.
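The reported error is easy to reproduce outside any GUI. A sketch in modern Python terms, where the exception is the UnicodeEncodeError subclass of UnicodeError; cp1255 (Windows Hebrew) is only a guess at the codec the Access data actually uses.

```python
# Hebrew text as it might come back from the driver:
text = '\u05e9\u05dc\u05d5\u05dd'   # "shalom"

# Forcing it through ASCII reproduces the reported failure:
try:
    text.encode('ascii')
except UnicodeEncodeError as exc:
    print('ordinal not in range(128)' in str(exc))   # True

# Encoding for the GUI's real charset instead round-trips cleanly:
assert text.encode('cp1255').decode('cp1255') == text
```

The fix is therefore to hand the widget text encoded in whatever charset it actually expects, never implicitly via the ASCII default.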
The GUI wants the information converted to ASCII, but since the data doesn't fall into the ASCII character set, you're receiving the error. What GUI are you using? Can you ask the question in a forum familiar with that GUI instead? -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From steve_weaver at peoplesoft.com Thu Jul 24 10:38:28 2003 From: steve_weaver at peoplesoft.com (steve_weaver@peoplesoft.com) Date: Thu Jul 24 12:38:41 2003 Subject: [DB-SIG] Out of Office Response: DB-SIG Digest, Vol 4, Issue 13 Message-ID: I will be out of the office starting Thursday, July 24, 2003 and will not return until Monday, August 4, 2003. I will respond to your message when I return. From pli2553 at comcast.net Thu Jul 24 20:12:30 2003 From: pli2553 at comcast.net (pli2553@comcast.net) Date: Thu Jul 24 15:13:26 2003 Subject: [DB-SIG] Questions on Free Python "ODBC Interface" Message-ID: Hi, I can find very little documentation on this. Would anyone please tell me how good this package is? Is it good enough for developing small commercial database applications? I tried the sample code listed on the web site. Database connections are established OK (the statement is: s = odbc.odbc('DSN/UID/PASSWORD'); the interpreter does not complain) but I am not successful in selecting anything from any of the tables I have. It always returns 0 (0 rows selected?), while there are actually records in those tables. Any idea what the problem is? Any advice is greatly appreciated. peng From anthony at computronix.com Fri Jul 25 16:10:37 2003 From: anthony at computronix.com (Anthony Tuininga) Date: Fri Jul 25 11:10:38 2003 Subject: [DB-SIG] cx_OracleDBATools 1.2 Message-ID: <1059145764.17139.22.camel@localhost.localdomain> What is it? cx_OracleDBATools is a set of Python scripts that handle Oracle DBA tasks in a cross platform manner.
These scripts are intended to work the same way on all platforms and hide the complexities involved in managing Oracle databases, especially on Windows. Binaries are provided for those who do not have a Python installation. Where do I get it? http://starship.python.net/crew/atuining http://www.computronix.com/utilities.shtml (it may be a few days before the second site is updated) What's new? This is the first release of this project to the public. It has been in use in our company for several months already. -- Anthony Tuininga anthony@computronix.com Computronix Distinctive Software. Real People. Suite 200, 10216 - 124 Street NW Edmonton, AB, Canada T5N 4A3 Phone: (780) 454-3700 Fax: (780) 454-3838 http://www.computronix.com From anthony at computronix.com Fri Jul 25 16:13:35 2003 From: anthony at computronix.com (Anthony Tuininga) Date: Fri Jul 25 11:13:43 2003 Subject: [DB-SIG] cx_OracleTools 7.1 Message-ID: <1059145971.17139.26.camel@localhost.localdomain> What is it? cx_OracleTools is a set of Python scripts that handle Oracle database development tasks in a cross platform manner and improve (in my opinion) on the tools that are available by default in an Oracle client installation. Those who use cx_Oracle (a Python interface driver for Oracle compatible with the DB API) may also be interested in this project, if only as examples. Binaries are provided for those who do not have a Python installation. Where do I get it? http://starship.python.net/crew/atuining http://www.computronix.com/utilities.shtml (it may be a few days before the second site is updated) What's new? This is the first release of this project to the public. It has been in heavy use in our company for several years already. -- Anthony Tuininga anthony@computronix.com Computronix Distinctive Software. Real People.
Suite 200, 10216 - 124 Street NW Edmonton, AB, Canada T5N 4A3 Phone: (780) 454-3700 Fax: (780) 454-3838 http://www.computronix.com From bill.allie at mug.org Sat Jul 26 23:35:35 2003 From: bill.allie at mug.org (Billy G. Allie) Date: Sat Jul 26 22:35:39 2003 Subject: [DB-SIG] pyPgSQL 2.4 released. Message-ID: <3F233A77.7020509@mug.org> Announce: pyPgSQL - Version 2.4 is released. =========================================================================== pyPgSQL v2.4 has been released. It is available at http://pypgsql.sourceforge.net. pyPgSQL is a package of two (2) modules that provide a Python DB-API 2.0 compliant interface to PostgreSQL databases. The first module, libpq, is written in C and exports the PostgreSQL C API to Python. The second module, PgSQL, provides the DB-API 2.0 compliant interface and support for various PostgreSQL data types, such as INT8, NUMERIC, MONEY, BOOL, ARRAYS, etc. This module is written in Python and works with PostgreSQL 7.0 or later and Python 2.0 or later. It was tested with PostgreSQL 7.0.3, 7.1.3, 7.2.2, 7.3, Python 2.0.1, 2.1.3 and 2.2.2. Note: It is highly recommended that you use PostgreSQL 7.2 or later and Python 2.1 or later.
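The DB-API 2.0 shape that PgSQL implements is the usual connect/cursor/fetch pattern, sketched here with the stdlib sqlite3 module standing in, since any compliant driver looks much the same (with pyPgSQL the import is `from pyPgSQL import PgSQL`, the connection comes from `PgSQL.connect(...)`, and the parameter-marker style differs by driver).

```python
import sqlite3  # stand-in driver; any DB-API 2.0 module has the same shape

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER, balance NUMERIC)")
# Parameter markers vary with the driver's declared paramstyle:
cur.execute("INSERT INTO accounts VALUES (?, ?)", (1, 42.5))
conn.commit()
cur.execute("SELECT id, balance FROM accounts")
rows = cur.fetchall()
print(rows)   # [(1, 42.5)]
conn.close()
```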
If you want to use PostgreSQL Large Objects under Python 2.2.x, you *must* use Python 2.2.2 or later, because of a bug in earlier 2.2 versions. Project homepages: pyPgSQL: http://pypgsql.sourceforge.net/ PostgreSQL: http://www.postgresql.org/ Python: http://www.python.org/ --------------------------------------------------------------------------- ChangeLog: =========================================================================== Changes since pyPgSQL Version 2.3 ================================= =-=-=-=-=-=-=-=-=-=-=-=-=- ** IMPORTANT NOTE ** =-=-=-=-=-=-=-=-=-=-=-=-=-= NOTE: There is a change to the Connection.binary() function that *could* cause existing code to break. Connection.binary() no longer commits the transaction used to create the large object. The application developer is now responsible for committing (or rolling back) the transaction. -=-=-=-=-=-=-=-=-=-=-=-=-= ** IMPORTANT NOTE ** -=-=-=-=-=-=-=-=-=-=-=-=-=- Changes to README ----------------- * Updates for 2.4. Changes to PgSQL.py ------------------- * Applied patch from Laurent Pinchart to allow _quote to correctly process objects that are sub-classed from String and Long types. * Change the name of the quoting function back to _quote. Variables named like __*__ should be restricted to system names. * PgTypes is now hashable. repr() of a PgType will now return the repr() of the underlying OID. * Connection.binary() will now fail if autocommit is enabled. * Connection.binary() will no longer commit the transaction after creating the large object. The application developer is now responsible for committing (or rolling back) the transaction [Bug #747525]. * Added PG_TIMETZ to the mix [Patch #708013]. * Pg_Money will now accept a string as a parameter. * PostgreSQL int2, int, int4 will now be cast into Python ints. Int8 will be cast into a Python long. Float4, float8, and money types will be cast into a Python float. * Correct problem with the PgNumeric.__radd__ method.
[Bug #694358] * Correct problem with conversion of negative integers (with a given scale and precision) to PgNumerics. [Bug #694358] * Work around a problem where the precision and scale of a query result can be different from the first result in the result set. [Bug #697221] * Change the code so that the display length in the cursor.description attribute is always None instead of '-1'. * Fixed another problem with interval <-> DateTimeDelta casting. * Corrected a problem that caused the close of a portal (i.e. PostgreSQL cursor) to fail. * Corrected a problem with interval <-> DateTimeDelta casting. [Bug #653044] * Corrected problem found by Adam Buraczewski in the __setupTransaction function. * Allow both 'e' and 'E' to signify an exponent in the PgNumeric constructor. * Correct some problems that were missed in yesterday's fixes (Thanks, Adam, for the help with the problems). Changes to libpqmodule.c ------------------------ * On win32, we usually statically link against libpq. Because of fortunate circumstances, a problem didn't show up until now: we need to call WSAStartup() to initialize the socket stuff from Windows *in our module* in order for the statically linked libpq to work. I just took the relevant DllMain function from the libpq sources and put it here. * Modified some comments to reflect reality. * Applied patch from Laurent Pinchart: In libPQquoteString, bytea are quoted using as much as 5 bytes per input byte (0x00 is quoted '\\000'), so allocating (slen * 4) + 3 is not enough for data that contain lots of 0x00 bytes. * Added PG_TIMETZ to the mix [Patch #708013]. Changes to pgboolean.c ---------------------- * Change the name of the quoting function back to _quote. __*__ type names should be restricted to system names. Changes to pgconnection.c ------------------------- * Applied patch by Laurent Pinchart to correct a problem with lo_import, lo_export, and lo_unlink.
* In case PQgetResult returns NULL, let libPQgetResult return a Python None, like the docstring says. This is necessary in order to be able to cancel queries, as after cancelling a query with PQrequestCancel, we need to read results until PQgetResult returns NULL. Changes to pglargeobject.c -------------------------- * Change the name of the quoting function back to _quote. __*__ type names should be restricted to system names. Changes to pgnotify.c --------------------- * Fixed a bug in the code. The code in question used to work, but doesn't anymore (possibly a change in libpq?). -- ___________________________________________________________________________ ____ | Billy G. Allie | Domain....: Bill.Allie@mug.org | /| | 7436 Hartwell | MSN.......: B_G_Allie@email.msn.com |-/-|----- | Dearborn, MI 48126| |/ |LLIE | (313) 582-1540 |