From ricardo.b at zmail.pt Sat Nov 4 00:13:11 2006
From: ricardo.b at zmail.pt (Ricardo Bugalho)
Date: Fri, 03 Nov 2006 23:13:11 +0000
Subject: [DB-SIG] How to pass parameter with LIKE
In-Reply-To:
References: <402a50980610242114g7814725dn7cf1a94b3095cfa6@mail.gmail.com>
Message-ID: <1162595591.15367.21.camel@ezquiel>

And avoid Python's string formatting. Use parameterized queries.

cursor.execute(
    "SELECT mrc_code FROM blocks WHERE mrc_code IN (%s, %s, %s, %s)",
    blkfld)

You can generate that string too, instead of hard-coding it:

sqlQuery = ("SELECT mrc_code FROM blocks WHERE mrc_code IN (%s)"
            % ",".join(["%s" for x in blkfld]))
cursor.execute(sqlQuery, blkfld)

On Wed, 2006-10-25 at 10:01 +0000, William Dode wrote:
> On 25-10-2006, Janice Sterling wrote:
> > select mrc_code from blocks where mrc_code in ('31103a1', '31103e1',
> > '31103a5', '31103e5')
>
> isn't it?

From aprotin at research.att.com Mon Nov 6 23:33:40 2006
From: aprotin at research.att.com (Art Protin)
Date: Mon, 06 Nov 2006 17:33:40 -0500
Subject: [DB-SIG] Spec. clarification re: .scroll
Message-ID: <454FB844.50200@research.att.com>

Dear folks,

In the optional .scroll() method for cursors, in mode == 'absolute', does the target position start at 1 or at 0? I assumed 0.

Thank you,
Arthur Protin

From mal at egenix.com Tue Nov 7 00:13:04 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Tue, 07 Nov 2006 00:13:04 +0100
Subject: [DB-SIG] Spec. clarification re: .scroll
In-Reply-To: <454FB844.50200@research.att.com>
References: <454FB844.50200@research.att.com>
Message-ID: <454FC180.6070402@egenix.com>

Art Protin wrote:
> Dear folks,
> In the optional .scroll() method for cursors, in mode == 'absolute',
> does the target position start at 1 or at 0? I assumed 0.

Right. Note that the row number indicates the index of the next row a .fetchxxx() call will fetch. cursor.scroll(0, 'absolute') will thus position the cursor just before the first row in the result set.
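The 0-based semantics described above can be illustrated with a toy, list-backed cursor (a sketch for illustration only, not any real driver's implementation): the position names the index of the next row a fetch will return, so after scroll(0, 'absolute') the next fetch yields row 0.

```python
# Toy cursor illustrating the DB-API .scroll() 0-based 'absolute' semantics:
# the position is the index of the *next* row a fetch will return, so
# scroll(0, 'absolute') sits just before the first row of the result set.

class ToyCursor:
    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = 0  # index of the next row to fetch

    def scroll(self, value, mode='relative'):
        if mode == 'relative':
            new_pos = self._pos + value
        elif mode == 'absolute':
            new_pos = value
        else:
            raise ValueError("mode must be 'relative' or 'absolute'")
        if not 0 <= new_pos <= len(self._rows):
            raise IndexError(new_pos)  # PEP 249 suggests IndexError here
        self._pos = new_pos

    def fetchone(self):
        if self._pos >= len(self._rows):
            return None
        row = self._rows[self._pos]
        self._pos += 1
        return row

cur = ToyCursor([('a',), ('b',), ('c',)])
cur.scroll(0, 'absolute')
print(cur.fetchone())  # ('a',) -- row index 0 is fetched next
cur.scroll(2, 'absolute')
print(cur.fetchone())  # ('c',)
```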
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 07 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From aprotin at research.att.com Fri Nov 10 17:35:09 2006 From: aprotin at research.att.com (Art Protin) Date: Fri, 10 Nov 2006 11:35:09 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 Message-ID: <4554AA3D.80708@research.att.com> Dear folks, On testing and documenting my implementation of the V2 DB API Spec, I am bothered by my (limited) understanding of what the spec says for the behavior of Cursor.description . Clearly when a query (or rather an SQL statement) does not produce a result, .description should be None. However, not producing a result is not the same as producing a table of zero rows by one or more columns. (This is like the distinction between the two comparisons "" == False and "" is False , the empty string has the same 'value' as False while remaining distinct.) I do not know that this will ever make a difference to my users but I am expected to be precise in my implementation. Is it the general understanding that .description will return None whenever the result set has no rows? 
Thank you all, Arthur Protin From carsten at uniqsys.com Fri Nov 10 18:35:02 2006 From: carsten at uniqsys.com (Carsten Haese) Date: Fri, 10 Nov 2006 12:35:02 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 In-Reply-To: <4554AA3D.80708@research.att.com> References: <4554AA3D.80708@research.att.com> Message-ID: <1163180102.3365.50.camel@dot.uniqsys.com> On Fri, 2006-11-10 at 11:35 -0500, Art Protin wrote: > Dear folks, > On testing and documenting my implementation of the V2 DB API Spec, > I am bothered > by my (limited) understanding of what the spec says for the behavior of > Cursor.description . > Clearly when a query (or rather an SQL statement) does not produce a > result, .description should be None. However, not producing a result is > not the same as producing a table of > zero rows by one or more columns. (This is like the distinction between > the two comparisons > "" == False > and > "" is False > , the empty string has the same 'value' as False while remaining distinct.) > I do not know that this will ever make a difference to my users > but I am expected > to be precise in my implementation. Is it the general understanding > that .description > will return None whenever the result set has no rows? I can't speak for the universe, but that's not my understanding. My interpretation of the spec is that "operations that do not return rows" means non-DQL (i.e. DDL/DML/DCL) operations. DQL operations should IMHO always be considered as returning rows, even if they happen to return an empty set of rows. Hence, .execute()ing a select statement should always set .description to non-None, even if the result set happens to be empty. Making a special case for empty selects is neither desirable nor, in general, possible. The special case is in general not possible because in many database engines and their DB-API implementation, actual row retrieval is deferred to the fetch methods, so .execute() wouldn't even know if any rows will be returned. 
(This is true for e.g. InformixDB, but any database engine that supports server-side cursors is likely to behave this way.)

The special case is not desirable because there is at least one use case for having .description set after executing an empty select, namely, the cheapest way to inspect the names and types of the columns in a table:

cur.execute("select * from sometable where 1=0")

Just my two cents,

Carsten Haese.
http://informixdb.sourceforge.net

From mal at egenix.com Fri Nov 10 19:03:46 2006
From: mal at egenix.com (M.-A. Lemburg)
Date: Fri, 10 Nov 2006 19:03:46 +0100
Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0
In-Reply-To: <4554AA3D.80708@research.att.com>
References: <4554AA3D.80708@research.att.com>
Message-ID: <4554BF02.2030306@egenix.com>

Art Protin wrote:
> Dear folks,
> On testing and documenting my implementation of the V2 DB API Spec,
> I am bothered by my (limited) understanding of what the spec says for
> the behavior of Cursor.description .
> Clearly when a query (or rather an SQL statement) does not produce a
> result, .description should be None.

It is common practice to do

cursor.execute('select * from mytable where 1=0')
print cursor.description

to access the schema of a table.

> However, not producing a result is not the same as producing a table of
> zero rows by one or more columns. (This is like the distinction between
> the two comparisons
> "" == False
> and
> "" is False
> , the empty string has the same 'value' as False while remaining distinct.)
> I do not know that this will ever make a difference to my users
> but I am expected to be precise in my implementation. Is it the general
> understanding that .description will return None whenever the result set
> has no rows?

cursor.description always refers to a result set. If a statement does not produce a result set, then .description should be None. However, a result set may have length 0 (as in the example above), so .rowcount == 0 is not a good indicator.
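The distinction argued above is easy to check with the standard library's sqlite3 module (used here merely as a convenient DB-API implementation): an empty SELECT still produces a result set, so .description is populated, while a statement that produces no result set leaves it as None.

```python
# .description after an empty SELECT vs. after DDL, using sqlite3
# (any conforming DB-API module should behave the same way).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE sometable (id INTEGER, name TEXT)")
print(cur.description)  # None -- DDL produces no result set

cur.execute("SELECT * FROM sometable WHERE 1=0")  # empty result set
print([col[0] for col in cur.description])  # ['id', 'name']
print(cur.fetchall())  # [] -- zero rows, yet .description is still set
```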
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 10 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From carsten at uniqsys.com Fri Nov 10 19:30:48 2006 From: carsten at uniqsys.com (Carsten Haese) Date: Fri, 10 Nov 2006 13:30:48 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 In-Reply-To: <4554AA3D.80708@research.att.com> References: <4554AA3D.80708@research.att.com> Message-ID: <1163183448.3365.85.camel@dot.uniqsys.com> On Fri, 2006-11-10 at 11:35 -0500, Art Protin wrote: > not producing a result is > not the same as producing a table of > zero rows by one or more columns. > (This is like the distinction between > the two comparisons > "" == False > and > "" is False > , the empty string has the same 'value' as False while remaining distinct.) By the way, neither of these comparisons is true. The empty string is neither equal nor identical to False. Coercing the empty string into a bool results in False, which equals, and is identical to, False. A better analogy for what you're trying to say is the difference between None and []. DML/DDL/DCL operations produce no result set, in the sense of None. A DQL operation that doesn't return any rows produces an empty result set, in the sense of []. -Carsten From carsten at uniqsys.com Fri Nov 10 19:37:28 2006 From: carsten at uniqsys.com (Carsten Haese) Date: Fri, 10 Nov 2006 13:37:28 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 In-Reply-To: <4554BF02.2030306@egenix.com> References: <4554AA3D.80708@research.att.com> <4554BF02.2030306@egenix.com> Message-ID: <1163183848.3365.93.camel@dot.uniqsys.com> On Fri, 2006-11-10 at 19:03 +0100, M.-A. 
Lemburg wrote: > It is common practice to do > > cursor.execute('select * from mytable where 1=0') > print cursor.description I'm glad I'm not the only one doing this :) > cursor.description always refers to a result set. If a statement > does not produce a result set, then .description should be None. I think the DB-API spec (or a future version thereof) should replace "operations that do not return rows" with "operations that do not produce a result set" to make this point clear. -Carsten From aprotin at research.att.com Fri Nov 10 21:37:33 2006 From: aprotin at research.att.com (Art Protin) Date: Fri, 10 Nov 2006 15:37:33 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 Message-ID: <4554E30D.40406@research.att.com> Dear folks, Thank you very much for sharing that interpretation, the arguments, and, even more importantly, that counter example (from Marc-Andre Lemburg and Carsten Haese). Based on that feedback I will revise my implementation to have .description give None only when there is no result set (due to either query not yet given, query failed, or query not DQL). Thank you all, Arthur Protin From aprotin at research.att.com Fri Nov 10 21:55:04 2006 From: aprotin at research.att.com (Art Protin) Date: Fri, 10 Nov 2006 15:55:04 -0500 Subject: [DB-SIG] Cursor.description when Cursor.rowcount == 0 In-Reply-To: <1163183848.3365.93.camel@dot.uniqsys.com> References: <4554AA3D.80708@research.att.com> <4554BF02.2030306@egenix.com> <1163183848.3365.93.camel@dot.uniqsys.com> Message-ID: <4554E728.40400@research.att.com> Dear folks, Carsten Haese wrote: >On Fri, 2006-11-10 at 19:03 +0100, M.-A. Lemburg wrote: > > >>It is common practice to do >> >>cursor.execute('select * from mytable where 1=0') >>print cursor.description >> >> > >I'm glad I'm not the only one doing this :) > > > >>cursor.description always refers to a result set. If a statement >>does not produce a result set, then .description should be None. 
>> >>
>
>I think the DB-API spec (or a future version thereof) should replace
>"operations that do not return rows" with "operations that do not
>produce a result set" to make this point clear.
>
>-Carsten

The DBMS that I am writing the interface for has other (more efficient) ways of producing this metadata. Don't the other DBMSs? Should the spec include a method (or more) to report on tables in the DB and on columns in a table? Is there a common extension that should be codified?

Thank you all,
Arthur Protin

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mail.python.org/pipermail/db-sig/attachments/20061110/e15d300e/attachment.html

From Chris.Clark at ingres.com Fri Nov 10 22:46:21 2006
From: Chris.Clark at ingres.com (Chris Clark)
Date: Fri, 10 Nov 2006 13:46:21 -0800
Subject: [DB-SIG] Metadata interface - was Re: Cursor.description when Cursor.rowcount == 0
In-Reply-To: <4554E728.40400@research.att.com>
References: <4554AA3D.80708@research.att.com> <4554BF02.2030306@egenix.com> <1163183848.3365.93.camel@dot.uniqsys.com> <4554E728.40400@research.att.com>
Message-ID: <4554F32D.8030803@ingres.com>

Art Protin wrote:
> Dear folks,
> Carsten Haese wrote:
>> On Fri, 2006-11-10 at 19:03 +0100, M.-A. Lemburg wrote:
>>
>>> It is common practice to do
>>>
>>> cursor.execute('select * from mytable where 1=0')
>>> print cursor.description
>>>
> The DBMS that I am writing the interface for has other (more
> efficient) ways of producing this metadata. Don't the other DBMSs?
> Should the spec include a method (or more) to report on tables in the
> DB and on columns in a table? Is there a common extension that should
> be codified?

I would vote +1 on method(s) for metadata access being added to the (next) DBI spec rather than a new extension. Practically every DBMS offers metadata query access, but they all differ in approach; some have API calls, most have metadata tables/views that can be queried using SQL.
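The "metadata tables/views" approach can be made concrete with SQLite, whose catalog is exposed through the sqlite_master table and the table_info PRAGMA; this is just one example of the per-DBMS mechanisms mentioned above (other engines use information_schema views or native API calls instead).

```python
# Reading table and column metadata from SQLite's catalog, as one
# concrete instance of the "metadata tables/views" approach.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE blocks (mrc_code TEXT, area REAL)")

# List user tables from the catalog table.
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cur.fetchall())  # [('blocks',)]

# Per-column metadata rows: (cid, name, type, notnull, dflt_value, pk).
cur.execute("PRAGMA table_info(blocks)")
for cid, name, coltype, notnull, default, pk in cur.fetchall():
    print(name, coltype)
```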
Adding methods to the spec would allow mapping to host API methods OR metadata tables depending on what the DBMS offers.

As for what the new API could be, it depends on what we expect the information to be used for. Do we want to know the Python type or the SQL type? If we want to know the SQL type, should the type information be native or portable - e.g. assume we are looking at a column in Oracle of type NUMBER(10,3); should that be reported as NUMBER(10,3) or should it be reported as DECIMAL(10,3)? If you just want to deal with Python stuff then Python type information should be fine; if you want to perform or create DDL statements from that information you really want the host type. What do people want/need, or do people want both types of information?

Some DBI drivers already implement metadata query functionality (some ORMs do too); do we want to base the spec on existing implementations? We probably want to avoid ripping off ^h^h^h^h^h^h^h borrowing the JDBC or ODBC metadata interfaces, as they are rather verbose and un-Pythonic, but they would be a good place to start for requirements (for example, the metadata information in JDBC is not restricted to just table information; e.g. supportsSubqueriesInIns()).

Chris

From blais at furius.ca Mon Nov 13 08:17:15 2006
From: blais at furius.ca (Martin Blais)
Date: Mon, 13 Nov 2006 02:17:15 -0500
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
Message-ID: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com>

> "Martin Blais" writes:
>
> > I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
> > method interface. You can find the details of my proposed changes
> > here:
> > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html
>
> The model of query execution you are assuming is nothing like that
> used by Oracle (in cx_Oracle in particular).
You can certainly build > up bits of a query string using Python string formatting - this is > nothing to do with the DB API, but on the other hand, it is also > *extremely* uncommon in my experience. > > However, you assume that the "second stage", of adding variable > bindings supplied in the cursor.execute call, is also a string > formatting exercise (just with automatic escaping). This is most > certainly not the case in Oracle - the query is sent to the DB engine > as given, with variable placeholders intact, and the variable bindings > are sent independently. > > This is a crucial optimisation for Oracle - with the code > > c.execute("select * from emp where id = :id", 100) > c.execute("select * from emp where id = :id", 200) > > the DB engine only sees a SINGLE query, run twice with different > bindings. The query plan can be cached and optimised on this basis. If > the ID was interpolated by Python, Oracle would see 2 different > queries, and would need to re-parse and re-optimise for each. > > So, your proposal for unifying the 2 steps you see does not make sense > in the context of (cx_)Oracle - the steps are utterly different. I think you are mistaken (either that or I do not understand what you mean, or perhaps you haven't read the proposed code). My proposal does not modify the way the escaped parameters are to be sent to the client interface. In fact, the test implementation merely rewrites the query to take advantage of the Pythonic interface, with the exception that it may create :parameters if needed, for example, if you pass in a list or a dict. > Sorry for going on at such length, but I get twitchy every time I see > people assume that parameter binding is simply a client-side string > interpolation exercise. That approach is the reason that huge numbers > of poorly written Visual Basic programs exist, which destroy > performance on Oracle databases. 
(It's also the cause of many SQL > injection attacks, but I don't want to make too much of that, as I'd > be getting perilously close to spreading FUD without providing more > detail than anyone would be able to stand :-)) I'd hate to see Python > end up falling into the same trap. I did not assume that at all. The proposed test implementation should work fine with cx_Oracle, i.e. will maintain :id in the string, only it will provide a more flexible interface, for example, you could pass a list and it would create the necessary :parameters to be sent to cx_Oracle. From carsten at uniqsys.com Mon Nov 13 14:51:09 2006 From: carsten at uniqsys.com (Carsten Haese) Date: Mon, 13 Nov 2006 08:51:09 -0500 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. In-Reply-To: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com> References: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com> Message-ID: <1163425869.3434.20.camel@dot.uniqsys.com> On Mon, 2006-11-13 at 02:17 -0500, Martin Blais wrote: [snipped attribution to Paul Moore restored] > Paul Moore wrote: > > "Martin Blais" writes: > > > > > I want to propose a few improvements on the DBAPI 2.0 Cursor.execute() > > > method interface. You can find the details of my proposed changes > > > here: > > > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html > > > > [...] > > However, you assume that the "second stage", of adding variable > > bindings supplied in the cursor.execute call, is also a string > > formatting exercise (just with automatic escaping). This is most > > certainly not the case in Oracle - the query is sent to the DB engine > > as given, with variable placeholders intact, and the variable bindings > > are sent independently. > > > I think you are mistaken (either that or I do not understand what you > mean, or perhaps > you haven't read the proposed code). 
My proposal does not modify the > way the escaped > parameters are to be sent to the client interface. In fact, the test > implementation merely rewrites the query to take advantage of the Pythonic > interface, with the exception that it may create :parameters if needed, for > example, if you pass in a list or a dict. > [...] And yet you are still talking about escaped parameters, both in your emails and your proposal. Please try to grasp the concept that in many (most?) database engines, there is no escaping of any kind going on. In real database APIs, there are separate API calls under the hood, one for preparing a statement, one for binding parameters to the prepared statement, and finally one for executing the query. > [...] > The proposed test implementation should work fine > with cx_Oracle, i.e. will maintain :id in the string, only it will provide a > more flexible interface, for example, you could pass a list and it would create > the necessary :parameters to be sent to cx_Oracle. Your proposal appears very much geared towards executing queries in circumstances where key information about the query is unknown at design time and only known at run-time. Such circumstances do exist, for example when you're implementing an ORM. However, this use case is much rarer compared to most real-life usage where you do know exactly at design time which table and which columns you're working with. Feel free to implement your proposed extension as a wrapper around DB-API. You're probably not the only one who might find it useful. I just don't think it's useful enough to be part of the standard DB-API. -Carsten From aprotin at research.att.com Mon Nov 13 17:28:57 2006 From: aprotin at research.att.com (Art Protin) Date: Mon, 13 Nov 2006 11:28:57 -0500 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. 
In-Reply-To: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com>
References: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com>
Message-ID: <45589D49.5020905@research.att.com>

Dear folks,

Martin Blais wrote: [snipped attribution to Paul Moore restored]
>> Paul Moore wrote:
>
>>"Martin Blais" writes:
>>
>>>I want to propose a few improvements on the DBAPI 2.0 Cursor.execute()
>>>method interface. You can find the details of my proposed changes
>>>here:
>>>http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html
>>>

I just looked at Martin's suggestions and I find some theoretical problems. I had been left with the impression that the "operation" given as an argument to the .execute() method was SQL, but on rereading the specification (PEP 249) I do not find that made explicit. If it were explicitly required to be SQL, then I would turn to a text like "A Guide to THE SQL STANDARD", Fourth Edition, by C.J. Date with Hugh Darwen, and quote from section 20.3, "STATEMENT PREPARATION AND EXECUTION", in the subsection on "Placeholders":

    Placeholders (i.e., question marks) are permitted only where literals
    are permitted. Note in particular, therefore, that they cannot be used
    to represent names (of tables, columns, etc.). ...

The DBMS I am coding for requires all table names and column names to be explicit in the query before it can begin to process it, whereas the "proper" placeholders in a query are never really filled in; the query picks up the values to use at execution time. This would require distinctly different treatment of the two placeholders in a query like:

    select Name from ? where City = ?

(and I dread having to parse the SQL in the interface to distinguish between these two).

>>The model of query execution you are assuming is nothing like that
>>used by Oracle (in cx_Oracle in particular).
You can certainly build >>up bits of a query string using Python string formatting - this is >>nothing to do with the DB API, but on the other hand, it is also >>*extremely* uncommon in my experience. >> >>However, you assume that the "second stage", of adding variable >>bindings supplied in the cursor.execute call, is also a string >>formatting exercise (just with automatic escaping). This is most >>certainly not the case in Oracle - the query is sent to the DB engine >>as given, with variable placeholders intact, and the variable bindings >>are sent independently. >> >>This is a crucial optimisation for Oracle - with the code >> >> c.execute("select * from emp where id = :id", 100) >> c.execute("select * from emp where id = :id", 200) >> >>the DB engine only sees a SINGLE query, run twice with different >>bindings. The query plan can be cached and optimised on this basis. If >>the ID was interpolated by Python, Oracle would see 2 different >>queries, and would need to re-parse and re-optimise for each. >> >> >> My interface works similar to this one. >>So, your proposal for unifying the 2 steps you see does not make sense >>in the context of (cx_)Oracle - the steps are utterly different. >> >> > >I think you are mistaken (either that or I do not understand what you >mean, or perhaps >you haven't read the proposed code). My proposal does not modify the >way the escaped >parameters are to be sent to the client interface. In fact, the test >implementation merely rewrites the query to take advantage of the Pythonic >interface, with the exception that it may create :parameters if needed, for >example, if you pass in a list or a dict. > > > > > >>Sorry for going on at such length, but I get twitchy every time I see >>people assume that parameter binding is simply a client-side string >>interpolation exercise. That approach is the reason that huge numbers >>of poorly written Visual Basic programs exist, which destroy >>performance on Oracle databases. 
(It's also the cause of many SQL >>injection attacks, but I don't want to make too much of that, as I'd >>be getting perilously close to spreading FUD without providing more >>detail than anyone would be able to stand :-)) I'd hate to see Python >>end up falling into the same trap. >> >> > >I did not assume that at all. The proposed test implementation should work fine >with cx_Oracle, i.e. will maintain :id in the string, only it will provide a >more flexible interface, for example, you could pass a list and it would create >the necessary :parameters to be sent to cx_Oracle. >_______________________________________________ >DB-SIG maillist - DB-SIG at python.org >http://mail.python.org/mailman/listinfo/db-sig > > Thank you all, Arthur Protin -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20061113/f851042c/attachment.htm From phd at phd.pp.ru Mon Nov 13 18:45:29 2006 From: phd at phd.pp.ru (Oleg Broytmann) Date: Mon, 13 Nov 2006 20:45:29 +0300 Subject: [DB-SIG] SQLObject 0.7.2 beta 1 Message-ID: <20061113174529.GC29344@phd.pp.ru> Hello! I'm pleased to announce the 0.7.2b1 release of SQLObject. What is SQLObject ================= SQLObject is an object-relational mapper. Your database tables are described as classes, and rows are instances of those classes. SQLObject is meant to be easy to use and quick to get started with. SQLObject supports a number of backends: MySQL, PostgreSQL, SQLite, and Firebird. It also has newly added support for Sybase, MSSQL and MaxDB (also known as SAPDB). 
Where is SQLObject ================== Site: http://sqlobject.org Mailing list: https://lists.sourceforge.net/mailman/listinfo/sqlobject-discuss Archives: http://news.gmane.org/gmane.comp.python.sqlobject Download: http://cheeseshop.python.org/pypi/SQLObject/0.7.2b1 News and changes: http://sqlobject.org/docs/News.html What's New ========== Features & Interface -------------------- * sqlbuilder.Select now supports JOINs exactly like SQLObject.select. * destroySelf() removes the object from related joins. Bug Fixes --------- * Fixed a number of unicode-related problems with newer MySQLdb. * If the DB API driver returns timedelta instead of time (MySQLdb does this) it is converted to time; but if the timedelta has days an exception is raised. * Fixed a number of bugs in InheritableSQLObject related to foreign keys. * Fixed a bug in InheritableSQLObject related to the order of tableRegistry dictionary. * A bug fix that allows to use SQLObject with DateTime from Zope. For a more complete list, please see the news: http://sqlobject.org/docs/News.html Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From ricardo.b at zmail.pt Mon Nov 13 19:46:31 2006 From: ricardo.b at zmail.pt (Ricardo Bugalho) Date: Mon, 13 Nov 2006 18:46:31 +0000 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method. In-Reply-To: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com> References: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com> Message-ID: <1163443591.9561.61.camel@ezquiel> On Mon, 2006-11-13 at 02:17 -0500, Martin Blais wrote: > > "Martin Blais" writes: > > > > > I want to propose a few improvements on the DBAPI 2.0 Cursor.execute() > > > method interface. 
You can find the details of my proposed changes
> > > here:
> > > http://furius.ca/pubcode/pub/conf/common/lib/python/dbapiext.html

1. The process of building up a query happens in two steps

You propose

cursor.execute('''
    SELECT name, address FROM %s WHERE id = %S
''', (table_name, the_id,))

as a better alternative to

cursor.execute('''
    SELECT name, address FROM %s WHERE id = %%s
''' % table_name, (the_id,))

I do not like your proposal, for two reasons. First, dynamic query construction can assume many forms, many more complex than this case. I think it should not be added to the functions of .execute(). Second, your proposal is error prone: %s vs %S. I think this one is better:

cursor.execute('''
    SELECT name, address FROM %s WHERE id = :1
''' % table_name, (the_id,))

Let's just remove or deprecate the format and pyformat parameter styles from the DB-API. They make dynamic query construction more error prone.

2. The optional parameters to execute() are not Pythonic enough

Yeah, I agree with this one.

3. Having to join lists by hand is annoying and always performed the same way

4. Dictionaries can be rendered as name=value pairs

I still haven't decided if I like these two.

From jekabs.andrusaitis at tietoenator.com Mon Nov 13 18:42:41 2006
From: jekabs.andrusaitis at tietoenator.com (Jekabs Andrushaitis)
Date: Mon, 13 Nov 2006 19:42:41 +0200
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: <1163425869.3434.20.camel@dot.uniqsys.com>
Message-ID: <000601c7074b$1e66be60$2a050a0a@jandrusaitis>

Hello,

I tend to agree that allowing escaping of non-literals is a really bad idea. This is certainly not how most underlying database APIs work - they provide their own methods for statement preparation and variable binding; in fact they use different syntax for variable binding in SQL (hence different DBAPI modules use different ways to pass the arguments - tuple, dictionary etc).
And they all follow the rule that only literals can be bound, not table, procedure, package, view or whatever names. Of course there are some dumb database backends which do not do variable substitution themselves... but who cares about those anyway :)

If dynamic statements are a real issue for you, it is possible to write a wrapper which takes care of this, but I don't think that is really worth the trouble, as you can simply write something like:

Cursor.execute(
    "SELECT something FROM %(tablename)s WHERE somethingelse=%%(whoami)s"
        % {"tablename": "sometable"},
    {"whoami": "cookiemonster"})

On the other hand, the difference between how different DBAPI modules handle bind variables is indeed quite annoying; it prevents abstraction of the query code from the underlying database. But the only solution which comes to my mind would be adding "Pythonized parameter style" support for each database module, which would convert the Python style to whatever the underlying database actually works with; for example, for Oracle it would do:

"SELECT something FROM somewhere WHERE somethingelse=%(somethingelse)s", {"somethingelse": "huh"}
---- super duper argument mangler ---->
"SELECT something FROM somewhere WHERE somethingelse=:1", ["huh"]

However, actually implementing this would be no simple matter - bind variable processing goes much deeper than simple string mangling, I am afraid; an SQL lexical parser would be required for this sort of translation...

Jekabs

From carsten at uniqsys.com Mon Nov 13 21:18:55 2006
From: carsten at uniqsys.com (Carsten Haese)
Date: Mon, 13 Nov 2006 15:18:55 -0500
Subject: [DB-SIG] Proposed improvements to DBAPI 2.0 Cursor.execute() method.
In-Reply-To: <45589D49.5020905@research.att.com>
References: <8393fff0611122317q1497d8c7k96575913a0468ec9@mail.gmail.com> <45589D49.5020905@research.att.com>
Message-ID: <1163449135.3434.96.camel@dot.uniqsys.com>

On Mon, 2006-11-13 at 11:28 -0500, Art Protin wrote:
> I just looked at Martin's suggestions and I find some theoretical
> problems.
I had been left with the impression that the "operation" > given as an argument to the .execute() method was SQL but on rereading > the specification (PEP 249) I do not find that made explicit. If it > were explicitly required to be SQL, then I would turn to a text like > "A Guide to THE SQL STANDARD", Fourth Edition, by C.J.Date with Hugh > Darwen and quote from section 20.3, "STATEMENT PREPARATION AND > EXECUTION", in the subsection on "Placeholders": > > Placeholders (i.e., question marks) are permitted only where > literals are permitted. > Note in particular, therefore, that they cannot be used to represent > names (of tables, > columns, etc.). ... > > The DBMS I am coding for requires all table names and column names to be > explicit in the query before it can begin to process it, whereas the > "proper" placeholders in the query are never really filled in; the query > picks up the values to use at execution time. This would require > distinctly different treatment of the two placeholders in a query like: > > select Name from ? where City = ? > > (and I dread having to parse the SQL in the interface to distinguish > between these two). You're correct, the description of the execute method does not specify that the operation be an SQL query. However, other sections of the PEP do explicitly mention SQL, so it's clear that that's the intent. You don't have to worry about catering to people wanting to use parameter passing to fill in table names. The fact that this works in some DB-API implementations is an unfortunate side-effect of resorting to string formatting due to the lack of a parameter passing API in the underlying database. This behavior is by no means required, and IMHO not desired. Hope this helps, Carsten From ricardo.b at zmail.pt Mon Nov 13 22:14:35 2006 From: ricardo.b at zmail.pt (Ricardo Bugalho) Date: Mon, 13 Nov 2006 21:14:35 +0000 Subject: [DB-SIG] Proposed improvements to DBAPI 2.0Cursor.execute() method.
In-Reply-To: <000601c7074b$1e66be60$2a050a0a@jandrusaitis> References: <000601c7074b$1e66be60$2a050a0a@jandrusaitis> Message-ID: <1163452475.9561.97.camel@ezquiel> On Mon, 2006-11-13 at 19:42 +0200, Jekabs Andrushaitis wrote: > On other hand the difference between how different DBAPI modules > handle bind > variables is indeed quite annoying, it prevents abstraction of the > query > code from underlaying database, but only solution which comes to my > mind I fully support the idea of having a single parameter style (or a single set of them) that must be supported by *all* DB-API compliant modules, instead of the current state of affairs. > would be adding "Pythonized parameter style" support for each database > module which would convert Python style to whatever underlaying > database > actually works with, for example for Oracle it would do: > > "SELECT something FROM somewhere WHERE > somethingelse=%(somethingelse)s",{"somethingelse":"huh"} > ---- super duper argument mangler ----> > "SELECT something FROM somewhere WHERE somethingelse=:1",["huh"] However, as I've said in another message, I think it's best to avoid the format and pyformat parameter styles in the DB-API, because they just keep getting confused with string manipulation. Just about any other format is better than those two. My favorite option is to support both the numeric (:1) and named (:name) styles. > > However actually implementing this would be no simple matter - bind > variable > processing goes much deeper than simple string mangling I am afraid, > SQL > lexical parser would be required for this sort of translation... It shouldn't be any harder than implementing parameter binding for backends which don't actually support parameterized queries. Of course, this is for a given level of robustness, and the modules for such backends aren't as robust as some of us would like... But overall, I think it could be done without much work.
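A naive version of the "super duper argument mangler" quoted above is easy to sketch for the named (:name) style. This is an illustrative toy, not any module's actual implementation, and it deliberately ignores the complication of placeholders occurring inside quoted literals:

```python
import re

def named_to_qmark(sql, params):
    """Toy converter: rewrite :name placeholders as ? and build the
    positional parameter list in the order the names appear.
    It does NOT skip string literals or quoted identifiers."""
    args = []

    def repl(match):
        args.append(params[match.group(1)])
        return "?"

    converted = re.sub(r":([A-Za-z_]\w*)", repl, sql)
    return converted, args

sql, args = named_to_qmark(
    "SELECT something FROM somewhere WHERE somethingelse = :x",
    {"x": "huh"})
print(sql)   # SELECT something FROM somewhere WHERE somethingelse = ?
print(args)  # ['huh']
```

A real translator would also have to tokenize strings and comments, which is exactly the "lexical parser" objection raised above.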
IMHO, all .execute() needs to do before replacing parameter markers is to check that it's not replacing the contents of a literal or a quoted identifier. That is, it must not convert "SELECT * FROM my_table WHERE real_name = 'foo:bar' and user_name = :username" into "SELECT * FROM my_table WHERE real_name = 'foo?' and user_name = ?" In case the user writes something he shouldn't, like "SELECT * FROM :table_name WHERE user_name = :user_name" it's ok if .execute() silently converts it into "SELECT * FROM ? WHERE user_name = ?" because when it's passed to the backend, the backend will reject it. Or am I missing any case in SQL syntax? From aprotin at research.att.com Thu Nov 16 19:18:09 2006 From: aprotin at research.att.com (Art Protin) Date: Thu, 16 Nov 2006 13:18:09 -0500 Subject: [DB-SIG] SQL DDL Message-ID: <455CAB61.2090301@research.att.com> Dear folks, In earlier messages, I noted that the Specification (v2.0) does not adequately express that the "operation" argument is SQL (although the coverage of the API in "PYTHON in a Nutshell" is hardly ambiguous at all). Now I am wondering about the distinction between SQL DQL (Data Query Language) and SQL DDL (Data Definition Language). Is there intended to be any support for DDL in this API? (I am not sure if or how I could put that support into the implementation I am working on, but I do not need to think about it much if none is intended.) Thank you all, Arthur Protin From mal at egenix.com Thu Nov 16 19:31:05 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 16 Nov 2006 19:31:05 +0100 Subject: [DB-SIG] SQL DDL In-Reply-To: <455CAB61.2090301@research.att.com> References: <455CAB61.2090301@research.att.com> Message-ID: <455CAE69.4000908@egenix.com> Art Protin wrote: > Dear folks, > In earlier messages, I noted that the Specification (v2.0) does > not adequately express that the "operation" argument is SQL (although > the coverage of the API in "PYTHON in a Nutshell" is hardly ambiguous at > all).
Now I am wondering about the distinction between SQL DQL (Data > Query Language) and SQL DDL (Data Definition Language). Is there > intended to be any support for DDL in this API? (I am not sure if or > how I could put that support into the implementation I am working on, > but I do not need to think about it much if none is intended.) The API could potentially also work with non-SQL ways of defining queries or actions. In practice it is almost always used with some form of SQL. There is no distinction being made based on the type of SQL, i.e. you can use any variant or subset of the SQL language supported by the database backend in .execute() calls. That said, I'm not sure what you mean by "support for DDL". Regards, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 16 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From carsten at uniqsys.com Thu Nov 16 19:43:32 2006 From: carsten at uniqsys.com (Carsten Haese) Date: Thu, 16 Nov 2006 13:43:32 -0500 Subject: [DB-SIG] SQL DDL In-Reply-To: <455CAB61.2090301@research.att.com> References: <455CAB61.2090301@research.att.com> Message-ID: <1163702612.3380.38.camel@dot.uniqsys.com> On Thu, 2006-11-16 at 13:18 -0500, Art Protin wrote: > Dear folks, > In earlier messages, I noted that the Specification (v2.0) does > not adequately express that the "operation" argument is SQL (although > the coverage of the API in "PYTHON in a Nutshell" is hardly ambiguous at > all). Now I am wondering about the distinction between SQL DQL (Data > Query Language) and SQL DDL (Data Definition Language). Is there > intended to be any support for DDL in this API?
(I am not sure of if or > how I could put that support into the implementation I am working on, > but I do not need to think about it much if none is intended.) I'm afraid you'll have to think about it, unless the database engine you're interfacing with doesn't support any kind of DDL, which I find unlikely. The (unwritten) intent of the specification is that an implementation should support any operation that is supported by the underlying DBMS. For SQL engines that means DQL, DML, DDL, and DCL. Maybe it would help if you explained why you think you need to make an exception for DDL. -Carsten From ianb at colorstudy.com Thu Nov 16 19:59:28 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 16 Nov 2006 12:59:28 -0600 Subject: [DB-SIG] Extending DB-API Message-ID: <455CB510.2010802@colorstudy.com> I probably won't have the time to really follow this through, but since there has been a little discussion of this stuff lately I'd like to throw out an idea for where I think Python database standards should go. This mostly builds on the dbapi rather than extending it directly (I think). The major things I think we can standardize: * There's no common way to configure databases. I'd like to see a single URI syntax that everyone can use. This should be modestly extensible via query string parameters. * Given a database connection there should be a well-documented strategy for discovering the type of the connection (including the server's version) and loading up code specific to that module and some interface. This allows compatibility code to be developed separately from the database connection modules, and separately from consumers like ORMs. This would also give a clear place to build database introspection tools, without tying that work to the release schedules or development process of the current database modules. Realistically those existing modules are developed fairly slowly and conservatively, and require skill in things like writing C extensions. 
Compatibility layers have none of these qualities. * Unified exceptions. This can be done currently with monkeypatching the inheritance hierarchy, but they aren't unified to anything in particular that you can rely on. * Figure out a strategy for parameter styles. Maybe this just means a reasonable way to handle SQL in a more abstract way than as strings with markers (that way no update is required to the dbapi). Or maybe something more sophisticated. Or we could even be lazy and use %s/pyformat, which is the only marker that can be easily translated to other markers. * Maybe some work on database connection pooling strategies. Maybe this can just be library code. I think we need a little more data on the threading restrictions than dbapi gives (sqlite in particular doesn't fit into any of the current levels). From there I see some other useful database standards: * A standard transaction container. The Zope transaction container is a reasonable and straight-forward implementation (though it needs to be better extracted from Zope). * A standard way to retrieve database configuration and connections. This way database library/framework code can be written in a reasonably abstract way without worrying about deployment concerns. 
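The URI bullet above can be made concrete with a small sketch. Everything here is hypothetical — the scheme-to-driver mapping and the keyword names are not part of any spec, just one possible shape for such a convention:

```python
from urllib.parse import urlparse, parse_qs

def parse_db_uri(uri):
    """Split a hypothetical database URI into connect() keywords.
    The scheme names the driver; query-string parameters carry
    driver-specific extras, giving the 'modest extensibility'
    described above."""
    parts = urlparse(uri)
    kwargs = {
        "driver": parts.scheme,
        "host": parts.hostname,
        "port": parts.port,
        "user": parts.username,
        "password": parts.password,
        "database": parts.path.lstrip("/"),
    }
    # Extra parameters ride along in the query string.
    for key, values in parse_qs(parts.query).items():
        kwargs[key] = values[-1]
    return kwargs

print(parse_db_uri("postgres://bob:secret@db.example.com:5432/mydb?sslmode=require"))
```

A registry mapping the scheme ("postgres" here) to a connection factory would be the second half of the proposal.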
-- Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org From aprotin at research.att.com Thu Nov 16 20:20:15 2006 From: aprotin at research.att.com (Art Protin) Date: Thu, 16 Nov 2006 14:20:15 -0500 Subject: [DB-SIG] SQL DDL In-Reply-To: <1163702612.3380.38.camel@dot.uniqsys.com> References: <455CAB61.2090301@research.att.com> <1163702612.3380.38.camel@dot.uniqsys.com> Message-ID: <455CB9EF.6030705@research.att.com> Dear folks, Carsten Haese wrote: >On Thu, 2006-11-16 at 13:18 -0500, Art Protin wrote: > > >>Dear folks, >> In earlier messages, I noted that the Specification (v2.0) does >>not adequately express that the "operation" argument is SQL (although >>the coverage of the API in "PYTHON in a Nutshell" is hardly ambiguous at >>all). Now I am wondering about the distinction between SQL DQL (Data >>Query Language) and SQL DDL (Data Definition Language). Is there >>intended to be any support for DDL in this API? (I am not sure if or >>how I could put that support into the implementation I am working on, >>but I do not need to think about it much if none is intended.) >> >> > >I'm afraid you'll have to think about it, unless the database engine >you're interfacing with doesn't support any kind of DDL, which I find >unlikely. The (unwritten) intent of the specification is that an >implementation should support any operation that is supported by the >underlying DBMS. For SQL engines that means DQL, DML, DDL, and DCL. > > Good. This is more than enough leeway. >Maybe it would help if you explained why you think you need to make an >exception for DDL. > > > I am implementing support for a DBMS that is not an SQL engine but does have a translator of SQL into native queries. The DQL & DML are handled together through one interface while the DDL is handled through a different one (and I have not found support for DCL).
So, as I understand it now, I need to make a "good faith effort", and as long as my users are happy I am not lying when I claim the interface largely complies with the spec. Or stated slightly differently, the failure to support DDL (if it proves too difficult) will simply need to be listed in my documentation as another point of non-conformance. Whenever the specification is revised, I would recommend that this intent be given some visibility. >-Carsten > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.python.org/pipermail/db-sig/attachments/20061116/621d3aaa/attachment.html From jannee at brikks.com Thu Nov 16 22:57:16 2006 From: jannee at brikks.com (Jan Ekström) Date: Thu, 16 Nov 2006 22:57:16 +0100 Subject: [DB-SIG] (no subject) Message-ID: <001801c709ca$2e25c720$6384e953@jedkb9a8f76ce8> From mal at egenix.com Fri Nov 17 00:06:56 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 17 Nov 2006 00:06:56 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <455CB510.2010802@colorstudy.com> References: <455CB510.2010802@colorstudy.com> Message-ID: <455CEF10.3020801@egenix.com> Ian Bicking wrote: > I probably won't have the time to really follow this through, but since > there has been a little discussion of this stuff lately I'd like to > throw out an idea for where I think Python database standards should go. > This mostly builds on the dbapi rather than extending it directly (I > think). I think that we should add a few of the standard extensions to the main spec and possibly add some useful extra attributes that make life easier for people writing wrappers, ORMs, etc. on top of the DB-API module APIs. Some comments: > The major things I think we can standardize: > > * There's no common way to configure databases. I'd like to see a > single URI syntax that everyone can use. This should be modestly > extensible via query string parameters. There has been some discussion about this.
I believe that this is something a wrapper would have to implement since part of the URI is mostly going to be the name of the database module to use. Also, I'm not sure whether a URI syntax would be of any benefit, since the parameters required by various backends are rather diverse. ODBC uses something called a data source definition string, which is a simple string format of "key=value;" pairs. Only a few of the keys are standardized. Many depend on the backend being used and often vary between ODBC drivers. I doubt that you could convert a URI into such a string in a sensible way. In the end, you'd probably have to use the query part of the URI to pass in all the parameters that are non-standard in the URI syntax. This wouldn't really help anyone, since it's just another way of writing things, but doesn't make things easier for the user. This is different at the application level, since an application will typically only support a handful of backends. In this case a simple URI would suffice, since all the other details would be added to the connection parameters at a lower level of the application, e.g. in the database abstraction layer. > * Given a database connection there should be a well-documented strategy > for discovering the type of the connection (including the server's > version) and loading up code specific to that module and some interface. In mxODBC we have these connection attributes: .dbms_name String identifying the database manager system. .dbms_version String identifying the database manager system version. .driver_name String identifying the ODBC driver. .driver_version String identifying the ODBC driver version. They have proven to be quite useful, esp. when it comes to coding against specific backends. > This allows compatibility code to be developed separately from the > database connection modules, and separately from consumers like ORMs.
> This would also give a clear place to build database introspection > tools, without tying that work to the release schedules or development > process of the current database modules. Realistically those existing > modules are developed fairly slowly and conservatively, and require > skill in things like writing C extensions. Compatibility layers have > none of these qualities. > > * Unified exceptions. This can be done currently with monkeypatching > the inheritance hierarchy, but they aren't unified to anything in > particular that you can rely on. We already have a standard way for this: all exceptions should be exposed on the connection object as attributes. This makes writing polymorphic code easy. > * Figure out a strategy for parameter styles. Maybe this just means a > reasonable way to handle SQL in a more abstract way than as strings with > markers (that way no update is required to the dbapi). Or maybe > something more sophisticated. Or we could even be lazy and use > %s/pyformat, which is the only marker that can be easily translated to > other markers. See past discussions: pyformat is probably the worst of all parameter styles. Note that any change in this respect will break *a lot* of existing and working code. Perhaps we ought to make the parameter style a connection parameter that's writeable and then agree on a very limited set of required styles - perhaps just the qmark and the numeric style since these are easy to implement and can be mapped to all other styles. > * Maybe some work on database connection pooling strategies. Maybe this > can just be library code. I think we need a little more data on the > threading restrictions than dbapi gives (sqlite in particular doesn't > fit into any of the current levels). Connection pooling is something which higher level interfaces have to manage and provide, since database drivers can't possibly know which connections to pool and in what way to suit the application needs. 
Note that simply placing connections into a dictionary is not good enough, since each connection keeps state and thus may not be reusable. > From there I see some other useful database standards: > > * A standard transaction container. The Zope transaction container is a > reasonable and straight-forward implementation (though it needs to be > better extracted from Zope). I don't understand this one. The industry standards for transaction management are * X/Open DTP (XA) * MS DTC See http://docs.openlinksw.com/mt/xamt.html for a good overview of how XA works. The MS DTC is described here: http://msdn2.microsoft.com/en-US/library/ms191440.aspx Both interfaces are C level interfaces. > * A standard way to retrieve database configuration and connections. > This way database library/framework code can be written in a reasonably > abstract way without worrying about deployment concerns. Not sure what you mean here. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 16 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From ianb at colorstudy.com Fri Nov 17 00:36:26 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Thu, 16 Nov 2006 17:36:26 -0600 Subject: [DB-SIG] Extending DB-API In-Reply-To: <455CEF10.3020801@egenix.com> References: <455CB510.2010802@colorstudy.com> <455CEF10.3020801@egenix.com> Message-ID: <455CF5FA.5050900@colorstudy.com> M.-A. Lemburg wrote: >> The major things I think we can standardize: >> >> * There's no common way to configure databases. I'd like to see a >> single URI syntax that everyone can use. This should be modestly >> extensible via query string parameters. 
> > There has been some discussion about this. I believe that this > is something a wrapper would have to implement since part of the > URI is mostly going to be the name of the database module to use. > > Also, I'm not sure whether a URI syntax would be of any benefit, > since the parameters required by various backends are rather > diverse. > > ODBC uses something called a data source definition string > which is a simple string format of "key=value;" pairs. > > Only a few of the keys are standardized. Many depend on the > backend being used and often vary between ODBC drivers. > > I doubt that you could convert a URI into such a string in > a sensible way. In the end, you'd probably have to use > the query part of the URI to pass in all the parameters > that are non-standard in the URI syntax. > > This wouldn't really help anyone, since it's just another > way of writing things, but doesn't make things easier for > the user. Having a consistent way to configure databases through a single string would be very helpful. I'm not particularly set on one format. I do think a string -- not Python data structures -- is the right way to do this configuration. Right now the dbapi doesn't define any consistent signature for the connect() function (and in practice the signatures are not at all consistent with each other). There are two parts to this -- first, given a string there needs to be a way to find the connection factory; then, the connection factory needs to accept a string. I'd like it if extra arguments could also be parsed out, so for instance logging could be indicated through the connection string (or other conveniences). This just means that the string should be reasonably extensible in some way, not that the connection factory has to handle any of these extended bits of information. > This is different at the application level, since an > application will typically only support a handful of > backends.
In this case a simple URI would suffice, > since all the other details would be added to the > connection parameters at a lower level of the application, > e.g. in the database abstraction layer. I don't see how this matters; the supported backends change over time, and consistency among applications where backends overlap is still good. Applications can have their support extended by external libraries, if there is a reasonable way to do this. Right now there usually isn't. >> * Given a database connection there should be a well-documented strategy >> for discovering the type of the connection (including the server's >> version) and loading up code specific to that module and some interface. > > In mxODBC we have these connection attributes: > > .dbms_name > String identifying the database manager system. > .dbms_version > String identifying the database manager system version. > > .driver_name > String identifying the ODBC driver. > .driver_version > String identifying the ODBC driver version. > > They have proven to be quite useful, esp. when it comes to > coding against specific backends. Yes, this would be useful. >> This allows compatibility code to be developed separately from the >> database connection modules, and separately from consumers like ORMs. >> This would also give a clear place to build database introspection >> tools, without tying that work to the release schedules or development >> process of the current database modules. Realistically those existing >> modules are developed fairly slowly and conservatively, and require >> skill in things like writing C extensions. Compatibility layers have >> none of these qualities. >> >> * Unified exceptions. This can be done currently with monkeypatching >> the inheritance hierarchy, but they aren't unified to anything in >> particular that you can rely on. > > We already have a standard way for this: all exceptions should be > exposed on the connection object as attributes.
> > This makes writing polymorphic code easy. Currently you have to know the connection in order to catch the exception. You cannot catch an exception when the connection is not directly exposed to your code. There are lots of good reasons you might want to catch an exception when you don't have a handle on the connection that might raise it. >> * Figure out a strategy for parameter styles. Maybe this just means a >> reasonable way to handle SQL in a more abstract way than as strings with >> markers (that way no update is required to the dbapi). Or maybe >> something more sophisticated. Or we could even be lazy and use >> %s/pyformat, which is the only marker that can be easily translated to >> other markers. > > See past discussions: pyformat is probably the worst of all > parameter styles. Yes, it's annoying. It's also the only format with decent implementations. Everyone else thinks parsing SQL is easy. And maybe it can be done, but of course it is not easy. > Note that any change in this respect will break *a lot* of existing > and working code. If it's an abstraction on top of the current dbapi then it isn't a problem. I have no desire to break existing code, and would rather avoid changing any existing methods defined through the dbapi. If there must be backward incompatibilities, we should just define new method names. > Perhaps we ought to make the parameter style a connection > parameter that's writeable and then agree on a very limited > set of required styles - perhaps just the qmark and the > numeric style since these are easy to implement and can be > mapped to all other styles. You have to know something about the underlying SQL to do this. For instance: "UPDATE foo SET x = 'bob\'s your uncle?'" On some databases this is valid, and on some not, and it affects whether the ? is a marker or not. With a bit of thought I could come up with SQL that would be valid on both kinds of databases, but with different parses.
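The quoting ambiguity described just above is exactly what any marker scanner has to be parameterized on. A toy sketch (single-quote handling only, invented for illustration) that finds qmark positions outside string literals, under either escaping convention:

```python
def find_qmarks(sql, backslash_escapes=False):
    """Return the indexes of '?' characters outside single-quoted
    literals.  Whether a backslash escapes a quote inside a string
    depends on the backend -- the ambiguity described above."""
    positions = []
    in_string = False
    i = 0
    while i < len(sql):
        c = sql[i]
        if in_string:
            if backslash_escapes and c == "\\":
                i += 2          # skip the escaped character
                continue
            if c == "'":
                # Standard SQL escapes a quote by doubling it.
                if i + 1 < len(sql) and sql[i + 1] == "'":
                    i += 2
                    continue
                in_string = False
        elif c == "'":
            in_string = True
        elif c == "?":
            positions.append(i)
        i += 1
    return positions

sql = r"UPDATE foo SET x = 'bob\'s your uncle?'"
print(find_qmarks(sql, backslash_escapes=True))   # backslash escapes: the ? stays inside the string
print(find_qmarks(sql, backslash_escapes=False))  # standard SQL: the string ends early, the ? is a marker
```

Comments and dollar-quoting would need the same treatment, which is why this really does amount to a small lexer per SQL dialect.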
>> * Maybe some work on database connection pooling strategies. Maybe this >> can just be library code. I think we need a little more data on the >> threading restrictions than dbapi gives (sqlite in particular doesn't >> fit into any of the current levels). > > Connection pooling is something which higher level interfaces > have to manage and provide, since database drivers can't > possibly know which connections to pool and in what way to suit > the application needs. > > Note that simply placing connections into a dictionary > is not good enough, since each connection keeps state and > thus may not be reusable. Yes; mostly there's just some more information we need to implement this separately. Specifically whether connections can move between threads; sqlite is picky about this, and dbapi seems vague about it. Also, sqlite :memory: databases can't have more than one connection, so you just can't pool connections. There's no way to detect this from the outside. >> From there I see some other useful database standards: >> >> * A standard transaction container. The Zope transaction container is a >> reasonable and straight-forward implementation (though it needs to be >> better extracted from Zope). > > I don't understand this one. > > The industry standards for transaction management are > * X/Open DTP (XA) > * MS DTC > > See http://docs.openlinksw.com/mt/xamt.html for a good > overview of how XA works. The MS DTC is described here: > http://msdn2.microsoft.com/en-US/library/ms191440.aspx > > Both interfaces are C level interfaces. Ugh. Well, that's not really a Python standard. Here's Zope's: http://svn.zope.org/ZODB/trunk/src/transaction/ Probably the interface is the best description: http://svn.zope.org/ZODB/trunk/src/transaction/interfaces.py?rev=70066&view=markup I assume a bridge from that interface to either of the transaction managers you give would be possible; since those are C-level *some* bridge is inevitable anyway.
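Returning to the pooling point above: the mechanical part is small to sketch. The class and its rollback-on-return policy are hypothetical, and it deliberately sidesteps both the per-connection-state caveat and the thread-affinity question just discussed:

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool sketch.  Connections are rolled back
    on return so that no half-open transaction state leaks to the
    next user; whether connections may cross threads at all depends
    on the driver, which is exactly the missing metadata noted above."""

    def __init__(self, connect, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self, timeout=None):
        # Blocks until a connection is free.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        conn.rollback()          # discard any in-progress transaction
        self._pool.put(conn)
```

Usage would be something like `pool = ConnectionPool(lambda: sqlite3.connect("app.db"))`, though, as noted, sqlite connections resist being shared across threads, and a :memory: database cannot be pooled at all.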
>> * A standard way to retrieve database configuration and connections. >> This way database library/framework code can be written in a reasonably >> abstract way without worrying about deployment concerns. > > Not sure what you mean here. Given some database code, a strategy to fetch the "current" database connection or connection factory, or something like that. The particular motivation here is that given this and some of the other pieces, web frameworks could just support "databases" and wouldn't require any specific code related to any one database wrapper/library/ORM. And the story would be relatively consistent across environments -- not just web frameworks, but potentially any database-consuming system. More concretely, something like: conn = get_database_connection('myapp') Where somewhere else you configure a specific database for 'myapp'. The name 'myapp' could be used to connect to multiple databases at the same time, by using different names for potentially different database connections; maybe using hierarchical dotted names kind of like the logging module does for logger names. This is a convention built on dbapi, not something dbapi would be involved in itself. -- Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org From mfrasca at zonnet.nl Mon Nov 20 16:40:42 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Mon, 20 Nov 2006 16:40:42 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <455CB510.2010802@colorstudy.com> References: <455CB510.2010802@colorstudy.com> Message-ID: <20061120154042.GA18141@localhost.localdomain> On 2006-11-16 12:59:28, Ian Bicking wrote: > The major things I think we can standardize: may I propose one more point... what about using the standard logging module (if available) to unify the logging style? I would be happy if a db-api2 module would...
* expose a logger named as the module, * set its verbosity to logging.WARNING (or ERROR), * log relevant errors, warnings and debug info at the proper level, * log SQL commands at INFO level (maybe long parameter lists as DEBUG?) a program loading the module may choose to modify the logging level of the module and get all information generated by the module into the handlers defined by the program. what do you think about it? thanks, regards, Mario Frasca -- "Le soldat et le prêtre, ce sont les pires ennemis de l'humanité, car si le soldat tue, le prêtre ment." (Victor Hugo) From ianb at colorstudy.com Mon Nov 20 19:25:46 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Mon, 20 Nov 2006 12:25:46 -0600 Subject: [DB-SIG] Extending DB-API In-Reply-To: <20061120154042.GA18141@localhost.localdomain> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> Message-ID: <4561F32A.2000500@colorstudy.com> Mario Frasca wrote: > On 2006-11-16 12:59:28, Ian Bicking wrote: >> The major things I think we can standardize: > > may I propose one more point... > > what about using the standard logging module (if available) to unify > the logging style? I would be happy if a db-api2 module would... > > * expose a logger named as the module, > * set its verbosity to logging.WARNING (or ERROR), > * log relevant errors, warnings and debug info at the proper level, > * log SQL commands at INFO level (maybe long parameter lists as DEBUG?) > > a program loading the module may choose to modify the logging level > of the module and get all information generated by the module into the > handlers defined by the program. > > what do you think about it? It's possible to resolve this through a wrapper around the connection. For instance: http://svn.sqlobject.org/sqlapi/trunk/sqlapi/connect/wrapper.py I suppose the problems with multiple exception hierarchies could also be resolved this way.
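The wrapper approach pointed to above is compact enough to sketch inline: a hypothetical cursor proxy (logger name and class invented for illustration) that implements the proposed logging policy from outside the driver.

```python
import logging

log = logging.getLogger("dbapi.wrapper")   # hypothetical logger name

class LoggingCursor:
    """Proxy that logs statements at INFO and parameters at DEBUG,
    delegating everything else to the wrapped DB-API cursor."""

    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, operation, parameters=None):
        log.info("SQL: %s", operation)
        if parameters is None:
            return self._cursor.execute(operation)
        log.debug("parameters: %r", parameters)
        return self._cursor.execute(operation, parameters)

    def __getattr__(self, name):
        # fetchone, fetchall, description, etc. pass straight through.
        return getattr(self._cursor, name)
```

With `cur = LoggingCursor(conn.cursor())` the wrapped cursor is used exactly as before; the client controls visibility by configuring the "dbapi.wrapper" logger.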
-- Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org From mfrasca at zonnet.nl Tue Nov 21 16:09:45 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Tue, 21 Nov 2006 16:09:45 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <4561F32A.2000500@colorstudy.com> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> Message-ID: <20061121150945.GA10943@localhost.localdomain> Ian Bicking wrote: > It's possible to resolve this through a wrapper around the connection. > For instance: > http://svn.sqlobject.org/sqlapi/trunk/sqlapi/connect/wrapper.py well, I know that this is possible, since I'm already doing something similar in my software, but it would be nice (I mean, I think it would) if the logging policy were also stated in the db-api2++ > I suppose the problems with multiple exception hierarchies could also > be resolved this way. well, but then wouldn't many of us be doing the same extra work around the missing agreements? wouldn't it be nicer for all to acknowledge that the db-api2 is lacking on some points and fill these in? have we got a wiki place where we can work on a next version of the document? I feel that this would help a lot to keep the discussion focused... best regards, Mario Frasca. From mal at egenix.com Tue Nov 21 16:23:26 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 21 Nov 2006 16:23:26 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <20061121150945.GA10943@localhost.localdomain> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> Message-ID: <456319EE.90706@egenix.com> Mario Frasca wrote: > Ian Bicking wrote: > >> It's possible to resolve this through a wrapper around the connection.
>> For instance: >> http://svn.sqlobject.org/sqlapi/trunk/sqlapi/connect/wrapper.py > > well, I know that this is possible, since I'm already doing something > similar in my software, but it would be nice (I mean, I think it would) > if also the logging policy would be stated in the db-api2++ I'm not really sure what logging has to do with the DB-API. Could you explain ? Regards, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 21 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From mfrasca at zonnet.nl Tue Nov 21 16:46:19 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Tue, 21 Nov 2006 16:46:19 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <456319EE.90706@egenix.com> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> Message-ID: <20061121154619.GA11049@localhost.localdomain> On 2006-1121 16:23:26, M.-A. Lemburg wrote: > Mario Frasca wrote: > > [...] it would be nice (I mean, I think it would) > > if also the logging policy would be stated in the db-api2++ > > I'm not really sure what logging has to do with the DB-API. > > Could you explain ? I can try... as I see it, logging has to do with every module, so since there is a standard logging module, my feeling is that there could also be a standard logging policy... a client (a program) using modules could take advantage of the fact that modules log all kinds of information in a standardized way... (once they do so, I mean) the program would then decide whether to handle the messages or not.
[[ about performance: according to the documentation of the logging module, a logging call to a logger set to a higher logging level (a DEBUG message to a logger set to CRITICAL) is discarded immediately after a level comparison ]] so if I want to know what a (db-api2) module is doing (and possibly how), I would do this:

import logging
import MySQLdb
logging.getLogger('MySQLdb').setLevel(logging.INFO)

well, since the above would not cause errors even if the module does not define the logger (but it also would not have any effect), we could maybe agree that *if* the module developers want to provide logging information, they should do so using the logging module, naming the module root logger after the module... and that per default its level should be set to CRITICAL (or ERROR)... and then state at which level to log some of the more relevant information, leaving lower logging levels free to the developers... I know that this may sound too strict for many, but after all it would serve to save a lot of time to many more... what about the wiki space? is there any already available? regards, MF -- An honest politician is one who stays bought. From mal at egenix.com Tue Nov 21 21:41:55 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 21 Nov 2006 21:41:55 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <20061121154619.GA11049@localhost.localdomain> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> Message-ID: <45636493.3020009@egenix.com> Mario Frasca wrote: > On 2006-1121 16:23:26, M.-A. Lemburg wrote: >> Mario Frasca wrote: >>> [...] it would be nice (I mean, I think it would) >>> if also the logging policy would be stated in the db-api2++ >> I'm not really sure what logging has to do with the DB-API. >> >> Could you explain ? > > I can try... > > as I see it, logging has to do with every module, so since there is a > standard logging module, my feeling is that there could be also a > standard logging policy... > > a client (a program) using modules could take advance of the fact that > modules log in a standardized way all kind of information... (once they do > so, I mean) the program would then decide whether to handle the messages > or not.
[[ about performance: according to the documentation of the > logging module, a logging call to a logger set to a higher logging level > (a DEBUG message to a logger set to CRITICAL) is discarded immediately > after a level comparison ]] > > so if I want to know what a (db-api2) module is doing (and possibly how), > I would do this: > > import logging > import MySQLdb > logging.getLogger('MySQLdb').setLevel(logging.INFO) > > well, since the above would not cause errors even if the module does not > define the logger (but it also would not have any effect), we could maybe > agree that *if* the module developers want to provide logging information, > they should do so using the logging module and calling the module root > logger as the name of the module... and that per default its level should > be set to CRITICAL (or ERROR)... and then state at which level to log > some of the more relevant information, leaving lower logging levels free > to the developers... I know that this may sound too strict for many, > but after all it would serve to save a lot of time to many more... Whether or not module authors use the logging module should really be up to them and not be required by the DB-API. Note that many database modules are written as C extensions and this makes it hard for them to use the logging module as there is no C API for it which could be used, AFAIK. > what about the wiki space? is there any already available? There's a page in the wiki already: http://wiki.python.org/moin/DbApi3 It lists some of the things that were discussed on this mailing list. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 21 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From Chris.Clark at ingres.com Wed Nov 22 18:43:00 2006 From: Chris.Clark at ingres.com (Chris Clark) Date: Wed, 22 Nov 2006 09:43:00 -0800 Subject: [DB-SIG] Extending DB-API In-Reply-To: <45636493.3020009@egenix.com> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> <45636493.3020009@egenix.com> Message-ID: <45648C24.2060603@ingres.com> M.-A. Lemburg wrote: >Mario Frasca wrote: > > >>On 2006-1121 16:23:26, M.-A. Lemburg wrote: >> >> >>>Mario Frasca wrote: >>> >>> >>>>[...] it would be nice (I mean, I think it would) >>>>if also the logging policy would be stated in the db-api2++ >>>> >>>> >>>I'm not really sure what logging has to do with the DB-API. >>> >>>Could you explain ? >>> >>> >>[....] >>as I see it, logging has to do with every module, so since there is a >>standard logging module, my feeling is that there could be also a >>standard logging policy... >> >> >Whether or not module authors use the logging module should >really be up to them and not be required by the DB-API. > >Note that many database modules are written as C extensions >and this makes it hard for them to use the logging module >as there is no C API for it which could be used, AFAIK. > > > My 2 cents. I agree that logging isn't something that _must_ be part of a driver but logging is extremely useful. I think we are looking at another case where we need a higher level manager (along the lines of the wrapper example) something akin to the ODBC driver manager. 
In ODBC an ODBC driver is (almost) completely useless without the driver manager; under Windows this is part of the OS, under Unix you use something like unixodbc.org (openlink.com, etc.). The manager object then performs logging (of course this doesn't prevent each driver having a completely different set of logging options too). There have been a number of suggestions recently, such as the URI for connect strings that would be perfect for a higher level module that could be part of the *DBI* api but not part of the _driver_ API. Currently PEP 249 is just for the driver API (and that is and has been extremely useful). I would love to see the next spec include an api (and implementation) for a driver manager that makes life easier for application developers BUT existing apps should still be able to call the driver directly and avoid the potential overhead. The driver manager (I'm sticking with ODBC terms here for simplicity) is essentially a Decorator pattern. There are existing modules around that could be looked at so that we don't design from scratch. e.g. http://sourceforge.net/projects/pythondbo/ has already got a working URI approach (note I've not used it but the docs are promising). pythondbo also has some code for attempting to deal with the different param styles. Any comments? Chris From dieter at handshake.de Wed Nov 22 19:57:07 2006 From: dieter at handshake.de (Dieter Maurer) Date: Wed, 22 Nov 2006 19:57:07 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <20061121154619.GA11049@localhost.localdomain> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> Message-ID: <17764.40323.871755.26697@gargle.gargle.HOWL> Mario Frasca wrote at 2006-11-21 16:46 +0100: > ... 
>as I see it, logging has to do with every module, so since there is a >standard logging module, my feeling is that there could be also a >standard logging policy... I do not think so... Usually, I do not want to see logs of database operations (as they may contain sensitive information) *BUT* if I am analysing problems with database interaction, I want such operations logged. Whether or not I want to see logs of these operations is independent from the logging policy I like for other modules. >a client (a program) using modules could take advance of the fact that >modules log in a standardized way all kind of information... (once they do >so, I mean) the program would then decide whether to handle the messages >or not. [[ about performance: according to the documentation of the >logging module, a logging call to a logger set to a higher logging level >(a DEBUG message to a logger set to CRITICAL) is discarded immediately >after a level comparison ]] I have seen discarded logging generate a quadratic runtime behavior: This occurred as follows: The information could be very large. To limit the amount of logging, a "limited_repr" was used. This "limited_repr" had the quadratic runtime (for some data types). As the "limited_repr" was used in the log parameter, the price was already paid before the log record was discarded. Logging database operations can also involve huge data...
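[Editor's note: Dieter's caveat is that a log call's arguments are evaluated before the logger can discard the record. With the standard logging module, the usual workaround is to pass a cheap lazy object whose `__str__` does the expensive work, so the cost is only paid when a record is actually emitted. A sketch (logger name and `LazyRepr` class invented for illustration):]

```python
import logging

logger = logging.getLogger("db.lazy_demo")
logger.setLevel(logging.ERROR)  # DEBUG records below will be discarded

class LazyRepr:
    """Defer a potentially expensive repr until a record is emitted."""
    def __init__(self, obj):
        self.obj = obj
        self.evaluated = False
    def __str__(self):
        # Runs only during record formatting, i.e. after the level check.
        self.evaluated = True
        return repr(self.obj)[:200]

rows = list(range(10000))

# Eager: repr(rows) is computed here, before the record is discarded.
logger.debug("fetched rows: %s", repr(rows))

# Lazy: building the wrapper is cheap; its __str__ never runs for a
# record that fails the level check.
logger.debug("fetched rows: %s", LazyRepr(rows))
```

This avoids the eager-argument trap for discarded records, although (as Dieter notes) it relies on the programmer remembering to defer the expensive computation in the first place.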
-- Dieter From mfrasca at zonnet.nl Wed Nov 22 22:26:54 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Wed, 22 Nov 2006 22:26:54 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <17764.40323.871755.26697@gargle.gargle.HOWL> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> <17764.40323.871755.26697@gargle.gargle.HOWL> Message-ID: <20061122212654.GA23890@localhost.localdomain> On 2006-1122 19:57:07, Dieter Maurer wrote: > Usually, I do not want to see logs of database operations > (as they may contain sensible information) *BUT* if I am > analysing problems with database interaction, I want such > operations logged. the standard logging module solves this quite nicely. a logger placed at logging.ERROR level will not log anything except errors... (we could prescribe that data is not logged above the logging.INFO level...) on the other hand, if a module does not do logging, either you add it yourself or you don't have it. (duh!) > Whether or not I want to see logs of these operations is independent > from the logging policy I like for other modules. I've been using the logging module recently extensively and it answers exactly this kind of problems, this is why I was suggesting to include in the db-api3 some reference to HOW to log things IF the module wants to log things... > I have seen discarded logging generate a quadratic runtime behavior: > > This occured as follows: [...] funny. but this is not a problem here, since discarding a logging call is done just based on the 'level' of the logger and the 'level' of the message. if the message level is below the logger level, then the logger returns immediately. as stated before, I feel that logging is an important part of a piece of software. we have a standard logging module. 
we are writing a set of directives for writing modules. these directives could prescribe the way to make use of the standard logging module. or not, then each of us will need to build a layer around the modules used, in order to get things the way he needs... which is what we're already doing... regards, Mario. From dieter at handshake.de Thu Nov 23 20:05:55 2006 From: dieter at handshake.de (Dieter Maurer) Date: Thu, 23 Nov 2006 20:05:55 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <20061122212654.GA23890@localhost.localdomain> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> <17764.40323.871755.26697@gargle.gargle.HOWL> <20061122212654.GA23890@localhost.localdomain> Message-ID: <17765.61715.412515.819919@gargle.gargle.HOWL> Mario Frasca wrote at 2006-11-22 22:26 +0100: > .... >> I have seen discarded logging generate a quadratic runtime behavior: >> >> This occured as follows: [...] > >funny. but this is not a problem here, since discarding a logging call >is done just based on the 'level' of the logger and the 'level' of the >message. if the message level is below the logger level, then the >logger returns immediately. You have not read carefully enough: The logger used (while not the Python logger) had the same behaviour -- it discarded messages based on the log level. *BUT* the quadratic runtime went into determining the parameters for the logging call -- spent before the logger could discard the entry. >as stated before, I feel that logging is an important part of a piece >of software. we have a standard logging module. we are writing a set >of directives for writing modules. these directives could prescribe >the way to make use of the standard logging module. I do not feel like you.
I feel more inclined toward aspect-oriented principles: Logging is an aspect, highly application dependent, relevant across module boundaries. It should *NOT* be embedded into the low level modules but attached via aspects (if aspect orientation is supported) or be provided by high level wrappers. > or not, then each >of us will need to build a layer around the modules used, in order to >get things the way he needs... which is what we're already doing... When I read this I get the impression that this is difficult. But, in fact, it is trivial. Here is a wrapper we use to provide standard handling of a few exceptions (to be used with Zope). An even simpler wrapper could handle logging:

class _PostgresAccessCursor(object):
    _cursor = None

    def __init__(self, da):
        C = self._conn = da()
        C._register()
        self._cursor = C._cursor()

    def wrap(f):
        def wrapped(self, *args, **kw):
            try:
                return f(self, *args, **kw)
            except (psycopg.ProgrammingError, psycopg.IntegrityError), perr:
                if 'concurrent update' in perr.args[0]:
                    raise PostgresTransactionalError('Postgres conflict', perr)
                raise
            except (psycopg.OperationalError, psycopg.InterfaceError), perr:
                C = self._conn
                C.close(); C.connect(C.connection)
                raise PostgresTransactionalError('Postgres operational error', perr)
        return wrapped

    @wrap
    def execute(self, *args, **kw):
        return self._cursor.execute(*args, **kw)

    @wrap
    def executemany(self, *args, **kw):
        return self._cursor.executemany(*args, **kw)

    def __getattr__(self, key):
        return getattr(self._cursor, key)

    del wrap

-- Dieter From mfrasca at zonnet.nl Fri Nov 24 09:10:47 2006 From: mfrasca at zonnet.nl (Mario Frasca) Date: Fri, 24 Nov 2006 09:10:47 +0100 Subject: [DB-SIG] Extending DB-API In-Reply-To: <17765.61715.412515.819919@gargle.gargle.HOWL> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain>
<17764.40323.871755.26697@gargle.gargle.HOWL> <20061122212654.GA23890@localhost.localdomain> <17765.61715.412515.819919@gargle.gargle.HOWL> Message-ID: <20061124081047.GA2088@localhost.localdomain> On 2006-1123 20:05:55, Dieter Maurer wrote: > You have not read carefully enough: > > The logger used (while not the Python logger) has had the same > behaviour -- it discarded messages based on the log level. > *BUT* the quadadric runtime went into determining the > parameters for the logging call -- spent before the logger > could discard the entry. you're right, I didn't read it carefully enough... well, a programmer knowing he's not in a lazy environment should take care of such issues, don't you think so? > When I read this I get the impression that this were difficult. > But, it fact, it is trivial. ((well, it's more work than no work at all)) > Here is a wrapper, we use to provide standard handling of [...] thanks, I'll surely follow the hint. I like reading interesting code. MF -- Linux - It is now safe to turn on your computer. From ianb at colorstudy.com Sun Nov 26 22:36:35 2006 From: ianb at colorstudy.com (Ian Bicking) Date: Sun, 26 Nov 2006 15:36:35 -0600 Subject: [DB-SIG] Extending DB-API In-Reply-To: <45648C24.2060603@ingres.com> References: <455CB510.2010802@colorstudy.com> <20061120154042.GA18141@localhost.localdomain> <4561F32A.2000500@colorstudy.com> <20061121150945.GA10943@localhost.localdomain> <456319EE.90706@egenix.com> <20061121154619.GA11049@localhost.localdomain> <45636493.3020009@egenix.com> <45648C24.2060603@ingres.com> Message-ID: <456A08E3.70305@colorstudy.com> Chris Clark wrote: > My 2 cents. I agree that logging isn't something that _must_ be part of > a driver but logging is extremely useful. I think we are looking at > another case where we need a higher level manager (along the lines of > the wrapper example) something akin to the ODBC driver manager. 
In ODBC > an ODBC driver is (almost) completely useless without the driver > manager; under Windows this is part of the OS, under Unix you use > something like unixodbc.org (openlink.com, etc.). The manager object > then performs logging (of course this doesn't prevent each driver having > a completely different set of logging options too). > > There have been a number of suggestions recently, such as the URI for > connect strings that would be perfect for a higher level module that > could be part of the *DBI* api but not part of the _driver_ API. > Currently PEP 249 is just for the driver API (and that is and has been > extremely useful). I would love to see the next spec include an api (and > implementation) for a driver manager that makes life easier for > application developers BUT existing apps should still be able to call > the driver directly and avoid the potential overhead. The driver manager > (I'm sticking with ODBC terms here for simplicity) is essentially a > Decorator pattern. There are existing modules around that could be > looked at so that we don't design from scratch. e.g. > http://sourceforge.net/projects/pythondbo/ has already got a working URI > approach (note I've not used it but the docs are promising). pythondbo > also has some code for attempting to deal with the different param styles. > > Any comments? Yes, this is the sort of thing I was thinking about. We don't really need to place more burdens on drivers, or require all the drivers to be upgraded -- drivers just need to provide a *little* more information. (And even in some cases probably can be wrapped for that information -- for instance, to get the remote version number of a database often it's just a SQL query.) I also think some things are well outside the possible scope of what can be done here. For instance, standardizing database metadata access. 
It's mostly bound to the details of the remote database, has little relation to the driver specifically, and needs a lot more maintenance and work and maybe is accessible via different APIs. One could argue that even integrating the different exceptions could be part of this, as databases (and/or drivers) aren't terribly consistent about what kind of exceptions they throw. I doubt they could be made consistent without parsing the text portion of the exceptions. Incidentally, I tried to pull together a few of these things in sqlapi: http://sqlobject.org/sqlapi/ -- but I don't really have the time to push that forward, and I think its scope is a little too large (e.g., the SQL abstraction layer). -- Ian Bicking | ianb at colorstudy.com | http://blog.ianbicking.org From royhobbsx42 at yahoo.com Mon Nov 27 22:23:17 2006 From: royhobbsx42 at yahoo.com (Christopher Eckman) Date: Mon, 27 Nov 2006 13:23:17 -0800 (PST) Subject: [DB-SIG] Question on odbc with cross apply and for xml... Message-ID: <20061127212317.65924.qmail@web58006.mail.re3.yahoo.com> Hi all, I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): name company uis_access_control uis_tp_ticketpassing operators UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman using the SQL Server functionality cross apply and for xml.
Sample select is below:

select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager",
substring(memberList, 1, datalength(memberList)/2 - 1) "operators" -- strip the last ',' from the list
from ctm_support_group sg cross apply
(select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()]
from tsmi_support_group_members sgm
where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%'
order by support_group_name
for xml path('')) as Dummy(memberList)
go

The problem is that when I call this via dbi and odbc it will always put 'None' for operators, even though if I do this in TOAD or MS Query it will pull the correct values. I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical

cursor.execute(sample_query)
for row in cursor.fetchall()...

Any help would be appreciated. Thanks, --Chris From mal at egenix.com Mon Nov 27 22:42:06 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Mon, 27 Nov 2006 22:42:06 +0100 Subject: [DB-SIG] Question on odbc with cross apply and for xml... In-Reply-To: <20061127212317.65924.qmail@web58006.mail.re3.yahoo.com> References: <20061127212317.65924.qmail@web58006.mail.re3.yahoo.com> Message-ID: <456B5BAE.3070607@egenix.com> Christopher Eckman wrote: > Hi all, > > I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): > name company uis_access_control uis_tp_ticketpassing operators > UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman > > using the SQL Server functionality cross apply and for xml.
Sample select is below: > > select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", > substring(memberList, 1, datalength(memberList)/2 - 1) "operators" > -- strip the last ',' from the list > from > ctm_support_group sg cross apply > (select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()] > from tsmi_support_group_members sgm > where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%' > order by support_group_name > for xml path('')) as Dummy(memberList) > go > > The problem is when I call this via dbi and odbc it will always put 'None' for operators even though if I do this in TOAD or MS Query it will pull the correct values? I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical > > cursor.execute(sample_query) > for row in cursor.fetchall()... > > Any help would be appreciated. You could try this with mxODBC to see whether it's a problem related to the ODBC driver or not. Note that string processing such as what you are applying to the "operators" is much better done in Python than at the SQL level. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 27 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From royhobbsx42 at yahoo.com Tue Nov 28 04:53:11 2006 From: royhobbsx42 at yahoo.com (Christopher Eckman) Date: Mon, 27 Nov 2006 19:53:11 -0800 (PST) Subject: [DB-SIG] Question on odbc with cross apply and for xml... 
Message-ID: <20061128035311.36216.qmail@web58007.mail.re3.yahoo.com> Hi Marc-Andre, Thank you very much for the suggestion. I tried mxODBC and it behaved in a similar manner to the plain odbc module. I don't think the ODBC driver itself is the problem though, as if I run Microsoft Query, select that exact same DSN and execute the query it will give the expected results (concatenates the operators into the operator field). The main reason I tried to do this in SQL is that I have a number of queries in a report dictionary. It gets the query associated with a given report, runs it and makes a .csv out of them. I was trying to avoid putting in special handlers for any of the reports (all the others work without me doing any query specific handling). At the time I did not know this query would prove to be so difficult to handle. The secondary reason is that I am the only person that's familiar with Python here on this gig. Most of the people on my team are pretty decent with SQL. Thanks, --Chris ----- Original Message ---- From: M.-A. Lemburg To: Christopher Eckman Cc: db-sig at python.org Sent: Monday, November 27, 2006 4:42:06 PM Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... Christopher Eckman wrote: > Hi all, > > I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): > name company uis_access_control uis_tp_ticketpassing operators > UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman > > using the SQL Server functionality cross apply and for xml.
Sample select is below: > > select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", > substring(memberList, 1, datalength(memberList)/2 - 1) "operators" > -- strip the last ',' from the list > from > ctm_support_group sg cross apply > (select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()] > from tsmi_support_group_members sgm > where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%' > order by support_group_name > for xml path('')) as Dummy(memberList) > go > > The problem is when I call this via dbi and odbc it will always put 'None' for operators even though if I do this in TOAD or MS Query it will pull the correct values? I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical > > cursor.execute(sample_query) > for row in cursor.fetchall()... > > Any help would be appreciated. You could try this with mxODBC to see whether it's a problem related to the ODBC driver or not. Note that string processing such as what you are applying to the "operators" is much better done in Python than at the SQL level. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 27 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From jimmy.briggs at gmail.com Tue Nov 28 05:56:37 2006 From: jimmy.briggs at gmail.com (James Briggs) Date: Tue, 28 Nov 2006 15:56:37 +1100 Subject: [DB-SIG] Question on odbc with cross apply and for xml... 
In-Reply-To: <20061127212317.65924.qmail@web58006.mail.re3.yahoo.com> References: <20061127212317.65924.qmail@web58006.mail.re3.yahoo.com> Message-ID: <23b1b67f0611272056n4c9200d6na97a731c9d30abb@mail.gmail.com> I have come across similar problems with executing complicated SQL. One solution could be to place this in a view; as sg.support_group_name and sg.company are in the select list, these can be moved to the where clause of the select on the view. e.g. select * from ctm_support_group_view where company = 'UNINA' and support_group_name like 'UNI-NA%' This isn't a Python solution, and of course I don't know how dynamic you want these selects to be or whether you would need to create these views on the fly; maintaining the views also creates more work for your DBAs ... James On 11/28/06, Christopher Eckman wrote: > > Hi all, > > I am doing a select to concatenate a number of entries into a field like > this under 'operators' (sample header is the first line): > > name company uis_access_control uis_tp_ticketpassing operators > UNINA FOO unrestricted > No > uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman > > using the SQL Server functionality cross apply and for xml. 
Sample select > is below: > > select support_group_name "name", sg.Company "company", sg.f5"uis_access_control", > sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", > substring(memberList, 1, datalength(memberList)/2 - 1) > "operators" > -- strip the last ',' from the list > from > ctm_support_group sg cross apply > (select convert(nvarchar(60), sgm.support_group_member_name) + ',' > as [text()] > from tsmi_support_group_members sgm > where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = > 'UNINA' and sg.support_group_name like 'UNI-NA%' > order by support_group_name > for xml path('')) as Dummy(memberList) > go > > The problem is when I call this via dbi and odbc it will always put 'None' > for operators even though if I do this in TOAD or MS Query it will pull the > correct values? I tried to get around this by making this a stored > procedure but the behavior is the same. Is there something I am missing? I > am calling this with the typical > > cursor.execute(sample_query) > for row in cursor.fetchall()... > > Any help would be appreciated. > > Thanks, > > --Chris > > > > > > _______________________________________________ > DB-SIG maillist - DB-SIG at python.org > http://mail.python.org/mailman/listinfo/db-sig > From mal at egenix.com Tue Nov 28 10:23:36 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 28 Nov 2006 10:23:36 +0100 Subject: [DB-SIG] Question on odbc with cross apply and for xml... In-Reply-To: <20061128035311.36216.qmail@web58007.mail.re3.yahoo.com> References: <20061128035311.36216.qmail@web58007.mail.re3.yahoo.com> Message-ID: <456C0018.8030107@egenix.com> Christopher Eckman wrote: > Hi Marc-Andre, > > Thank you very much for the suggestion. I tried mxODBC and it behaved in a similiar manner as the plain odbc module. 
I don't think the ODBC driver itself is the problem itself though as if I run Microsoft Query, select that exact same DSN and execute the query it will give the expected results (concatenates the operators into the operator field). I'd have to see the log of a mxODBC debug build to comment on that. Note that ODBC has various ways of accessing data. It is possible that MS Query uses a different way of asking for the relevant data than mxODBC - one which doesn't trigger the problem in the driver. The None value is only returned if the driver sends the special SQL_NULL_DATA field length value, so something in the chain is setting this value explicitly. > The main reason I tried to do this in SQL is that I have a number of queries in a report dictionary. It gets the query associated to a given report, runs it and makes a .csv out of them. I was trying to avoid putting in special handlers for any of the reports (all the others work without me doing any query specific handling). At the time I did not know this query would prove to be so difficult to handle. The secondary reason is that I am the only person that familiar with Python here on this gig. Most all of the people on my team are pretty decent with SQL. Fair enough :-) BTW, what does substring() return if you pass it a -1 as third argument ? > Thanks, > > --Chris > > ----- Original Message ---- > From: M.-A. Lemburg > To: Christopher Eckman > Cc: db-sig at python.org > Sent: Monday, November 27, 2006 4:42:06 PM > Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... > > Christopher Eckman wrote: >> Hi all, >> >> I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): >> name company uis_access_control uis_tp_ticketpassing operators >> UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman >> >> using the SQL Server functionality cross apply and for xml. 
Sample select is below: >> >> select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", >> substring(memberList, 1, datalength(memberList)/2 - 1) "operators" >> -- strip the last ',' from the list >> from >> ctm_support_group sg cross apply >> (select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()] >> from tsmi_support_group_members sgm >> where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%' >> order by support_group_name >> for xml path('')) as Dummy(memberList) >> go >> >> The problem is when I call this via dbi and odbc it will always put 'None' for operators even though if I do this in TOAD or MS Query it will pull the correct values? I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical >> >> cursor.execute(sample_query) >> for row in cursor.fetchall()... >> >> Any help would be appreciated. > > You could try this with mxODBC to see whether it's a problem related to > the ODBC driver or not. > > Note that string processing such as what you are applying to the > "operators" is much better done in Python than at the SQL level. > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 28 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From royhobbsx42 at yahoo.com Thu Nov 30 00:44:39 2006 From: royhobbsx42 at yahoo.com (Christopher Eckman) Date: Wed, 29 Nov 2006 15:44:39 -0800 (PST) Subject: [DB-SIG] Question on odbc with cross apply and for xml... 
Message-ID: <20061129234439.2913.qmail@web58015.mail.re3.yahoo.com> Hello all, I was able to get around this problem. I noticed that the cursor description gave a very strange length value for "operators" (the consolidated field). It reported it was 'STRING' with the length being 1073741823. So, I took James's advice and made a view that does a convert on that field, making it an nvarchar with a much shorter length. I don't know why the driver reports it this way, but it does. Marc, I tried to find out what substring() does when you pass -1 as the last argument, but it threw errors every time I tried. Thanks for all the help and good advice, --Chris ----- Original Message ---- From: M.-A. Lemburg To: Christopher Eckman Cc: db-sig at python.org Sent: Tuesday, November 28, 2006 4:23:36 AM Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... Christopher Eckman wrote: > Hi Marc-Andre, > > Thank you very much for the suggestion. I tried mxODBC and it behaved in a similiar manner as the plain odbc module. I don't think the ODBC driver itself is the problem itself though as if I run Microsoft Query, select that exact same DSN and execute the query it will give the expected results (concatenates the operators into the operator field). I'd have to see the log of a mxODBC debug build to comment on that. Note that ODBC has various ways of accessing data. It is possible that MS Query uses a different way of asking for the relevant data than mxODBC - one which doesn't trigger the problem in the driver. The None value is only returned if the driver sends the special SQL_NULL_DATA field length value, so something in the chain is setting this value explicitly. > The main reason I tried to do this in SQL is that I have a number of queries in a report dictionary. It gets the query associated to a given report, runs it and makes a .csv out of them. 
I was trying to avoid putting in special handlers for any of the reports (all the others work without me doing any query specific handling). At the time I did not know this query would prove to be so difficult to handle. The secondary reason is that I am the only person that familiar with Python here on this gig. Most all of the people on my team are pretty decent with SQL. Fair enough :-) BTW, what does substring() return if you pass it a -1 as third argument ? > Thanks, > > --Chris > > ----- Original Message ---- > From: M.-A. Lemburg > To: Christopher Eckman > Cc: db-sig at python.org > Sent: Monday, November 27, 2006 4:42:06 PM > Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... > > Christopher Eckman wrote: >> Hi all, >> >> I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): >> name company uis_access_control uis_tp_ticketpassing operators >> UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman >> >> using the SQL Server functionality cross apply and for xml. Sample select is below: >> >> select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", >> substring(memberList, 1, datalength(memberList)/2 - 1) "operators" >> -- strip the last ',' from the list >> from >> ctm_support_group sg cross apply >> (select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()] >> from tsmi_support_group_members sgm >> where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%' >> order by support_group_name >> for xml path('')) as Dummy(memberList) >> go >> >> The problem is when I call this via dbi and odbc it will always put 'None' for operators even though if I do this in TOAD or MS Query it will pull the correct values? 
I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical >> >> cursor.execute(sample_query) >> for row in cursor.fetchall()... >> >> Any help would be appreciated. > > You could try this with mxODBC to see whether it's a problem related to > the ODBC driver or not. > > Note that string processing such as what you are applying to the > "operators" is much better done in Python than at the SQL level. > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 28 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From mal at egenix.com Thu Nov 30 01:08:09 2006 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 30 Nov 2006 01:08:09 +0100 Subject: [DB-SIG] Question on odbc with cross apply and for xml... In-Reply-To: <20061129234439.2913.qmail@web58015.mail.re3.yahoo.com> References: <20061129234439.2913.qmail@web58015.mail.re3.yahoo.com> Message-ID: <456E20E9.9050903@egenix.com> Christopher Eckman wrote: > Hello all, > > I was able to get around this problem. I noticed when I did a cursor description it gave me a very strange length value for "operators" (the consolidated field). It reported it was 'STRING' with the length being 1073741823. So, I took James advice and made a view and did a convert on that field and made it a nvarchar2 with a much shorter length. I don't know why it sees it this way but it did. That's an interesting value: 2**30 - 1. 
Note that the special SQL_NULL_DATA length value is -1. This does look like an ODBC driver bug to me... have you checked the MS KB regarding this behavior ? > Marc, I tried to get find out what it did when you passed substring -1 as the last argument but it would throw errors every time I did it. Thanks. I was just asking because this case will occur if your memberList is empty. > Thanks for all the help and good advice, Cheers, -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Nov 30 2006) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: > --Chris > > ----- Original Message ---- > From: M.-A. Lemburg > To: Christopher Eckman > Cc: db-sig at python.org > Sent: Tuesday, November 28, 2006 4:23:36 AM > Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... > > Christopher Eckman wrote: >> Hi Marc-Andre, >> >> Thank you very much for the suggestion. I tried mxODBC and it behaved in a similiar manner as the plain odbc module. I don't think the ODBC driver itself is the problem itself though as if I run Microsoft Query, select that exact same DSN and execute the query it will give the expected results (concatenates the operators into the operator field). > > I'd have to see the log of a mxODBC debug build to comment on that. > > Note that ODBC has various ways of accessing data. It is possible > that MS Query uses a different way of asking for the relevant data > than mxODBC - one which doesn't trigger the problem in the driver. > > The None value is only returned if the driver sends the special > SQL_NULL_DATA field length value, so something in the chain is > setting this value explicitly. 
> >> The main reason I tried to do this in SQL is that I have a number of queries in a report dictionary. It gets the query associated to a given report, runs it and makes a .csv out of them. I was trying to avoid putting in special handlers for any of the reports (all the others work without me doing any query specific handling). At the time I did not know this query would prove to be so difficult to handle. The secondary reason is that I am the only person that familiar with Python here on this gig. Most all of the people on my team are pretty decent with SQL. > > Fair enough :-) BTW, what does substring() return if you pass it > a -1 as third argument ? > >> Thanks, >> >> --Chris >> >> ----- Original Message ---- >> From: M.-A. Lemburg >> To: Christopher Eckman >> Cc: db-sig at python.org >> Sent: Monday, November 27, 2006 4:42:06 PM >> Subject: Re: [DB-SIG] Question on odbc with cross apply and for xml... >> >> Christopher Eckman wrote: >>> Hi all, >>> >>> I am doing a select to concatenate a number of entries into a field like this under 'operators' (sample header is the first line): >>> name company uis_access_control uis_tp_ticketpassing operators >>> UNINA FOO unrestricted No uni-catherine_srvage,uni-robert_woyzik,uni-susan_fooman >>> >>> using the SQL Server functionality cross apply and for xml. 
Sample select is below: >>> >>> select support_group_name "name", sg.Company "company", sg.f5 "uis_access_control", sg.f6 "uis_tp_ticketpassing", sg.REZ_Manager "manager", >>> substring(memberList, 1, datalength(memberList)/2 - 1) "operators" >>> -- strip the last ',' from the list >>> from >>> ctm_support_group sg cross apply >>> (select convert(nvarchar(60), sgm.support_group_member_name) + ',' as [text()] >>> from tsmi_support_group_members sgm >>> where sg.Support_Group_ID = sgm.Support_Group_ID and sg.Company = 'UNINA' and sg.support_group_name like 'UNI-NA%' >>> order by support_group_name >>> for xml path('')) as Dummy(memberList) >>> go >>> >>> The problem is when I call this via dbi and odbc it will always put 'None' for operators even though if I do this in TOAD or MS Query it will pull the correct values? I tried to get around this by making this a stored procedure but the behavior is the same. Is there something I am missing? I am calling this with the typical >>> >>> cursor.execute(sample_query) >>> for row in cursor.fetchall()... >>> >>> Any help would be appreciated. >> You could try this with mxODBC to see whether it's a problem related to >> the ODBC driver or not. >> >> Note that string processing such as what you are applying to the >> "operators" is much better done in Python than at the SQL level. >> >
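
[Editor's note] Marc-Andre's advice in the thread above - that the comma-joining done in SQL via cross apply / for xml is better done in Python - can be sketched as follows. This is a minimal, hypothetical example: it uses sqlite3 and an invented stand-in table so it is self-contained (the real tables live in SQL Server behind ODBC), and it builds the per-group operator list on the Python side with itertools.groupby and str.join instead of in the query.

```python
import sqlite3
from itertools import groupby
from operator import itemgetter

# Hypothetical stand-in for tsmi_support_group_members; sqlite3 replaces
# the real SQL Server / ODBC connection so the sketch is runnable as-is.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE support_group_members (
        support_group_name TEXT,
        member_name TEXT
    );
    INSERT INTO support_group_members VALUES
        ('UNI-NA-1', 'uni-catherine_srvage'),
        ('UNI-NA-1', 'uni-robert_woyzik'),
        ('UNI-NA-2', 'uni-susan_fooman');
""")

cur = conn.cursor()
# Keep the query a plain join-free select; ORDER BY is required because
# itertools.groupby only groups adjacent rows.
cur.execute("""
    SELECT support_group_name, member_name
    FROM support_group_members
    ORDER BY support_group_name, member_name
""")

# Concatenate the member names per group in Python instead of in SQL,
# so no trailing-comma stripping (and no substring(-1) edge case) is needed.
operators = {
    group: ",".join(member for _, member in rows)
    for group, rows in groupby(cur.fetchall(), key=itemgetter(0))
}
print(operators)
# -> {'UNI-NA-1': 'uni-catherine_srvage,uni-robert_woyzik',
#     'UNI-NA-2': 'uni-susan_fooman'}
```

This sidesteps both problems from the thread: the driver never sees the for-xml pseudo-column with its bogus 1073741823 length, and the empty-group case simply produces no dictionary entry rather than a substring() error.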