From olli.rajala at gmail.com Fri Jun 3 19:29:13 2005 From: olli.rajala at gmail.com (Olli Rajala) Date: Fri, 3 Jun 2005 20:29:13 +0300 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: References: Message-ID: I sent this to python-tutor mailing list, but they suggested me to ask here. So here I am with my quite basic/newbie question. Hope that it doesn't matter too much. :) I've made a little cgi program with Python and Mysql, but would like to change MySQL to Postgresql. There just seem not to be quite many tutorials about this combination. I've been able to gather some info, but much more would be necessary. I have quite big problem now, after I learned how to connect to the Postgresql database. With MySQL I can do like this: import MySQLdb def connectDB(): try: db = MySQLdb.connect(host='localhost', user='user', db='pictures', passwd='passwd') cursor = db.cursor() return cursor except: print 'Error' cursor = connectDB() cursor.execute('SELECT * FROM categories') print cursor.fetchall() And everything works as I thought. But with Postgre, it seems that the connection don't stay alive. I mean, with the same kind of code: from pyPgSQL import PgSQL def connectDB(): try: db = PgSQL.connect(host='localhost', database='pictures', user='user', password='passwd') return db.cursor() except: print "Error" cursor = connectDB() cursor.execute("SELECT * FROM categories") print cursor.fetchall() The result is: Traceback (most recent call last): File "test.py", line 23, in ? cursor.execute("SELECT * FROM categories") File "/usr/lib/python2.4/site-packages/pyPgSQL/PgSQL.py", line 2992, in execute raise InterfaceError, "execute failed - the cursor is closed." libpq.InterfaceError: execute failed - the cursor is closed. So, what's the solution for this? I saw somewhere some mentions about 'connection pooling', what's that and how I'm supposed to use that? It's quite hard to code when you don't have good manuals and have to put together information from very different sources and try to make it work... For example, this is from a manual I've used: 2.1.3.1 PQconnectdb Syntax: c = PQconnectdb(conninfo) Where conninfo is a string containing connection information. What the heck is 'conninfo', I mean, what's it's syntax? Yeah, I was finally able to figure it out, but it took hours googling, trying and stumbling. Okay, okay, back to business. Hope that someone will be able to help me. :) My system is Python2.4+Postgresql 7.4.7 running on Ubuntu 5.04. if that matters... Yours sincerely, -- Olli Rajala <>< Tampere, Finland http://www.students.tut.fi/~rajala37/ "In theory, Theory and Practice should be the same. But in practice, they aren't." - Murphy's Proverbs From mattnuzum at gmail.com Fri Jun 3 19:42:16 2005 From: mattnuzum at gmail.com (Matthew Nuzum) Date: Fri, 3 Jun 2005 12:42:16 -0500 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: References: Message-ID: On 6/3/05, Olli Rajala wrote: > from pyPgSQL import PgSQL > def connectDB(): > try: > db = PgSQL.connect(host='localhost', database='pictures', > user='user', password='passwd') > return db.cursor() > except: > print "Error" > > cursor = connectDB() > cursor.execute("SELECT * FROM categories") > print cursor.fetchall() > > The result is: > > Traceback (most recent call last): > File "test.py", line 23, in ? > cursor.execute("SELECT * FROM categories") > File "/usr/lib/python2.4/site-packages/pyPgSQL/PgSQL.py", line 2992, > in execute > raise InterfaceError, "execute failed - the cursor is closed." 
> libpq.InterfaceError: execute failed - the cursor is closed. > > So, what's the solution for this? I saw somewhere some mentions about > 'connection pooling', what's that and how I'm supposed to use that? > Connection pooling allows several programs to share connections to a database. Your first step should be to get things working and then, if you start killing your db server look into pooling. Here's how I connect to a postgres db and use it. It may not be the most modern way, but it works fine: import pgdb try: db = pgdb.connect(dsn="hostname:dbname", user="username", password="pwd") except: print >> sys.stderr, "Problem making database connection... Try again" cursor = db.cursor() sql = "select * from table" cursor.execute(sql) while (1): row = cursor.fetchone() if row == None: break print row cursor.close() db.close() -- Matthew Nuzum www.bearfruit.org From chris at cogdon.org Fri Jun 3 19:50:16 2005 From: chris at cogdon.org (Chris Cogdon) Date: Fri, 3 Jun 2005 10:50:16 -0700 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: References: Message-ID: On Jun 3, 2005, at 10:29, Olli Rajala wrote: > I sent this to python-tutor mailing list, but they suggested me to ask > here. So here I am with my quite basic/newbie question. Hope that it > doesn't matter too much. :) > > I've made a little cgi program with Python and Mysql, but would like > to change MySQL to Postgresql. There just seem not to be quite many > tutorials about this combination. I've been able to gather some info, > but much more would be necessary. > > I have quite big problem now, after I learned how to connect to the > Postgresql database. With MySQL I can do like this: > > import MySQLdb > def connectDB(): > try: > db = MySQLdb.connect(host='localhost', user='user', > db='pictures', passwd='passwd') > cursor = db.cursor() > return cursor > except: > print 'Error' > > cursor = connectDB() > cursor.execute('SELECT * FROM categories') > print cursor.fetchall() > > And everything works as I thought. But with Postgre, it seems that the > connection don't stay alive. I mean, with the same kind of code: > > from pyPgSQL import PgSQL > def connectDB(): > try: > db = PgSQL.connect(host='localhost', database='pictures', > user='user', password='passwd') > return db.cursor() > except: > print "Error" The 'problem' here is that the database object goes out of scope when the function exits. When 'out of scope' objects get cleaned up is fairly implementation dependant, which is likely why it was working with MySQL. You really need to do something like this, instead: from pyPgSQL import PgSQL def connectDB(): return PgSQL.connect( ...blahblahblah... ) # Do the following statement ONCE in your program db = connectDB() # Do the following block every time you need to do a database operation try: cur = db.cursor () ... all your stuff with the cursor. db.commit () except: db.rollback() This method will also work with MySQL too. -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From chris at cogdon.org Fri Jun 3 19:56:47 2005 From: chris at cogdon.org (Chris Cogdon) Date: Fri, 3 Jun 2005 10:56:47 -0700 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: References: Message-ID: <349a98c211c05f889f9d699bedd705be@cogdon.org> On Jun 3, 2005, at 10:50, Chris Cogdon wrote: > The 'problem' here is that the database object goes out of scope when > the function exits. 
When 'out of scope' objects get cleaned up is > fairly implementation dependant, which is likely why it was working > with MySQL. Oops... sorry to respond to myself here :) In the MySQL case, the cursor is probably keeping the database connection object alive, and THAT is why it works in MySQL. It may well be that the pyPgSQL cursor object is NOT connected to the database object through python, so when the database object goes out of scope, the reference count goes to zero, and it's cleaned up. On second thoughts, this DOES seem a little odd. However, You really shouldn't be throwing the database object away (or trying to) since you need it in order to do your db.commit() and db.rollback() instructions. Yes, enabling autocommit will alleviate the need for this, but you'll find that as your database application grows, you'll want to start doing more than one instruction atomically. And... its easier to get into the habit of wrapping your transactions in a try/except block now. -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From Michael at Hipp.com Fri Jun 3 20:03:33 2005 From: Michael at Hipp.com (Michael Hipp) Date: Fri, 03 Jun 2005 13:03:33 -0500 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: References: Message-ID: <42A09B75.404@Hipp.com> Olli Rajala wrote: > Okay, okay, back to business. Hope that someone will be able to help me. :) > > My system is Python2.4+Postgresql 7.4.7 running on Ubuntu 5.04. if > that matters... I use pyPgSQL every day and it generally works as expected. Note that most of my work is Windows client with a Ubuntu server on the back end. You might want to ask on the pyPgSQL users mailing list: http://lists.sourceforge.net/lists/listinfo/pypgsql-users Offhand I don't see anything wrong with your code and I've never experienced the particular problem you're seeing. You might try in-lining it instead of calling a function just for experimentation purposes. pyPgSQL has been somewhat picky about versions in the past so you might want to verify everything is at compatible versions. Michael From olli.rajala at gmail.com Sat Jun 4 18:39:50 2005 From: olli.rajala at gmail.com (Olli Rajala) Date: Sat, 4 Jun 2005 19:39:50 +0300 Subject: [DB-SIG] Database connections don't stay alive In-Reply-To: <492fe6061a429bb2b588927b503c5c6f@furry.org.au> References: <492fe6061a429bb2b588927b503c5c6f@furry.org.au> Message-ID: Chris Cogdon wrote: > The 'problem' here is that the database object goes out of scope when > the function exits. When 'out of scope' objects get cleaned up is > fairly implementation dependant, which is likely why it was working > with MySQL. Oh, thanks. Now it works as it should. Yeah, I thought that the solution isn't probably very hard, but when you don't get it, you just don't get it. :) When doing this transfer (MySQL->PostgreSQL) in my code, I invent some ways to reduce the number of code lines and was able to cut the size of my sql-module about 40-50%. So, some thing lead to another, and so on. There is stille quite much cleaning and refactoring to do, but luckily it just is for my own (and my wife's too) personal use, so... ;)) Maybe some day I know enough to make good code. These days I know enough to be dangerous. ;) Thanks guys! -- Olli Rajala <>< Tampere, Finland http://www.students.tut.fi/~rajala37/ "In theory, Theory and Practice should be the same. But in practice, they aren't." 
- Murphy's Proverbs From dcrespo at grupozoom.com Tue Jun 14 18:01:28 2005 From: dcrespo at grupozoom.com (Daniel Crespo) Date: Tue, 14 Jun 2005 12:01:28 -0400 Subject: [DB-SIG] Include adodb support in py2exe Message-ID: Hi all... Anyone knows how to include adodb and psycopg in py2exe? From anthony at interlink.com.au Wed Jun 22 15:08:00 2005 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 22 Jun 2005 23:08:00 +1000 Subject: [DB-SIG] cursor.fetchiter()? Message-ID: <200506222308.02580.anthony@interlink.com.au> Would it make sense for DB-SIG compliant modules to support a cursor.fetchiter() type method, to use the iterator protocol (instead of fetchall()'s current load-everything-into-memory first approach). Clever extensions could be written to grab N at a time, where N is less than 10,000,000, but more than 1 - this then means you get faster performance than fetchone(), and a less sucky API. res = cursor.fetchone() while res: dostuffwith(res) res = cursor.fetchone() vs for res in cursor.fetchiter(): dostuffwith(res) And yes, you can get something that's most of the way there now with something like the code in the cookbook entry http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/137270 - just wondering if it's something worth adding to the API. Anthony -- Anthony Baxter It's never too late to have a happy childhood. From mal at egenix.com Wed Jun 22 15:31:38 2005 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 22 Jun 2005 15:31:38 +0200 Subject: [DB-SIG] cursor.fetchiter()? In-Reply-To: <200506222308.02580.anthony@interlink.com.au> References: <200506222308.02580.anthony@interlink.com.au> Message-ID: <42B9683A.8040605@egenix.com> Anthony Baxter wrote: > Would it make sense for DB-SIG compliant modules to support a > cursor.fetchiter() type method, to use the iterator protocol > (instead of fetchall()'s current load-everything-into-memory > first approach). Clever extensions could be written to grab > N at a time, where N is less than 10,000,000, but more than 1 - > this then means you get faster performance than fetchone(), > and a less sucky API. > > res = cursor.fetchone() > while res: > dostuffwith(res) > res = cursor.fetchone() > > vs > > for res in cursor.fetchiter(): > dostuffwith(res) > > And yes, you can get something that's most of the way there > now with something like the code in the cookbook entry > http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/137270 - > just wondering if it's something worth adding to the API. >From the DB API (in the standard extensions section): Cursor Method .next() Return the next row from the currently executing SQL statement using the same semantics as .fetchone(). A StopIteration exception is raised when the result set is exhausted for Python versions 2.2 and later. Previous versions don't have the StopIteration exception and so the method should raise an IndexError instead. Warning Message: "DB-API extension cursor.next() used" Cursor Method .__iter__() Return self to make cursors compatible to the iteration protocol. Warning Message: "DB-API extension cursor.__iter__() used" The only difference compared to your proposal is that you'd write: for res in cursor: dostuff(res) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jun 22 2005) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From anthony at interlink.com.au Wed Jun 22 15:41:10 2005 From: anthony at interlink.com.au (Anthony Baxter) Date: Wed, 22 Jun 2005 23:41:10 +1000 Subject: [DB-SIG] cursor.fetchiter()? In-Reply-To: <42B9683A.8040605@egenix.com> References: <200506222308.02580.anthony@interlink.com.au> <42B9683A.8040605@egenix.com> Message-ID: <200506222341.12998.anthony@interlink.com.au> On Wednesday 22 June 2005 23:31, you wrote: > >From the DB API (in the standard extensions section): > > Cursor Method .next() > Cursor Method .__iter__() Ooo. Time-machiney goodness. So a) is there a nice reference saying which modules implement which extensions? Should I go ahead and create a page in the wiki for this and put a link on the database topic guide page? It would seem like a good idea. b) is there any sort of sense of which of these extensions should become part of a DB API 3.0? Anthony -- Anthony Baxter It's never too late to have a happy childhood. From mal at egenix.com Wed Jun 22 16:36:21 2005 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 22 Jun 2005 16:36:21 +0200 Subject: [DB-SIG] cursor.fetchiter()? In-Reply-To: <200506222341.12998.anthony@interlink.com.au> References: <200506222308.02580.anthony@interlink.com.au> <42B9683A.8040605@egenix.com> <200506222341.12998.anthony@interlink.com.au> Message-ID: <42B97765.6010409@egenix.com> Anthony Baxter wrote: > On Wednesday 22 June 2005 23:31, you wrote: > >>>From the DB API (in the standard extensions section): >> >> Cursor Method .next() >> Cursor Method .__iter__() > > > Ooo. Time-machiney goodness. > > So > a) is there a nice reference saying which modules > implement which extensions? Should I go ahead and > create a page in the wiki for this and put a link > on the database topic guide page? It would seem like > a good idea. That would be a good idea. > b) is there any sort of sense of which of these extensions > should become part of a DB API 3.0? I think we should add all extensions that can be implemented easily by the majority of database modules. I'm not sure which should be made mandatory, though. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jun 22 2005) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try mxODBC.Zope.DA for Windows,Linux,Solaris,FreeBSD for free ! :::: From kolbe at kolbekegel.com Fri Jun 24 06:39:15 2005 From: kolbe at kolbekegel.com (Kolbe Kegel) Date: Thu, 23 Jun 2005 21:39:15 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 Message-ID: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> Hello, I am very new to Python, but I have encountered something that seems to be an undocumented, possibly erroneous behavior when fetching rows from a cursor object. I am using MySQL 5.0.7 for these tests. Here is a way to test: 1) Create a table similar to this... CREATE TABLE `gctest` ( `comment` varchar(255) default NULL, `group` int(10) unsigned NOT NULL ) 2) Insert a row similar to this... INSERT INTO `gctest` VALUES ('test',1); 3) Execute the following SELECT in a Python program... 
SELECT GROUP_CONCAT(comment) AS comment FROM gctest GROUP BY `group` ORDER BY comment; for example.. import MySQLdb db = MySQLdb.connect(host="localhost", user="", passwd="", db="test") cursor = db.cursor() cursor.execute("SELECT GROUP_CONCAT(comment) AS comment FROM gctest GROUP BY `group` ORDER BY comment;") for record in cursor.fetchall(): print record[0] 4) Observe the results... array('c', 'test') This indicates that an array of some sort is being returned. I don't know the significance of its contents in the Python world, but it means that while all other elements in the resultset are returned as strings, this one bizarre exception exists. I was able to narrow my test case down to the behavior occurring when GROUP BY and ORDER BY appear in the same statement AND they reference different columns. That is, when the results are GROUPped BY one column and ORDERed BY another, this behavior occurs. It seems like it is probably an issue with the MySQLdb interface. But I am passing along this information and this test case so that I can hear the thoughts of those more experience in these matters. I hope that someone will be able to shed some light on this matter. Again, I am using MySQL 5.0.7. I am using Python 2.4.2. I am using MySQLdb 1.1.6-1ubuntu2 (installed through apt on Ubuntu Linux 5.04). Thank you, Kolbe Kegel From chris at cogdon.org Fri Jun 24 08:19:17 2005 From: chris at cogdon.org (Chris Cogdon) Date: Thu, 23 Jun 2005 23:19:17 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> Message-ID: On Jun 23, 2005, at 9:39 PM, Kolbe Kegel wrote: > Again, I am using MySQL 5.0.7. I am using Python 2.4.2. I am using > MySQLdb 1.1.6-1ubuntu2 (installed through apt on Ubuntu Linux 5.04). I just tried something similar. I don't have the same versions as you, so I couldn't even use group_contact. I just created a column I could sum() instead. The table looks like this: +---------+----+------+ | comment | gr | x | +---------+----+------+ | hello | 1 | 1 | | there | 1 | 2 | | thingy | 1 | 3 | | foo | 2 | 4 | | boo | 2 | 5 | | baz | 2 | 6 | +---------+----+------+ The query was "select sum(x) as blah from gctest group by gr order by blah" And all rows returned through MySQLdb were: ((6.0,), (15.0,)) Which is exactly what I was expecting (except the numbers being floats... that's annoying) So... no repro here. But it could be a version thing: Mysql: 3.23.40 Python: 2.2.2 MySQLdb: 0.9.2.final.1 (No picking on my versions... its the only system I had that had MySQL and MySQLdb installed :) -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From kolbe at kolbekegel.com Fri Jun 24 09:08:54 2005 From: kolbe at kolbekegel.com (Kolbe Kegel) Date: Fri, 24 Jun 2005 00:08:54 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> Message-ID: <1608854c933ecc08470b322af95ef6a9@kolbekegel.com> Chris, > I just tried something similar. I don't have the same versions as you, > so I couldn't even use group_contact. I just created a column I could > sum() instead. 
I don't reproduce it either with this query on my original dataset (with the addition of a column `id` containing the value 6 (arbitrary): mysql> select * from gctest; +---------+-------+------+ | comment | group | id | +---------+-------+------+ | test | 1 | 6 | +---------+-------+------+ Here is the query i used: select sum(id) as x from gctest group by `group` order by comment; It simply returned "6", as one would hope it would do. Let me know if that doesn't capture the spirit of what you were trying to do. Perhaps interesting is the fact that this doesn't cause any "problem" either (returns 'testtest'): SELECT CONCAT(comment,comment) AS comment FROM gctest GROUP BY `group` ORDER BY `comment`; While not using the aggregate function in the field list, it does still use the other elements. This must mean that the problem is related to the aggregate function, and more specifically to GROUP_CONCAT since the problem is not triggered by SUM(). Thanks for looking into it :) Kolbe From andy47 at halfcooked.com Fri Jun 24 13:19:14 2005 From: andy47 at halfcooked.com (Andy Todd) Date: Fri, 24 Jun 2005 21:19:14 +1000 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> Message-ID: <42BBEC32.3020205@halfcooked.com> Kolbe Kegel wrote: > Hello, > > I am very new to Python, but I have encountered something that seems to > be an undocumented, possibly erroneous behavior when fetching rows from > a cursor object. I am using MySQL 5.0.7 for these tests. > > Here is a way to test: > > 1) Create a table similar to this... > > CREATE TABLE `gctest` ( > `comment` varchar(255) default NULL, > `group` int(10) unsigned NOT NULL > ) > > 2) Insert a row similar to this... > > INSERT INTO `gctest` VALUES ('test',1); > > 3) Execute the following SELECT in a Python program... > > SELECT GROUP_CONCAT(comment) AS comment FROM gctest GROUP BY `group` > ORDER BY comment; > > for example.. > > import MySQLdb > db = MySQLdb.connect(host="localhost", user="", passwd="", db="test") > cursor = db.cursor() > cursor.execute("SELECT GROUP_CONCAT(comment) AS comment FROM gctest > GROUP BY `group` ORDER BY comment;") > for record in cursor.fetchall(): > print record[0] > > 4) Observe the results... > > array('c', 'test') > > This indicates that an array of some sort is being returned. I don't > know the significance of its contents in the Python world, but it means > that while all other elements in the resultset are returned as strings, > this one bizarre exception exists. > > I was able to narrow my test case down to the behavior occurring when > GROUP BY and ORDER BY appear in the same statement AND they reference > different columns. That is, when the results are GROUPped BY one column > and ORDERed BY another, this behavior occurs. > > It seems like it is probably an issue with the MySQLdb interface. But I > am passing along this information and this test case so that I can hear > the thoughts of those more experience in these matters. I hope that > someone will be able to shed some light on this matter. > > Again, I am using MySQL 5.0.7. I am using Python 2.4.2. I am using > MySQLdb 1.1.6-1ubuntu2 (installed through apt on Ubuntu Linux 5.04). > > Thank you, > > Kolbe Kegel As group_concat is a MySQL specific function I suspect that the problem you are seeing is specific to the MySQLdb module. 
I'm running MySQL 4.0.17 and your query doesn't work as GROUP_CONCAT is not available in this version. A couple of questions do spring to mind though; - Do you really need to group by one thing and order by another? - Do you need to do the group_concat transposition in SQL or would it be better achieved in Python after you've select all of the group, comment pairs from the database? Regards, Andy -- -------------------------------------------------------------------------------- From the desk of Andrew J Todd esq - http://www.halfcooked.com/ From chris at cogdon.org Fri Jun 24 17:14:00 2005 From: chris at cogdon.org (Chris Cogdon) Date: Fri, 24 Jun 2005 08:14:00 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: <42BBEC32.3020205@halfcooked.com> References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> <42BBEC32.3020205@halfcooked.com> Message-ID: On Jun 24, 2005, at 4:19 AM, Andy Todd wrote: > - Do you need to do the group_concat transposition in SQL or would > it be > better achieved in Python after you've select all of the group, > comment > pairs from the database? If you're grouping a particular column, you MUST use some kind of aggregate function on that column (eg, sum, max, or the mysql specific group_concat). The alternative is to retrieve N times more rows and do the grouping and concatenation in python, which is inefficient. What Kolbe trying to do should work just fine. -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From chris at cogdon.org Fri Jun 24 17:25:19 2005 From: chris at cogdon.org (Chris Cogdon) Date: Fri, 24 Jun 2005 08:25:19 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> <42BBEC32.3020205@halfcooked.com> Message-ID: <701F620A-6481-40EC-AC98-253CEC8E30CB@cogdon.org> On Jun 24, 2005, at 8:14 AM, Chris Cogdon wrote: > > On Jun 24, 2005, at 4:19 AM, Andy Todd wrote: > > >> - Do you need to do the group_concat transposition in SQL or would >> it be >> better achieved in Python after you've select all of the group, >> comment >> pairs from the database? >> > > If you're grouping a particular column, you MUST use some kind of > aggregate function on that column (eg, sum, max, or the mysql > specific group_concat). Self correction: an aggregate function on OTHER columns :) -- ("`-/")_.-'"``-._ Chris Cogdon . . `; -._ )-;-,_`) (v_,)' _ )`-.\ ``-' _.- _..-_/ / ((.' ((,.-' ((,/ fL From kolbe at kolbekegel.com Fri Jun 24 17:28:11 2005 From: kolbe at kolbekegel.com (Kolbe Kegel) Date: Fri, 24 Jun 2005 08:28:11 -0700 Subject: [DB-SIG] Strange resultset with ORDER BY col1, GROUP BY col2 In-Reply-To: <42BBEC32.3020205@halfcooked.com> References: <23407cbb56fbe7d06ffc84c289a891dc@kolbekegel.com> <42BBEC32.3020205@halfcooked.com> Message-ID: <347d613b8b31d089995306936fd1fac4@kolbekegel.com> Hi Andy, > As group_concat is a MySQL specific function I suspect that the > problem you are seeing is specific to the MySQLdb module. This seems reasonable. I suppose that I ought to make an attempt to contact the maintainer directly. I'll look into doing so soon. > I'm running MySQL 4.0.17 and your query doesn't work as GROUP_CONCAT > is not available in this version. Upgrade! You're missing out on a lot of great features in 4.1 :) > A couple of questions do spring to mind though; > > - Do you really need to group by one thing and order by another? Absolutely. 
In my application, I want a list of actions grouped by the type of action and upon whom it was performed, and ordered by the most recently performed action in each of those groups.

> - Do you need to do the group_concat transposition in SQL or would it
> be better achieved in Python after you've select all of the group,
> comment pairs from the database?

Of course there are workarounds. Unfortunately, as Chris pointed out, the workarounds will result in unnecessary inefficiency. At any rate, this is almost certainly a bug in MySQLdb, so fixing that is preferable to developing workarounds. At this point, my workaround is to simply use .tostring() in my application, but that's a rather ugly and short-term solution.

Thanks for your comments.

Kolbe
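For anyone hitting the same array('c', ...) result from MySQLdb, here is a minimal sketch of the .tostring() workaround Kolbe describes above. The helper name and the post-fetch loop are illustrative only (they are not from the thread); the connection details and query are the ones from Kolbe's original report:

import MySQLdb
from array import array

def as_string(value):
    # As reported earlier in this thread, MySQLdb 1.1.6 can return the
    # GROUP_CONCAT() column as array('c', ...) when GROUP BY and ORDER BY
    # reference different columns. Coerce such values back to plain
    # strings; pass everything else through untouched.
    if isinstance(value, array):
        return value.tostring()
    return value

db = MySQLdb.connect(host="localhost", user="", passwd="", db="test")
cursor = db.cursor()
cursor.execute("SELECT GROUP_CONCAT(comment) AS comment FROM gctest "
               "GROUP BY `group` ORDER BY comment")
for record in cursor.fetchall():
    print [as_string(col) for col in record]

This only post-processes rows after they are fetched; it does not change MySQLdb's own type conversion.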