From bogus@does.not.exist.com Tue Dec 5 00:20:01 2000
From: bogus@does.not.exist.com ()
Date: Tue Dec 5 00:20:07 2000
Subject: No subject
Message-Id: <199911051502.KAA16554@eric.cnri.reston.va.us>
In-reply-to: Your message of "Fri, 05 Nov 1999 09:16:35 EST."
             <199911051416.JAA24937@python.org>
References: <199911051416.JAA24937@python.org>

> Don't know if this is considered a bug:
> Python.exe crashed when I tried to use a recursive function,
> like the infamous factorial.
> I've downloaded and tried this on both Python 1.5.1 and 1.5.2
> of the Windows 95/98/NT version.
>
> The function is as follows
>
> def fact(n):
>     if n==1: return 1
>     else: return n*fact(n)
>
> fact(1) returned 1 with no problems.
> But fact(2) or any argument greater than 1 crashed Python
> with a Windows application error window popping up.
>
> I've run this before on a Linux version of Python without any problems.

Do you realize that your code has a bug?  It recurses infinitely because
you are calling fact(n) instead of fact(n-1).

On Linux, you would have gotten a stack overflow error.  Unfortunately,
the stack overflow handling on Windows is broken (the recursion limit is
set too high, so the interpreter runs out of stack memory and crashes
before the recursion limit is ever reached).  Christian Tismer has posted
a patch for the binary (python15.dll) which solves this.  It has, of
course, also been fixed in our source code.

--Guido van Rossum (home page: http://www.python.org/~guido/)

______________________________________________________
Get Your Private, Free Email at http://www.hotmail.com

From lannert@uni-duesseldorf.de Tue Dec 5 00:20:01 2000
From: lannert@uni-duesseldorf.de (lannert@uni-duesseldorf.de)
Date: Tue, 4 Apr 100 12:49:19 +0200 (MEST)
Subject: gpk@bell-labs.com: [Python-bugs-list] netrc module has bad error handling (PR#265)
In-Reply-To: <20000403192328.A4548@thyrsus.com> from "Eric S. Raymond" at "Apr 3, 0 07:23:28 pm"
Message-ID: <20000404104919.22186.qmail@lannert.rz.uni-duesseldorf.de>

"Eric S. Raymond" wrote:
>
> > > BTW, the following lines:
> > >
> > >     31: lexer.whitepace = ' \t'
> > >     35: lexer.whitepace = ' \t\r\n'
> > >
> > > look a bit strange; shouldn't it be "whitespace" as defined in shlex?
> > > (But then, how did this ever work? :)
>
> This code is correct, though perhaps not as explicit as it should be.
>
> It is a workaround for the fact that macdefs actually have different
> lexical rules than the rest of the .netrc format.  If you notice that
> the assignment on line 35 is actually restoring the shlex default I
> think you'll grok what's happening.
>
> It's kind of ugly, but that's not my fault :-).  Go pound on whoever
> designed the .netrc format.

I'm afraid I didn't make my point clear enough -- isn't "whitepace" a
misspelling of "whitespace"?  If not, I'd suggest using a very different
name ...  The workaround to cope with a strange file format is certainly
legitimate!
;-)

Detlef

From lannert@uni-duesseldorf.de Tue Dec 5 00:20:01 2000
From: lannert@uni-duesseldorf.de (lannert@uni-duesseldorf.de)
Date: Wed, 5 Apr 100 13:45:09 +0200 (MEST)
Subject: gpk@bell-labs.com: [Python-bugs-list] netrc module has bad error handling (PR#265)
In-Reply-To: <20000404082300.C6404@thyrsus.com> from "Eric S. Raymond" at "Apr 4, 0 08:23:00 am"
Message-ID: <20000405114509.23669.qmail@lannert.rz.uni-duesseldorf.de>

--ELM954935109-23611-0_
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Sorry to keep bothering you ...

"Eric S. Raymond" wrote:
> Zounds.  You're right, this can't have worked.  I guess nobody has tried to
> parse macdefs yet!

Now that I have, I get the impression that netrc.py's idea of a macdef is
quite unlike the idea ftp has.  Obviously macdef's are *not* toplevel
entries (a macdef before any machine entry was simply not found when I
entered "$ macname") but belong to the preceding machine entry.  Different
macros with the same name but for different machines are handled by (my!)
ftp separately.

Therefore I recklessly hacked netrc.py to make it accept macdef's (the
whitespace fix, BTW, wasn't sufficient) and put them into self.macros
indexed by hostname _and_ macroname.  IMHO this is a change that doesn't
break anything that did work ;-) with the previous version.

Although it would seem more logical to put macro definitions into the
corresponding host entries, I chose not to because (a) this would be an
interface change (what did I hear about revolution? :) and (b) this whole
macdef feature is obviously rarely used (except by myself).

Detlef

--ELM954935109-23611-0_
Content-Type: text/plain; charset=ISO-8859-1
Content-Disposition: attachment; filename=netrc.py
Content-Description: /tmp/netrc.py
Content-Transfer-Encoding: 7bit

"""An object-oriented interface to .netrc files."""

# Module and documentation by Eric S. Raymond, 21 Dec 1998
# Recklessly hacked and macdefs fixed by Detlef Lannert, 05 Apr 2000

import os, shlex
import string                           # not from Python 1.6 onwards

class netrc:
    def __init__(self, file=None):
        if not file:
            file = os.path.join(os.environ['HOME'], ".netrc")
        fp = open(file)
        self.hosts = {}
        self.macros = {}                # indexed by hostnames
        lexer = shlex.shlex(fp)
        lexer.wordchars = lexer.wordchars + '.-'
        while 1:
            # Look for a machine or default top-level keyword
            nexttoken = lexer.get_token()
            if nexttoken in ('', None):
                break
            elif nexttoken == 'machine':
                entryname = lexer.get_token()
            elif nexttoken == 'default':
                entryname = 'default'
            elif nexttoken == 'macdef':
                # this is a toplevel macdef; what the heck is it good for??
                entryname = ''          # put it into self.macros['']
                lexer.push_token(nexttoken)
            else:
                raise SyntaxError, "bad toplevel token %s, file %s, line %d" \
                      % (nexttoken, file, lexer.lineno)
            # We're looking at start of an entry for a named machine or default.
            login = account = password = None
            macdefs = {}
            while 1:
                nexttoken = lexer.get_token()
                if nexttoken in ('machine', 'default', ''):
                    if (login and not password) or (password and not login):
                        # macdef-only entries are acceptable!
                        raise SyntaxError(
                            "incomplete %s entry terminated by %s"
                            % (`entryname`, nexttoken or "EOF"))
                    if login:
                        self.hosts[entryname] = (login, account, password)
                    if macdefs:
                        self.macros[entryname] = macdefs
                    lexer.push_token(nexttoken)
                    break
                elif nexttoken in ('login', 'user'):
                    login = lexer.get_token()
                elif nexttoken == 'account':
                    account = lexer.get_token()
                elif nexttoken == 'password':
                    password = lexer.get_token()
                elif nexttoken == 'macdef':
                    macroname = lexer.get_token()
                    macro = []
                    while 1:
                        # macro continues until empty line
                        line = lexer.instream.readline()
                        if not line or line == "\n":
                            break
                        macro.append(line)
                    macdefs[macroname] = macro
                else:
                    raise SyntaxError(
                        "bad follower token %s, file %s, line %d"
                        % (nexttoken, file, lexer.lineno))

    def authenticators(self, host):
        """Return a (user, account, password) tuple for given host."""
        if self.hosts.has_key(host):
            return self.hosts[host]
        elif self.hosts.has_key('default'):
            return self.hosts['default']
        else:
            return None

    def __repr__(self):
        """Dump the class data in the format of a .netrc file."""
        result = []
        # First process the mysterious top-level macdef's:
        host = ""                       # dummy entry
        for macroname in self.macros.get(host, {}).keys():
            result.append("\tmacdef %s\n" % macroname)
            result.extend(self.macros[host][macroname])
            result.append("\n")
        # Now for the machines (and the optional default entry):
        for host in self.hosts.keys():
            login, account, password = self.hosts[host]
            result.append("machine %s \n\tlogin %s\n" % (host, login))
            if account:
                result.append("\taccount %s\n" % account)
            result.append("\tpassword %s\n" % password)
            for macroname in self.macros.get(host, {}).keys():
                result.append("\tmacdef %s\n" % macroname)
                result.extend(self.macros[host][macroname])
                result.append("\n")
        # That's it ...
        return string.join(result, "")
        #return "".join(result)         # Python 1.6?


def test():
    import sys
    if len(sys.argv) > 1:
        file = sys.argv[1]
    else:
        file = ""
    n = netrc(file)
    print "hosts:", `n.hosts`
    print "macros:", `n.macros`
    print n

if __name__ == '__main__':
    #test()
    print netrc()

--ELM954935109-23611-0_--

From lannert@uni-duesseldorf.de Tue Dec 5 00:20:01 2000
From: lannert@uni-duesseldorf.de (lannert@uni-duesseldorf.de)
Date: Wed, 5 Apr 100 18:17:49 +0200 (MEST)
Subject: gpk@bell-labs.com: [Python-bugs-list] netrc module has bad error handling (PR#265)
In-Reply-To: <200004051435.KAA16262@eric.cnri.reston.va.us> from Guido van Rossum at "Apr 5, 0 10:35:35 am"
Message-ID: <20000405161749.30486.qmail@lannert.rz.uni-duesseldorf.de>

"Guido van Rossum" wrote:
> Detlef, if you want this added to the Python distribution, please
> consider the patch guidelines at www.python.org/patches/.

OK, I'll send a [context] diff to patches@python.org, together with the
disclaimer.

> Also, in this case, I'd prefer to receive Eric's consent, as I don't
> understand the netrc.py code at all myself... :-)

Sure, I also think it should depend on Eric's approval.

Detlef

From noreply@sourceforge.net Fri Dec 1 02:10:09 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Thu, 30 Nov 2000 18:10:09 -0800
Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory
Message-ID: <200012010210.SAA19166@sf-web3.vaspecialprojects.com>

Bug #124003, was updated on 2000-Nov-30 18:10
Here is a current snapshot of the bug.
Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Seehof Assigned to : Nobody Summary: sys.path[0] is not the script directory Details: Reproduced on Windows (not tried on other systems) The docs say: """The first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of $PYTHONPATH.""" Instead, the script directory is appended to the end. I suspect alot of existing code may be affected since this is the most convenient way to find data files that are in the same directory as a script. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 02:18:53 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 30 Nov 2000 18:18:53 -0800 Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory Message-ID: <200012010218.SAA19363@sf-web3.vaspecialprojects.com> Bug #124003, was updated on 2000-Nov-30 18:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 1 Submitted by: Seehof Assigned to : Nobody Summary: sys.path[0] is not the script directory Details: Reproduced on Windows (not tried on other systems) The docs say: """The first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of $PYTHONPATH.""" Instead, the script directory is appended to the end. I suspect alot of existing code may be affected since this is the most convenient way to find data files that are in the same directory as a script. Follow-Ups: Date: 2000-Nov-30 18:18 By: gvanrossum Comment: I don't believe this. I just tried this (on WIndows 98 with Python 2.0) and it correctly places the script directory at the front of sys.path. Can you create a simple script that prints sys.path, and run this to test your hypothesis? Show us the answer. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 02:18:53 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 30 Nov 2000 18:18:53 -0800 Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory Message-ID: <200012010218.SAA19360@sf-web3.vaspecialprojects.com> Bug #124003, was updated on 2000-Nov-30 18:10 Here is a current snapshot of the bug. 
Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Seehof Assigned to : Nobody Summary: sys.path[0] is not the script directory Details: Reproduced on Windows (not tried on other systems) The docs say: """The first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of $PYTHONPATH.""" Instead, the script directory is appended to the end. I suspect alot of existing code may be affected since this is the most convenient way to find data files that are in the same directory as a script. Follow-Ups: Date: 2000-Nov-30 18:18 By: gvanrossum Comment: I don't believe this. I just tried this (on WIndows 98 with Python 2.0) and it correctly places the script directory at the front of sys.path. Can you create a simple script that prints sys.path, and run this to test your hypothesis? Show us the answer. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 12:59:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 04:59:30 -0800 Subject: [Python-bugs-list] [Bug #124038] MCGI : Uploaded file is whole stored in memory Message-ID: <200012011259.EAA19779@sf-web1.i.sourceforge.net> Bug #124038, was updated on 2000-Dec-01 04:59 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Irreproducible Priority: 5 Submitted by: vlk Assigned to : Nobody Summary: MCGI : Uploaded file is whole stored in memory Details: In module cgi, when file is uploaded, file is stored into temporary file and into list cgi.FieldStorage.lines. Why ? This is a problem, when large file must be uploaded. I think, that lines which contains string "self.lines" may by deleted. ( 4 occureces ). This problem is in version 1.5.2 and 2.0 too. vlk For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124038&group_id=5470 From noreply@sourceforge.net Fri Dec 1 13:42:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 05:42:30 -0800 Subject: [Python-bugs-list] [Bug #124038] CGI : Uploaded file is whole stored in memory Message-ID: <200012011342.FAA29092@sf-web3.vaspecialprojects.com> Bug #124038, was updated on 2000-Dec-01 04:59 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: None Bug Group: Irreproducible Priority: 5 Submitted by: vlk Assigned to : Nobody Summary: CGI : Uploaded file is whole stored in memory Details: In module cgi, when file is uploaded, file is stored into temporary file and into list cgi.FieldStorage.lines. Why ? This is a problem, when large file must be uploaded. I think, that lines which contains string "self.lines" may by deleted. ( 4 occureces ). This problem is in version 1.5.2 and 2.0 too. vlk Follow-Ups: Date: 2000-Dec-01 05:42 By: gvanrossum Comment: This is already fixed in the current CVS version. 
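[Editor's note: the sketch below is not the CVS fix referred to above; it is
a minimal Python 2-era illustration of how a CGI script can stream a large
upload through the FieldStorage item's .file object instead of keeping the
whole upload in memory.  The form field name "upload" and the output path
are made up for the example.]

    import cgi

    form = cgi.FieldStorage()
    item = form["upload"]                   # FieldStorage for the uploaded file
    out = open("/tmp/upload.dat", "wb")
    while 1:
        chunk = item.file.read(64 * 1024)   # copy 64 KB at a time
        if not chunk:
            break
        out.write(chunk)
    out.close()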
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124038&group_id=5470 From noreply@sourceforge.net Fri Dec 1 13:42:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 05:42:30 -0800 Subject: [Python-bugs-list] [Bug #124038] MCGI : Uploaded file is whole stored in memory Message-ID: <200012011342.FAA29089@sf-web3.vaspecialprojects.com> Bug #124038, was updated on 2000-Dec-01 04:59 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Irreproducible Priority: 5 Submitted by: vlk Assigned to : Nobody Summary: MCGI : Uploaded file is whole stored in memory Details: In module cgi, when file is uploaded, file is stored into temporary file and into list cgi.FieldStorage.lines. Why ? This is a problem, when large file must be uploaded. I think, that lines which contains string "self.lines" may by deleted. ( 4 occureces ). This problem is in version 1.5.2 and 2.0 too. vlk Follow-Ups: Date: 2000-Dec-01 05:42 By: gvanrossum Comment: This is already fixed in the current CVS version. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124038&group_id=5470 From noreply@sourceforge.net Fri Dec 1 15:17:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 07:17:57 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012011517.HAA17863@sf-web2.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : Nobody Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Fri Dec 1 17:03:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 09:03:11 -0800 Subject: [Python-bugs-list] [Bug #124060] Python 2.0 -- Problems with Unicode Translate Message-ID: <200012011703.JAA00796@sf-web3.vaspecialprojects.com> Bug #124060, was updated on 2000-Dec-01 09:03 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: alburt Assigned to : Nobody Summary: Python 2.0 -- Problems with Unicode Translate Details: I don't know what this new-fangled Unicode stuff is all about. I do know that old code that has: string.translate(s, table) now bombs when "s" is Unicode. The definition of "string.translate" passes on the call with a "deletechars" argument that is not expected by the Unicode version. Using "str(s)" keeps Python 2.0 happy. -- Alastair P.S. 
Sorry if the bug is already reported but I do not know how to search past bug reports. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124060&group_id=5470 From mal@lemburg.com Fri Dec 1 18:10:48 2000 From: mal@lemburg.com (M.-A. Lemburg) Date: Fri, 01 Dec 2000 19:10:48 +0100 Subject: [Python-bugs-list] [Bug #124060] Python 2.0 -- Problems with Unicode Translate References: <200012011703.JAA00796@sf-web3.vaspecialprojects.com> Message-ID: <3A27E9A8.63F807CE@lemburg.com> [Replying via mail -- SF seems to be broken] Unicode objects have a different signature for .translate(). The reason for this is simple: Unicode has 64k character points and it wouldn't be wise to build such huge translation maps. Instead, the Unicode .translate() method takes a table which also provides the deletechars functionality. I guess the string.py function should be fixed to special case Unicode objects and check whether deletechars is used or not. Without the deletechars parameter, Unicode .translate() should work just like the corresponding string method. -- Marc-Andre Lemburg ______________________________________________________________________ Company: http://www.egenix.com/ Consulting: http://www.lemburg.com/ Python Pages: http://www.lemburg.com/python/ From noreply@sourceforge.net Fri Dec 1 19:35:00 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 11:35:00 -0800 Subject: [Python-bugs-list] [Bug #123695] xml.sax.handler.ContentHandler.characters() not SAX2 Message-ID: <200012011935.LAA11331@sf-web1.i.sourceforge.net> Bug #123695, was updated on 2000-Nov-28 06:26 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Nobody Assigned to : fdrake Summary: xml.sax.handler.ContentHandler.characters() not SAX2 Details: It takes the wrong arguments and differs from the SAX specification. I also submitted the following bug to the pyxml-sig bugtracker: http://sourceforge.net/bugs/?func=detailbug&bug_id=123693&group_id=6473 SAX2 API documentation says: http://www.megginson.com/SAX/Java/javadoc/org/xml/sax/ContentHandler.html#characters(char[], public void characters(char[] ch,int start, int length) PyXML-0.6.2/doc/xml-howto.txt says: def characters(self, ch) in line 770 This corresponds to how the pyexpat implementation distributed with python 2.0 works, but is not following the SAX API as pyexpat in pyXML 0.6.2 does. It looks like a bug in python 2.0 and at least a documentation bug for pyxml 0.6.2. See the following example program, which should work. import xml.sax import xml.sax.handler class processTask(xml.sax.handler.ContentHandler): def startElement(self, name, attrs): print "startElement: %s=" % (name), print repr(attrs) # SAX compliant def characters(self, ch, start, length): print "characters=%s" %(ch[start:start+length]) # works with python 2.0, but is not SAX compliant # def characters(self, ch): # print "characters=%s" %(ch) # def endElement(self, name): print "endElement: %s=" % (name) dh = processTask() string=""" Text goes here More text """ xml.sax.parseString(string,dh) but instead is bombs: startElement: parent= startElement: child1= Traceback (most recent call last): File "xmltest2.py", line 27, in ? 
    xml.sax.parseString(string,dh)
  File "/usr/src/packages/python/install//lib/python2.0/xml/sax/__init__.py", line 49, in parseString
    parser.parse(inpsrc)
  File "/usr/src/packages/python/install//lib/python2.0/xml/sax/expatreader.py", line 42, in parse
    xmlreader.IncrementalParser.parse(self, source)
  File "/usr/src/packages/python/install//lib/python2.0/xml/sax/xmlreader.py", line 120, in parse
    self.feed(buffer)
  File "/usr/src/packages/python/install//lib/python2.0/xml/sax/expatreader.py", line 81, in feed
    self._parser.Parse(data, isFinal)
TypeError: not enough arguments; expected 4, got 2

Regards, Bernhard Reiter

Follow-Ups:

Date: 2000-Dec-01 11:35
By: loewis

Comment:
There is no error in either the documentation or the implementation; see
the discussion of the PyXML report for details.  The deviation from the
Java SAX ContentHandler interface is intentional, see
http://www.python.org/pipermail/xml-sig/2000-November/005510.html for
details.

Regards, Martin

-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=123695&group_id=5470
From noreply@sourceforge.net Fri Dec 1 22:18:47 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Fri, 1 Dec 2000 14:18:47 -0800
Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory
Message-ID: <200012012218.OAA00357@sf-web3.vaspecialprojects.com>

Bug #124003, was updated on 2000-Nov-30 18:10
Here is a current snapshot of the bug.

Project: Python
Category: Modules
Status: Open
Resolution: None
Bug Group: None
Priority: 1
Submitted by: Seehof
Assigned to : Nobody
Summary: sys.path[0] is not the script directory

Details: Reproduced on Windows (not tried on other systems)

The docs say:
"""The first item of this list, path[0], is the directory containing the
script that was used to invoke the Python interpreter.  If the script
directory is not available (e.g. if the interpreter is invoked
interactively or if the script is read from standard input), path[0] is
the empty string, which directs Python to search modules in the current
directory first.  Notice that the script directory is inserted before the
entries inserted as a result of $PYTHONPATH."""

Instead, the script directory is appended to the end.  I suspect a lot of
existing code may be affected, since this is the most convenient way to
find data files that are in the same directory as a script.

Follow-Ups:

Date: 2000-Nov-30 18:18
By: gvanrossum

Comment:
I don't believe this.  I just tried this (on Windows 98 with Python 2.0)
and it correctly places the script directory at the front of sys.path.
Can you create a simple script that prints sys.path, and run it to test
your hypothesis?  Show us the answer.

-------------------------------------------------------

Date: 2000-Dec-01 14:18
By: Seehof

Comment:
Oh, I get it.  PythonWin is doing something.  The first entry ('') is the
(unqualified) script directory.  The last entry (the fully qualified
script directory) is added by PythonWin.  So there is no bug.  Sorry
'bout the false alarm.

On the other hand, it would seem more correct for path[0] to be '.'
instead of '', or better yet it should be fully qualified so that a
module could find out who called it after changing directories.

PythonWin 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32.
Portions Copyright 1994-2000 Mark Hammond (MarkH@ActiveState.com) - see
'Help/About PythonWin' for further copyright information.
>>> ['', 'c:\\python20\\installer', 'c:\\python20\\pythonwin','c:\\python20\\win32', 'c:\\python20\\win32\\lib', 'c:\\python20','c:\\python20\\dlls', 'c:\\python20\\lib', 'c:\\python20\\lib\\plat-win', 'c:\\python20\\lib\\lib-tk', 'D:\\qdev'] ... where D:\qdev\test.py is the name of my script. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 23:24:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 15:24:18 -0800 Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory Message-ID: <200012012324.PAA03079@sf-web1.i.sourceforge.net> Bug #124003, was updated on 2000-Nov-30 18:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 1 Submitted by: Seehof Assigned to : Nobody Summary: sys.path[0] is not the script directory Details: Reproduced on Windows (not tried on other systems) The docs say: """The first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of $PYTHONPATH.""" Instead, the script directory is appended to the end. I suspect alot of existing code may be affected since this is the most convenient way to find data files that are in the same directory as a script. Follow-Ups: Date: 2000-Nov-30 18:18 By: gvanrossum Comment: I don't believe this. I just tried this (on WIndows 98 with Python 2.0) and it correctly places the script directory at the front of sys.path. Can you create a simple script that prints sys.path, and run this to test your hypothesis? Show us the answer. ------------------------------------------------------- Date: 2000-Dec-01 14:18 By: Seehof Comment: Oh, I get it. PythonWin is doing something. The first entry ('') is the (unqualified) script directory. The last entry (the fully qualified script directory) is added by PythonWin. So there is no bug. Sorry 'bout the false alarm. On the other hand, it would seem more correct for path[0] to be '.' instead of '', or better yet it should be fully qualified so that a module could find who called it after changing directories. PythonWin 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32. Portions Copyright 1994-2000 Mark Hammond (MarkH@ActiveState.com) - see 'Help/About PythonWin' for further copyright information. >>> ['', 'c:\\python20\\installer', 'c:\\python20\\pythonwin','c:\\python20\\win32', 'c:\\python20\\win32\\lib', 'c:\\python20','c:\\python20\\dlls', 'c:\\python20\\lib', 'c:\\python20\\lib\\plat-win', 'c:\\python20\\lib\\lib-tk', 'D:\\qdev'] ... where D:\qdev\test.py is the name of my script. ------------------------------------------------------- Date: 2000-Dec-01 15:24 By: gvanrossum Comment: Pythonwin problem, not core Python. 
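[Editor's note: a small Python 2-era script along the lines Guido asks for
above -- it prints sys.path and derives an absolute script directory from
sys.argv[0], which is a more reliable way to locate data files shipped next
to a script than relying on sys.path[0].  The file name "data.txt" is made
up for the example.]

    import sys, os

    print sys.path              # path[0]: the script directory ('' under PythonWin)
    scriptdir = os.path.abspath(os.path.dirname(sys.argv[0]) or os.curdir)
    print "script directory:", scriptdir
    print "would look for data in:", os.path.join(scriptdir, "data.txt")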
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 23:24:19 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 15:24:19 -0800 Subject: [Python-bugs-list] [Bug #124003] sys.path[0] is not the script directory Message-ID: <200012012324.PAA03082@sf-web1.i.sourceforge.net> Bug #124003, was updated on 2000-Nov-30 18:10 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: None Bug Group: 3rd Party Priority: 1 Submitted by: Seehof Assigned to : Nobody Summary: sys.path[0] is not the script directory Details: Reproduced on Windows (not tried on other systems) The docs say: """The first item of this list, path[0], is the directory containing the script that was used to invoke the Python interpreter. If the script directory is not available (e.g. if the interpreter is invoked interactively or if the script is read from standard input), path[0] is the empty string, which directs Python to search modules in the current directory first. Notice that the script directory is inserted before the entries inserted as a result of $PYTHONPATH.""" Instead, the script directory is appended to the end. I suspect alot of existing code may be affected since this is the most convenient way to find data files that are in the same directory as a script. Follow-Ups: Date: 2000-Nov-30 18:18 By: gvanrossum Comment: I don't believe this. I just tried this (on WIndows 98 with Python 2.0) and it correctly places the script directory at the front of sys.path. Can you create a simple script that prints sys.path, and run this to test your hypothesis? Show us the answer. ------------------------------------------------------- Date: 2000-Dec-01 14:18 By: Seehof Comment: Oh, I get it. PythonWin is doing something. The first entry ('') is the (unqualified) script directory. The last entry (the fully qualified script directory) is added by PythonWin. So there is no bug. Sorry 'bout the false alarm. On the other hand, it would seem more correct for path[0] to be '.' instead of '', or better yet it should be fully qualified so that a module could find who called it after changing directories. PythonWin 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32. Portions Copyright 1994-2000 Mark Hammond (MarkH@ActiveState.com) - see 'Help/About PythonWin' for further copyright information. >>> ['', 'c:\\python20\\installer', 'c:\\python20\\pythonwin','c:\\python20\\win32', 'c:\\python20\\win32\\lib', 'c:\\python20','c:\\python20\\dlls', 'c:\\python20\\lib', 'c:\\python20\\lib\\plat-win', 'c:\\python20\\lib\\lib-tk', 'D:\\qdev'] ... where D:\qdev\test.py is the name of my script. ------------------------------------------------------- Date: 2000-Dec-01 15:24 By: gvanrossum Comment: Pythonwin problem, not core Python. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124003&group_id=5470 From noreply@sourceforge.net Fri Dec 1 23:24:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 15:24:28 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012012324.PAA03085@sf-web1.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. 
Project: Python Category: demos and tools Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Fri Dec 1 23:25:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 15:25:18 -0800 Subject: [Python-bugs-list] [Bug #124060] Python 2.0 -- Problems with Unicode Translate Message-ID: <200012012325.PAA03106@sf-web1.i.sourceforge.net> Bug #124060, was updated on 2000-Dec-01 09:03 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: alburt Assigned to : Nobody Summary: Python 2.0 -- Problems with Unicode Translate Details: I don't know what this new-fangled Unicode stuff is all about. I do know that old code that has: string.translate(s, table) now bombs when "s" is Unicode. The definition of "string.translate" passes on the call with a "deletechars" argument that is not expected by the Unicode version. Using "str(s)" keeps Python 2.0 happy. -- Alastair P.S. Sorry if the bug is already reported but I do not know how to search past bug reports. Follow-Ups: Date: 2000-Dec-01 15:25 By: gvanrossum Comment: Marc already explained in imail the Unicode translate() method has a different signature. Maybe the string.translate function could special-case Unicode objects. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124060&group_id=5470 From noreply@sourceforge.net Fri Dec 1 23:25:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 15:25:18 -0800 Subject: [Python-bugs-list] [Bug #124060] Python 2.0 -- Problems with Unicode Translate Message-ID: <200012012325.PAA03109@sf-web1.i.sourceforge.net> Bug #124060, was updated on 2000-Dec-01 09:03 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: alburt Assigned to : lemburg Summary: Python 2.0 -- Problems with Unicode Translate Details: I don't know what this new-fangled Unicode stuff is all about. I do know that old code that has: string.translate(s, table) now bombs when "s" is Unicode. The definition of "string.translate" passes on the call with a "deletechars" argument that is not expected by the Unicode version. Using "str(s)" keeps Python 2.0 happy. -- Alastair P.S. Sorry if the bug is already reported but I do not know how to search past bug reports. Follow-Ups: Date: 2000-Dec-01 15:25 By: gvanrossum Comment: Marc already explained in imail the Unicode translate() method has a different signature. Maybe the string.translate function could special-case Unicode objects. 
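[Editor's note: a hypothetical sketch of the special case suggested in the
comment above, not the actual library change.  In Python 2.0 the Unicode
translate() method takes a mapping of ordinals and has no deletechars
argument, so a wrapper can only forward the calls that make sense for each
type; the wrapper name translate() is the editor's choice.]

    import string, types

    def translate(s, table, deletions=""):
        if type(s) is types.UnicodeType:
            if deletions:
                raise TypeError, "deletechars not supported for Unicode objects"
            # here table must map ordinals to ordinals, Unicode strings or None
            return s.translate(table)
        return string.translate(s, table, deletions)

For an 8-bit string the table still comes from string.maketrans(); for a
Unicode object it would look like {ord(u"'"): u'"'}.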
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124060&group_id=5470 From noreply@sourceforge.net Sat Dec 2 01:21:10 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 17:21:10 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012020121.RAA08477@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : Nobody Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Sat Dec 2 01:23:25 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 17:23:25 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012020123.RAA08525@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : Nobody Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. 
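[Editor's note: a user-level workaround sketch, not the interpreter fix
under discussion -- it simply treats the TypeError raised for the mixed
vanilla-instance / ExtensionClass case as "not an instance", so checks like
the last two in the test script above can run to completion.  The helper
name safe_isinstance is the editor's.]

    def safe_isinstance(obj, klass):
        try:
            return isinstance(obj, klass)
        except TypeError:
            # raised when a vanilla instance is tested against an
            # ExtensionClass-derived class (the bug reported here)
            return 0

With this helper, test(not safe_isinstance(c1, ChildEC)) prints "ok"
instead of raising.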
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Sat Dec 2 01:23:25 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 17:23:25 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012020123.RAA08528@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : nascheme Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Sat Dec 2 01:32:08 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 17:32:08 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012020132.RAA08645@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : nascheme Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. 
Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. ------------------------------------------------------- Date: 2000-Dec-01 17:32 By: nascheme Comment: Just to be clear, that's not quite what I said. This bug should be fixed as part of the coerce cleanups. I hope we can remove most of the cases in the interpreter where PyInstances are treated specially. I'm speculating that this would allow this bug to be easily fixed as well as opening the door for any type to be use as a base class. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Sat Dec 2 01:32:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 17:32:09 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012020132.RAA08648@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : nascheme Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. 
------------------------------------------------------- Date: 2000-Dec-01 17:32 By: nascheme Comment: Just to be clear, that's not quite what I said. This bug should be fixed as part of the coerce cleanups. I hope we can remove most of the cases in the interpreter where PyInstances are treated specially. I'm speculating that this would allow this bug to be easily fixed as well as opening the door for any type to be use as a base class. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Sat Dec 2 02:33:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 1 Dec 2000 18:33:37 -0800 Subject: [Python-bugs-list] [Bug #124120] filecmp.dircmp crashes with TypeError Message-ID: <200012020233.SAA22099@sf-web1.i.sourceforge.net> Bug #124120, was updated on 2000-Dec-01 18:33 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: filecmp.dircmp crashes with TypeError Details: By executing in Python 2.0 my_dircmp = filecmp.dircmp(sys.argv[1], sys.argv[2]) my_dircmp.report() a TypeError occurs: File "c:\python\lib\filecmp.py", line 241, in report if self.same_files: File "c:\python\lib\filecmp.py", line 147, in __getattr__ self.phase3() File "c:\python\lib\filecmp.py", line 214, in phase3 xx = cmpfiles(self.left, self.right, self.common_files) File "c:\python\lib\filecmp.py", line 288, in cmpfiles res[_cmp(ax, bx, shallow, use_statcache)].append(x) TypeError: too many arguments; expected 2, got 4 filecmp, 288: res[_cmp(ax, bx, shallow, use_statcache)].append(x) filecmp, 298: def _cmp(a, b): For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124120&group_id=5470 From noreply@sourceforge.net Sun Dec 3 14:12:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 3 Dec 2000 06:12:20 -0800 Subject: [Python-bugs-list] [Bug #124120] filecmp.dircmp crashes with TypeError Message-ID: <200012031412.GAA05656@sf-web2.i.sourceforge.net> Bug #124120, was updated on 2000-Dec-01 18:33 Here is a current snapshot of the bug. 
Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 6 Submitted by: Nobody Assigned to : moshez Summary: filecmp.dircmp crashes with TypeError Details: By executing in Python 2.0 my_dircmp = filecmp.dircmp(sys.argv[1], sys.argv[2]) my_dircmp.report() a TypeError occurs: File "c:\python\lib\filecmp.py", line 241, in report if self.same_files: File "c:\python\lib\filecmp.py", line 147, in __getattr__ self.phase3() File "c:\python\lib\filecmp.py", line 214, in phase3 xx = cmpfiles(self.left, self.right, self.common_files) File "c:\python\lib\filecmp.py", line 288, in cmpfiles res[_cmp(ax, bx, shallow, use_statcache)].append(x) TypeError: too many arguments; expected 2, got 4 filecmp, 288: res[_cmp(ax, bx, shallow, use_statcache)].append(x) filecmp, 298: def _cmp(a, b): For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124120&group_id=5470 From noreply@sourceforge.net Sun Dec 3 18:31:47 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 3 Dec 2000 10:31:47 -0800 Subject: [Python-bugs-list] [Bug #119822] urllib doesn't like unicode Message-ID: <200012031831.KAA09107@sf-web1.i.sourceforge.net> Bug #119822, was updated on 2000-Oct-30 17:12 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: dildog Assigned to : jhylton Summary: urllib doesn't like unicode Details: getting a file via urllib.urlopen(u"http://foobar/blahblah") produces undesirable results. Something like: File "/usr/local/lib/python2.0/urllib.py", line 61, in urlopen return _urlopener.open(url) File "/usr/local/lib/python2.0/urllib.py", line 166, in open return getattr(self, name)(url) File "/usr/local/lib/python2.0/urllib.py", line 248, in open_http host, selector = url ValueError: unpack sequence of wrong size Follow-Ups: Date: 2000-Nov-12 03:07 By: loewis Comment: A patch for that bug is in http://sourceforge.net/patch/?func=detailpatch&patch_id=102364&group_id=5470 ------------------------------------------------------- Date: 2000-Dec-03 10:31 By: loewis Comment: Fixed in urllib.py 1.108. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119822&group_id=5470 From noreply@sourceforge.net Sun Dec 3 18:31:47 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 3 Dec 2000 10:31:47 -0800 Subject: [Python-bugs-list] [Bug #119822] urllib doesn't like unicode Message-ID: <200012031831.KAA09110@sf-web1.i.sourceforge.net> Bug #119822, was updated on 2000-Oct-30 17:12 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: dildog Assigned to : jhylton Summary: urllib doesn't like unicode Details: getting a file via urllib.urlopen(u"http://foobar/blahblah") produces undesirable results. 
Something like: File "/usr/local/lib/python2.0/urllib.py", line 61, in urlopen return _urlopener.open(url) File "/usr/local/lib/python2.0/urllib.py", line 166, in open return getattr(self, name)(url) File "/usr/local/lib/python2.0/urllib.py", line 248, in open_http host, selector = url ValueError: unpack sequence of wrong size Follow-Ups: Date: 2000-Nov-12 03:07 By: loewis Comment: A patch for that bug is in http://sourceforge.net/patch/?func=detailpatch&patch_id=102364&group_id=5470 ------------------------------------------------------- Date: 2000-Dec-03 10:31 By: loewis Comment: Fixed in urllib.py 1.108. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119822&group_id=5470 From noreply@sourceforge.net Mon Dec 4 03:11:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 3 Dec 2000 19:11:40 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012040311.TAA21291@sf-web2.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. Follow-Ups: Date: 2000-Dec-03 19:11 By: tim_one Comment: A caret means that the character in the line two above and in the same column was replaced by the character in the line one above and in the same column. That's why you get a caret in the first example but not the second: the replacement involves two distinct columns. If you did get a caret in the second example, where would it go? If under the single quote from the line two above, it would look the single quote got replaced by the ü in für; if under the double quote from the line one above, like the first e in Kamelrennen got replaced by a double quote. Both readings would be wrong. Edit sequences aren't unique, and in the absence of an obvious and non-ambiguous way to show replacements across columns, ndiff settles for a *correct* sequence ("deren " was inserted, "'" was deleted, '"' was inserted). In this respect ndiff is functioning as designed, so it's not a bug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Mon Dec 4 03:11:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 3 Dec 2000 19:11:40 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012040311.TAA21288@sf-web2.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" 
lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. Follow-Ups: Date: 2000-Dec-03 19:11 By: tim_one Comment: A caret means that the character in the line two above and in the same column was replaced by the character in the line one above and in the same column. That's why you get a caret in the first example but not the second: the replacement involves two distinct columns. If you did get a caret in the second example, where would it go? If under the single quote from the line two above, it would look the single quote got replaced by the ü in für; if under the double quote from the line one above, like the first e in Kamelrennen got replaced by a double quote. Both readings would be wrong. Edit sequences aren't unique, and in the absence of an obvious and non-ambiguous way to show replacements across columns, ndiff settles for a *correct* sequence ("deren " was inserted, "'" was deleted, '"' was inserted). In this respect ndiff is functioning as designed, so it's not a bug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Mon Dec 4 08:16:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 00:16:24 -0800 Subject: [Python-bugs-list] [Bug #124324] Python 2.0/ConfigParser: "remove_option" raises "NameError" Message-ID: <200012040816.AAA23608@sf-web3.vaspecialprojects.com> Bug #124324, was updated on 2000-Dec-04 00:16 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: andijust Assigned to : Nobody Summary: Python 2.0/ConfigParser: "remove_option" raises "NameError" Details: When calling method "remove_option" on an existing option, this will raise exceptio NameError: Skript: import ConfigParser a=ConfigParser.ConfigParser() a.read("t.cfg") a.remove_option("sec1","opt12") cfg file I used: [sec1] opt11=11 opt12=12 opt13=13 Error message: Traceback (most recent call last): File "t", line 4, in ? a.remove_option("sec1","opt12") File "/users/dxcasi/user15/justa/tmp/Python-2.0/alpha/V4.0/lib/python2.0/ConfigParser.py", line 364, in remove_option existed = sectdict.has_key(key) NameError: There is no variable named 'key' For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124324&group_id=5470 From noreply@sourceforge.net Mon Dec 4 10:50:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 02:50:58 -0800 Subject: [Python-bugs-list] [Bug #124344] smtplib quoteaddr() has problems with RFC821 source routing Message-ID: <200012041050.CAA09180@sf-web3.vaspecialprojects.com> Bug #124344, was updated on 2000-Dec-04 02:50 Here is a current snapshot of the bug. 
Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: carey Assigned to : Nobody Summary: smtplib quoteaddr() has problems with RFC821 source routing Details: RFC821 defines source routed SMTP addresses of the form <@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>. RFC1123 (STD3) deprecates these kinds of addresses, but does not forbid them. If an address like this is passed to smtplib.quoteaddr(), the result is '<@USC-ISIE.ARPA>', which is useless, and illegal according to RFC821. smtplib should probably leave the source routing there, assuming anyone using an address like this knows what they're doing, and since any SMTP server "MUST" still accept this syntax. Alternatively, smtplib could just refuse to deliver to an address like this, with some justification. (RFC1123 section 5.2.19.) In any case, this isn't very important at all. I'll probably write a patch when I have some time, using one of the two solutions outlined above. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124344&group_id=5470 From noreply@sourceforge.net Mon Dec 4 12:28:02 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 04:28:02 -0800 Subject: [Python-bugs-list] [Bug #124120] filecmp.dircmp crashes with TypeError Message-ID: <200012041228.EAA19592@sf-web3.vaspecialprojects.com> Bug #124120, was updated on 2000-Dec-01 18:33 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: None Bug Group: None Priority: 6 Submitted by: Nobody Assigned to : moshez Summary: filecmp.dircmp crashes with TypeError Details: By executing in Python 2.0 my_dircmp = filecmp.dircmp(sys.argv[1], sys.argv[2]) my_dircmp.report() a TypeError occurs: File "c:\python\lib\filecmp.py", line 241, in report if self.same_files: File "c:\python\lib\filecmp.py", line 147, in __getattr__ self.phase3() File "c:\python\lib\filecmp.py", line 214, in phase3 xx = cmpfiles(self.left, self.right, self.common_files) File "c:\python\lib\filecmp.py", line 288, in cmpfiles res[_cmp(ax, bx, shallow, use_statcache)].append(x) TypeError: too many arguments; expected 2, got 4 filecmp, 288: res[_cmp(ax, bx, shallow, use_statcache)].append(x) filecmp, 298: def _cmp(a, b): Follow-Ups: Date: 2000-Dec-04 04:28 By: moshez Comment: Solved by applying a patch. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124120&group_id=5470 From noreply@sourceforge.net Mon Dec 4 12:28:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 04:28:01 -0800 Subject: [Python-bugs-list] [Bug #124120] filecmp.dircmp crashes with TypeError Message-ID: <200012041228.EAA19589@sf-web3.vaspecialprojects.com> Bug #124120, was updated on 2000-Dec-01 18:33 Here is a current snapshot of the bug. 
Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 6 Submitted by: Nobody Assigned to : moshez Summary: filecmp.dircmp crashes with TypeError Details: By executing in Python 2.0 my_dircmp = filecmp.dircmp(sys.argv[1], sys.argv[2]) my_dircmp.report() a TypeError occurs: File "c:\python\lib\filecmp.py", line 241, in report if self.same_files: File "c:\python\lib\filecmp.py", line 147, in __getattr__ self.phase3() File "c:\python\lib\filecmp.py", line 214, in phase3 xx = cmpfiles(self.left, self.right, self.common_files) File "c:\python\lib\filecmp.py", line 288, in cmpfiles res[_cmp(ax, bx, shallow, use_statcache)].append(x) TypeError: too many arguments; expected 2, got 4 filecmp, 288: res[_cmp(ax, bx, shallow, use_statcache)].append(x) filecmp, 298: def _cmp(a, b): Follow-Ups: Date: 2000-Dec-04 04:28 By: moshez Comment: Solved by applying a patch. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124120&group_id=5470 From noreply@sourceforge.net Mon Dec 4 12:57:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 04:57:45 -0800 Subject: [Python-bugs-list] [Bug #124367] problem with shelve not handling some db entries. Message-ID: <200012041257.EAA29546@sf-web1.i.sourceforge.net> Bug #124367, was updated on 2000-Dec-04 04:57 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sjordan Assigned to : Nobody Summary: problem with shelve not handling some db entries. Details: I've had a problem with shelve not handling specific database requests. For example: >>> import shelve >>> x = shelve.open('testshelve') >>> x['test'] = {} >>> x['test']['foobar'] = 'unf' >>> x['test'] {} >>> yet it works fine if I do: >>> x['test'] = {'foobar':'unf'} >>> x['test'] {'foobar': 'unf'} >>> this was using python2.0 on redhat linux 7.0 and also tried on freebsd4 running python2.0. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124367&group_id=5470 From noreply@sourceforge.net Mon Dec 4 13:32:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 05:32:37 -0800 Subject: [Python-bugs-list] [Bug #124324] Python 2.0/ConfigParser: "remove_option" raises "NameError" Message-ID: <200012041332.FAA00634@sf-web1.i.sourceforge.net> Bug #124324, was updated on 2000-Dec-04 00:16 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: andijust Assigned to : fdrake Summary: Python 2.0/ConfigParser: "remove_option" raises "NameError" Details: When calling method "remove_option" on an existing option, this will raise exceptio NameError: Skript: import ConfigParser a=ConfigParser.ConfigParser() a.read("t.cfg") a.remove_option("sec1","opt12") cfg file I used: [sec1] opt11=11 opt12=12 opt13=13 Error message: Traceback (most recent call last): File "t", line 4, in ? a.remove_option("sec1","opt12") File "/users/dxcasi/user15/justa/tmp/Python-2.0/alpha/V4.0/lib/python2.0/ConfigParser.py", line 364, in remove_option existed = sectdict.has_key(key) NameError: There is no variable named 'key' Follow-Ups: Date: 2000-Dec-04 05:32 By: gvanrossum Comment: Methinks 'key' should be 'option' in the source code. 
Extra points for adding a full test suite for ConfigParser -- there have been way too many embarrassing bug reports about it. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124324&group_id=5470 From noreply@sourceforge.net Mon Dec 4 13:32:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 05:32:37 -0800 Subject: [Python-bugs-list] [Bug #124324] Python 2.0/ConfigParser: "remove_option" raises "NameError" Message-ID: <200012041332.FAA00631@sf-web1.i.sourceforge.net> Bug #124324, was updated on 2000-Dec-04 00:16 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: andijust Assigned to : Nobody Summary: Python 2.0/ConfigParser: "remove_option" raises "NameError" Details: When calling method "remove_option" on an existing option, this will raise exceptio NameError: Skript: import ConfigParser a=ConfigParser.ConfigParser() a.read("t.cfg") a.remove_option("sec1","opt12") cfg file I used: [sec1] opt11=11 opt12=12 opt13=13 Error message: Traceback (most recent call last): File "t", line 4, in ? a.remove_option("sec1","opt12") File "/users/dxcasi/user15/justa/tmp/Python-2.0/alpha/V4.0/lib/python2.0/ConfigParser.py", line 364, in remove_option existed = sectdict.has_key(key) NameError: There is no variable named 'key' Follow-Ups: Date: 2000-Dec-04 05:32 By: gvanrossum Comment: Methinks 'key' should be 'option' in the source code. Extra points for adding a full test suite for ConfigParser -- there have been way too many embarrassing bug reports about it. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124324&group_id=5470 From noreply@sourceforge.net Mon Dec 4 13:41:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 05:41:05 -0800 Subject: [Python-bugs-list] [Bug #124367] problem with shelve not handling some db entries. Message-ID: <200012041341.FAA01695@sf-web1.i.sourceforge.net> Bug #124367, was updated on 2000-Dec-04 04:57 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: sjordan Assigned to : Nobody Summary: problem with shelve not handling some db entries. Details: I've had a problem with shelve not handling specific database requests. For example: >>> import shelve >>> x = shelve.open('testshelve') >>> x['test'] = {} >>> x['test']['foobar'] = 'unf' >>> x['test'] {} >>> yet it works fine if I do: >>> x['test'] = {'foobar':'unf'} >>> x['test'] {'foobar': 'unf'} >>> this was using python2.0 on redhat linux 7.0 and also tried on freebsd4 running python2.0. Follow-Ups: Date: 2000-Dec-04 05:41 By: gvanrossum Comment: This is not a bug, but a logical conclusion of how shelves work. The shelf object only sees setitem and getitem requests -- it doesn't see modifications to objects retrieved from it. In particular, x['test']['foobar'] = 'unf' *retrieves* x['test'] and copies it into an anonymous local dictionary, which is then modified and forgotten about -- it is never written back to the shelf using x['test'] = ... 
If you need to do this, try: dict = x['test'] dict['foobar'] = 'unf' x['test'] = dict ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124367&group_id=5470 From noreply@sourceforge.net Mon Dec 4 13:41:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 05:41:05 -0800 Subject: [Python-bugs-list] [Bug #124367] problem with shelve not handling some db entries. Message-ID: <200012041341.FAA01692@sf-web1.i.sourceforge.net> Bug #124367, was updated on 2000-Dec-04 04:57 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sjordan Assigned to : Nobody Summary: problem with shelve not handling some db entries. Details: I've had a problem with shelve not handling specific database requests. For example: >>> import shelve >>> x = shelve.open('testshelve') >>> x['test'] = {} >>> x['test']['foobar'] = 'unf' >>> x['test'] {} >>> yet it works fine if I do: >>> x['test'] = {'foobar':'unf'} >>> x['test'] {'foobar': 'unf'} >>> this was using python2.0 on redhat linux 7.0 and also tried on freebsd4 running python2.0. Follow-Ups: Date: 2000-Dec-04 05:41 By: gvanrossum Comment: This is not a bug, but a logical conclusion of how shelves work. The shelf object only sees setitem and getitem requests -- it doesn't see modifications to objects retrieved from it. In particular, x['test']['foobar'] = 'unf' *retrieves* x['test'] and copies it into an anonymous local dictionary, which is then modified and forgotten about -- it is never written back to the shelf using x['test'] = ... If you need to do this, try: dict = x['test'] dict['foobar'] = 'unf' x['test'] = dict ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124367&group_id=5470 From noreply@sourceforge.net Mon Dec 4 15:50:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 07:50:09 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012041550.HAA00949@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : nascheme Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. 
Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. ------------------------------------------------------- Date: 2000-Dec-01 17:32 By: nascheme Comment: Just to be clear, that's not quite what I said. This bug should be fixed as part of the coerce cleanups. I hope we can remove most of the cases in the interpreter where PyInstances are treated specially. I'm speculating that this would allow this bug to be easily fixed as well as opening the door for any type to be use as a base class. ------------------------------------------------------- Date: 2000-Dec-04 07:50 By: nascheme Comment: Fixed by patch #102630. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Mon Dec 4 15:50:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 07:50:09 -0800 Subject: [Python-bugs-list] [Bug #124106] isinstance() doesn't *quite* work on ExtensionClasses Message-ID: <200012041550.HAA00952@sf-web2.i.sourceforge.net> Bug #124106, was updated on 2000-Dec-01 17:21 Here is a current snapshot of the bug. Project: Python Category: Core Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: gward Assigned to : nascheme Summary: isinstance() doesn't *quite* work on ExtensionClasses Details: In 1.6, isinstance() and issubclass() were generalized so they work on ExtensionClass instances and classes as well as vanilla Python instances and classes... almost. Unfortunately, it doesn't work in one case: isinstance(inst, ECClass) where inst is a vanilla Python instance and ECClass an ExtensionClass-derived class raises TypeError with the message "second argument must be a class". Neil says this should work with his PyInstance_Check() patch, so I'm going to assign this one to him. Follow-Ups: Date: 2000-Dec-01 17:23 By: gward Comment: Oops, forgot to include this script that demonstrates the bug: """ from ExtensionClass import ExtensionClass, Base class SuperEC (Base): pass class ChildEC (SuperEC): pass class Super: pass class Child (Super): pass def test(cond): print cond and "ok" or "not ok" c1 = Child() c2 = ChildEC() test(issubclass(Child, Super)) test(issubclass(ChildEC, SuperEC)) test(isinstance(c1, Child)) test(isinstance(c1, Super)) test(not isinstance(c2, Child)) test(isinstance(c2, ChildEC)) test(isinstance(c2, SuperEC)) test(not isinstance(c1, ChildEC)) test(not isinstance(c1, SuperEC)) """ When I run this, I get the following output: """ ok ok ok ok ok ok ok Traceback (most recent call last): File "ectest", line 30, in ? 
test(not isinstance(c1, ChildEC)) TypeError: second argument must be a class """ Output is the same with Python 1.6 and 2.0. ------------------------------------------------------- Date: 2000-Dec-01 17:32 By: nascheme Comment: Just to be clear, that's not quite what I said. This bug should be fixed as part of the coerce cleanups. I hope we can remove most of the cases in the interpreter where PyInstances are treated specially. I'm speculating that this would allow this bug to be easily fixed as well as opening the door for any type to be use as a base class. ------------------------------------------------------- Date: 2000-Dec-04 07:50 By: nascheme Comment: Fixed by patch #102630. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124106&group_id=5470 From noreply@sourceforge.net Mon Dec 4 16:31:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 08:31:37 -0800 Subject: [Python-bugs-list] [Bug #124324] Python 2.0/ConfigParser: "remove_option" raises "NameError" Message-ID: <200012041631.IAA01815@sf-web2.i.sourceforge.net> Bug #124324, was updated on 2000-Dec-04 00:16 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: andijust Assigned to : fdrake Summary: Python 2.0/ConfigParser: "remove_option" raises "NameError" Details: When calling method "remove_option" on an existing option, this will raise exceptio NameError: Skript: import ConfigParser a=ConfigParser.ConfigParser() a.read("t.cfg") a.remove_option("sec1","opt12") cfg file I used: [sec1] opt11=11 opt12=12 opt13=13 Error message: Traceback (most recent call last): File "t", line 4, in ? a.remove_option("sec1","opt12") File "/users/dxcasi/user15/justa/tmp/Python-2.0/alpha/V4.0/lib/python2.0/ConfigParser.py", line 364, in remove_option existed = sectdict.has_key(key) NameError: There is no variable named 'key' Follow-Ups: Date: 2000-Dec-04 05:32 By: gvanrossum Comment: Methinks 'key' should be 'option' in the source code. Extra points for adding a full test suite for ConfigParser -- there have been way too many embarrassing bug reports about it. ------------------------------------------------------- Date: 2000-Dec-04 08:31 By: fdrake Comment: Fixed in Lib/ConfigParser.py revision 1.24, with needed coverage added in Lib/test/test_cfgparser.py revision 1.4. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124324&group_id=5470 From noreply@sourceforge.net Mon Dec 4 16:31:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 08:31:37 -0800 Subject: [Python-bugs-list] [Bug #124324] Python 2.0/ConfigParser: "remove_option" raises "NameError" Message-ID: <200012041631.IAA01812@sf-web2.i.sourceforge.net> Bug #124324, was updated on 2000-Dec-04 00:16 Here is a current snapshot of the bug. 
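The kind of check that new coverage needs to make is roughly the following; this is an illustrative sketch reusing the reporter's config data, not the actual test_cfgparser.py code:

    import ConfigParser

    f = open("t.cfg", "w")                    # same data as in the report
    f.write("[sec1]\nopt11=11\nopt12=12\nopt13=13\n")
    f.close()

    p = ConfigParser.ConfigParser()
    p.read("t.cfg")
    assert p.remove_option("sec1", "opt12")       # existing option: removal succeeds
    assert not p.has_option("sec1", "opt12")      # and the option is really gone
    assert not p.remove_option("sec1", "opt12")   # second removal reports nothing to do
    print "remove_option ok"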
Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: andijust Assigned to : fdrake Summary: Python 2.0/ConfigParser: "remove_option" raises "NameError" Details: When calling method "remove_option" on an existing option, this will raise exceptio NameError: Skript: import ConfigParser a=ConfigParser.ConfigParser() a.read("t.cfg") a.remove_option("sec1","opt12") cfg file I used: [sec1] opt11=11 opt12=12 opt13=13 Error message: Traceback (most recent call last): File "t", line 4, in ? a.remove_option("sec1","opt12") File "/users/dxcasi/user15/justa/tmp/Python-2.0/alpha/V4.0/lib/python2.0/ConfigParser.py", line 364, in remove_option existed = sectdict.has_key(key) NameError: There is no variable named 'key' Follow-Ups: Date: 2000-Dec-04 05:32 By: gvanrossum Comment: Methinks 'key' should be 'option' in the source code. Extra points for adding a full test suite for ConfigParser -- there have been way too many embarrassing bug reports about it. ------------------------------------------------------- Date: 2000-Dec-04 08:31 By: fdrake Comment: Fixed in Lib/ConfigParser.py revision 1.24, with needed coverage added in Lib/test/test_cfgparser.py revision 1.4. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124324&group_id=5470 From noreply@sourceforge.net Tue Dec 5 02:31:32 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 4 Dec 2000 18:31:32 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012050231.SAA25284@sf-web3.vaspecialprojects.com> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: rossrizer Assigned to : Nobody Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Tue Dec 5 17:10:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 09:10:09 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012051710.JAA29688@sf-web2.i.sourceforge.net> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : Nobody Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. 
Thanks, Gerry Wiener For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Tue Dec 5 19:27:31 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 11:27:31 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012051927.LAA18554@sf-web3.vaspecialprojects.com> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : Nobody Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. Thanks, Gerry Wiener Follow-Ups: Date: 2000-Dec-05 11:27 By: cgw Comment: It certainly would be easier to attempt to debug this if you gave some indication about the C executable in question. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Tue Dec 5 19:41:19 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 11:41:19 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012051941.LAA21127@sf-web3.vaspecialprojects.com> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : Nobody Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. Thanks, Gerry Wiener Follow-Ups: Date: 2000-Dec-05 11:27 By: cgw Comment: It certainly would be easier to attempt to debug this if you gave some indication about the C executable in question. ------------------------------------------------------- Date: 2000-Dec-05 11:41 By: gwiener Comment: It's probably not feasible to send the source code for the executable since the code is quite large in size and the executable also depends on a large number of libraries. The executable itself does not make any explicit use of threads. Its input is a set of files and so is its output. I'll try to cook up a simple example which exhibits the same behavior. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Tue Dec 5 23:50:46 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 15:50:46 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012052350.PAA25561@sf-web1.i.sourceforge.net> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : Nobody Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. Thanks, Gerry Wiener Follow-Ups: Date: 2000-Dec-05 11:27 By: cgw Comment: It certainly would be easier to attempt to debug this if you gave some indication about the C executable in question. ------------------------------------------------------- Date: 2000-Dec-05 11:41 By: gwiener Comment: It's probably not feasible to send the source code for the executable since the code is quite large in size and the executable also depends on a large number of libraries. The executable itself does not make any explicit use of threads. Its input is a set of files and so is its output. I'll try to cook up a simple example which exhibits the same behavior. ------------------------------------------------------- Date: 2000-Dec-05 15:50 By: gwiener Comment:  Here's a trivial example along with Python driver code which exhibits the problem. If the size of array m is decreased, the testsys code runs correctly under os.system(). Note that testsys1 by itself runs fine on the same system and also runs fine when executed by system() within a C program. -------------------------- testsys.py -------------------------- #!/usr/bin/env python import os print 'testing system' ret = os.system("testsys") print 'ret is ', ret ----------------------- testsys.c ----------------------- #include #include main(int argc, char **argv) { int i; int m[1000000]; for (i=0; i<10; i++) { sleep(1); printf("hello there\n"); } } ------------------------------ Output of testsys.py ------------------------------ testing system ret is 11 ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Tue Dec 5 23:53:12 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 15:53:12 -0800 Subject: [Python-bugs-list] [Bug #124629] urllib2.CustomProxyHandler has invalid code sequence Message-ID: <200012052353.PAA05783@sf-web2.i.sourceforge.net> Bug #124629, was updated on 2000-Dec-05 15:53 Here is a current snapshot of the bug. 
Project: Python Category: Library Status: Open Resolution: None Bug Group: Trash Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: urllib2.CustomProxyHandler has invalid code sequence Details: The method "do_proxy" is defined: def do_proxy(self, p, req): p return self.parent.open(req) Where the line "p" should probably be "p()". For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124629&group_id=5470 From noreply@sourceforge.net Tue Dec 5 23:59:33 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 5 Dec 2000 15:59:33 -0800 Subject: [Python-bugs-list] [Bug #124629] urllib2.CustomProxyHandler has invalid code sequence Message-ID: <200012052359.PAA20466@sf-web3.vaspecialprojects.com> Bug #124629, was updated on 2000-Dec-05 15:53 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Nobody Assigned to : jhylton Summary: urllib2.CustomProxyHandler has invalid code sequence Details: The method "do_proxy" is defined: def do_proxy(self, p, req): p return self.parent.open(req) Where the line "p" should probably be "p()". For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124629&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:37:19 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:37:19 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012061937.LAA26360@sf-web1.i.sourceforge.net> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : Nobody Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. Thanks, Gerry Wiener Follow-Ups: Date: 2000-Dec-05 11:27 By: cgw Comment: It certainly would be easier to attempt to debug this if you gave some indication about the C executable in question. ------------------------------------------------------- Date: 2000-Dec-05 11:41 By: gwiener Comment: It's probably not feasible to send the source code for the executable since the code is quite large in size and the executable also depends on a large number of libraries. The executable itself does not make any explicit use of threads. Its input is a set of files and so is its output. I'll try to cook up a simple example which exhibits the same behavior. ------------------------------------------------------- Date: 2000-Dec-05 15:50 By: gwiener Comment:  Here's a trivial example along with Python driver code which exhibits the problem. If the size of array m is decreased, the testsys code runs correctly under os.system(). Note that testsys1 by itself runs fine on the same system and also runs fine when executed by system() within a C program. 
-------------------------- testsys.py -------------------------- #!/usr/bin/env python import os print 'testing system' ret = os.system("testsys") print 'ret is ', ret ----------------------- testsys.c ----------------------- #include #include main(int argc, char **argv) { int i; int m[1000000]; for (i=0; i<10; i++) { sleep(1); printf("hello there\n"); } } ------------------------------ Output of testsys.py ------------------------------ testing system ret is 11 ------------------------------------------------------- Date: 2000-Dec-06 11:37 By: gvanrossum Comment: Yup. The same problem happens under Linux, too. The C program crashes with a SIGSEGV, apparently because it doesn't have enough stack space. Here's how I decode the s.system() return value: if ret&0xff: print "Killed by signal", ret&0x7f, if ret&0x80: print "-- core dumped", print else: print "Exit status", ret>>8 This prints "Killed by signal 11 -- core dumped" for me. The bug must be in the libc thread code, which apparently limits the stack size but doesn't reset the limit in a child process. Here's a work-around: ret = os.system("ulimit -s 8192; ./testsys") I'm closing the bug report, since there's nothing that *Python* can do to avoid this problem. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:37:19 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:37:19 -0800 Subject: [Python-bugs-list] [Bug #124572] Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Message-ID: <200012061937.LAA26364@sf-web1.i.sourceforge.net> Bug #124572, was updated on 2000-Dec-05 09:10 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Wont Fix Bug Group: Platform-specific Priority: 5 Submitted by: gwiener Assigned to : gvanrossum Summary: Python 2.0 os.system() failure, Debian Linux 2.2.17, i686 Details: Configuration: [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 When using os.system() to run a C executable, we noticed that the executable exited about 1/3 of the way through it's processing cycle. The same executable plus invocation ran to completion using system() within a C program. To work around the problem, we rebuilt Python using --with-threads=no. The os.system() call then worked as expected. Thanks, Gerry Wiener Follow-Ups: Date: 2000-Dec-05 11:27 By: cgw Comment: It certainly would be easier to attempt to debug this if you gave some indication about the C executable in question. ------------------------------------------------------- Date: 2000-Dec-05 11:41 By: gwiener Comment: It's probably not feasible to send the source code for the executable since the code is quite large in size and the executable also depends on a large number of libraries. The executable itself does not make any explicit use of threads. Its input is a set of files and so is its output. I'll try to cook up a simple example which exhibits the same behavior. ------------------------------------------------------- Date: 2000-Dec-05 15:50 By: gwiener Comment:  Here's a trivial example along with Python driver code which exhibits the problem. If the size of array m is decreased, the testsys code runs correctly under os.system(). Note that testsys1 by itself runs fine on the same system and also runs fine when executed by system() within a C program. 
-------------------------- testsys.py -------------------------- #!/usr/bin/env python import os print 'testing system' ret = os.system("testsys") print 'ret is ', ret ----------------------- testsys.c ----------------------- #include #include main(int argc, char **argv) { int i; int m[1000000]; for (i=0; i<10; i++) { sleep(1); printf("hello there\n"); } } ------------------------------ Output of testsys.py ------------------------------ testing system ret is 11 ------------------------------------------------------- Date: 2000-Dec-06 11:37 By: gvanrossum Comment: Yup. The same problem happens under Linux, too. The C program crashes with a SIGSEGV, apparently because it doesn't have enough stack space. Here's how I decode the s.system() return value: if ret&0xff: print "Killed by signal", ret&0x7f, if ret&0x80: print "-- core dumped", print else: print "Exit status", ret>>8 This prints "Killed by signal 11 -- core dumped" for me. The bug must be in the libc thread code, which apparently limits the stack size but doesn't reset the limit in a child process. Here's a work-around: ret = os.system("ulimit -s 8192; ./testsys") I'm closing the bug report, since there's nothing that *Python* can do to avoid this problem. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124572&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:39:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:39:44 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012061939.LAA26400@sf-web1.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: rossrizer Assigned to : Nobody Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:39:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:39:44 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012061939.LAA26403@sf-web1.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. 
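The status decoding shown above can also be written with the os.W* helpers, on platforms that provide them; "./testsys" is just the reporter's example binary, and the ulimit call repeats the workaround described in the comment, so this is a sketch rather than a general recipe:

    import os

    ret = os.system("ulimit -s 8192; ./testsys")  # raise the child's stack limit first
    if os.WIFSIGNALED(ret):
        print "Killed by signal", os.WTERMSIG(ret)
    elif os.WIFEXITED(ret):
        print "Exit status", os.WEXITSTATUS(ret)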
Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:50:16 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:50:16 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012061950.LAA27651@sf-web1.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. 
# Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. # Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). # Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. # === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . 
# === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! 
-d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber 
-rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:58:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:58:34 -0800 Subject: [Python-bugs-list] [Bug #124344] smtplib quoteaddr() has problems with RFC821 source routing Message-ID: <200012061958.LAA28838@sf-web1.i.sourceforge.net> Bug #124344, was updated on 2000-Dec-04 02:50 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: carey Assigned to : Nobody Summary: smtplib quoteaddr() has problems with RFC821 source routing Details: RFC821 defines source routed SMTP addresses of the form <@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>. RFC1123 (STD3) deprecates these kinds of addresses, but does not forbid them. If an address like this is passed to smtplib.quoteaddr(), the result is '<@USC-ISIE.ARPA>', which is useless, and illegal according to RFC821. smtplib should probably leave the source routing there, assuming anyone using an address like this knows what they're doing, and since any SMTP server "MUST" still accept this syntax. Alternatively, smtplib could just refuse to deliver to an address like this, with some justification. (RFC1123 section 5.2.19.) In any case, this isn't very important at all. I'll probably write a patch when I have some time, using one of the two solutions outlined above. Follow-Ups: Date: 2000-Dec-06 11:58 By: fdrake Comment: Assigned to the mail guy. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124344&group_id=5470 From noreply@sourceforge.net Wed Dec 6 19:58:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 11:58:34 -0800 Subject: [Python-bugs-list] [Bug #124344] smtplib quoteaddr() has problems with RFC821 source routing Message-ID: <200012061958.LAA28841@sf-web1.i.sourceforge.net> Bug #124344, was updated on 2000-Dec-04 02:50 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: carey Assigned to : bwarsaw Summary: smtplib quoteaddr() has problems with RFC821 source routing Details: RFC821 defines source routed SMTP addresses of the form <@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>. RFC1123 (STD3) deprecates these kinds of addresses, but does not forbid them. If an address like this is passed to smtplib.quoteaddr(), the result is '<@USC-ISIE.ARPA>', which is useless, and illegal according to RFC821. smtplib should probably leave the source routing there, assuming anyone using an address like this knows what they're doing, and since any SMTP server "MUST" still accept this syntax. Alternatively, smtplib could just refuse to deliver to an address like this, with some justification. (RFC1123 section 5.2.19.) In any case, this isn't very important at all. I'll probably write a patch when I have some time, using one of the two solutions outlined above. Follow-Ups: Date: 2000-Dec-06 11:58 By: fdrake Comment: Assigned to the mail guy. 
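For reference, one way the first option in the report (pass source-routed addresses through untouched) could look. This is an illustrative sketch, not the eventual smtplib patch, and the helper name is made up:

    import rfc822

    def quote_rcpt(addr):
        """Wrap addr in <>, leaving RFC 821 source routes such as
        @USC-ISIE.ARPA:JQP@MIT-AI.ARPA intact instead of truncating them."""
        addr = addr.strip()
        if addr[:1] == "<" and addr[-1:] == ">":
            addr = addr[1:-1]
        if addr[:1] == "@":              # source route: pass through verbatim
            return "<%s>" % addr
        return "<%s>" % rfc822.parseaddr(addr)[1]

    print quote_rcpt("<@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>")   # -> <@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>
    print quote_rcpt("jqp@mit-ai.arpa")                    # -> <jqp@mit-ai.arpa>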
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124344&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:14:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:14:29 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012062014.MAA16538@sf-web3.vaspecialprojects.com> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. # Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. 
# Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). # Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. # === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . # === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. 
-I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! 
-d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber 
-rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- Date: 2000-Dec-06 12:14 By: gvanrossum Comment: Aha, I see what you mean now. The brokenness of the Makefile is just that it doesn't define CCC, but uses it when compiling C++ source files. There's definitely a bug here: the makesetup script assumes that various filenames ending in .cc, .c++, .cxx etc. are C++ files and must be compiled with a compiler named $(CCC), but the make variable CCC isn't actually defined. There are scant references to it, but it's all commented out in the configure script. It appears that the correct macro is called CXX these days. Note, however, that the main Python configure script does not attempt to guess a default value for it; rather, you must pass it into configure with "--with-cxx=g++". I'll cook up a patch set, stay tuned. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:14:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:14:29 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012062014.MAA16535@sf-web3.vaspecialprojects.com> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. # Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. 
Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. # Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). # Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. # === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . 
# === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! 
-d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber 
-rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- Date: 2000-Dec-06 12:14 By: gvanrossum Comment: Aha, I see what you mean now. The brokenness of the Makefile is just that it doesn't define CCC, but uses it when compiling C++ source files. There's definitely a bug here: the makesetup script assumes that various filenames ending in .cc, .c++, .cxx etc. are C++ files and must be compiled with a compiler named $(CCC), but the make variable CCC isn't actually defined. There are scant references to it, but it's all commented out in the configure script. It appears that the correct macro is called CXX these days. Note, however, that the main Python configure script does not attempt to guess a default value for it; rather, you must pass it into configure with "--with-cxx=g++". I'll cook up a patch set, stay tuned. ------------------------------------------------------- Date: 2000-Dec-06 12:28 By: gvanrossum Comment: Try this patch: http://sourceforge.net/patch/?func=detailpatch&patch_id=102691&group_id=5470 You must reconfigure Python using the --with-cxx flag, and reinstall it, then use the newly installed Makefile.pre.in. Let me know if this works, then I will check in the patch and close the bug report. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:28:32 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:28:32 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012062028.MAA29926@sf-web2.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. 
-I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. # Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. # Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). # Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. 
# === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . # === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. 
This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! -d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print 
sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber -rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- Date: 2000-Dec-06 12:14 By: gvanrossum Comment: Aha, I see what you mean now. The brokenness of the Makefile is just that it doesn't define CCC, but uses it when compiling C++ source files. There's definitely a bug here: the makesetup script assumes that various filenames ending in .cc, .c++, .cxx etc. are C++ files and must be compiled with a compiler named $(CCC), but the make variable CCC isn't actually defined. There are scant references to it, but it's all commented out in the configure script. It appears that the correct macro is called CXX these days. Note, however, that the main Python configure script does not attempt to guess a default value for it; rather, you must pass it into configure with "--with-cxx=g++". I'll cook up a patch set, stay tuned. ------------------------------------------------------- Date: 2000-Dec-06 12:28 By: gvanrossum Comment: Try this patch: http://sourceforge.net/patch/?func=detailpatch&patch_id=102691&group_id=5470 You must reconfigure Python using the --with-cxx flag, and reinstall it, then use the newly installed Makefile.pre.in. Let me know if this works, then I will check in the patch and close the bug report. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:39:10 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:39:10 -0800 Subject: [Python-bugs-list] [Bug #124758] Bogus error message from os.getlogin() Message-ID: <200012062039.MAA01384@sf-web1.i.sourceforge.net> Bug #124758, was updated on 2000-Dec-06 12:39 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: Bogus error message from os.getlogin() Details: This is happening under Mandrake 7.2, although judging from the Python source, it would happen under any flavor of Unix. When the real getlogin() returns NULL, posix_error() is called with some old value in errno. This causes os.getlogin() to raise OSError with a bogus error message. A bit confusing to see "OSError: [Errno 2] No such file or directory" when attempting os.getlogin() :-) Bob Alexander bobalex@adc.rsv.ricoh.com For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124758&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:44:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:44:20 -0800 Subject: [Python-bugs-list] [Bug #124758] Bogus error message from os.getlogin() Message-ID: <200012062044.MAA02535@sf-web1.i.sourceforge.net> Bug #124758, was updated on 2000-Dec-06 12:39 Here is a current snapshot of the bug. 
Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: Nobody Assigned to : fdrake Summary: Bogus error message from os.getlogin() Details: This is happening under Mandrake 7.2, although judging from the Python source, it would happen under any flavor of Unix. When the real getlogin() returns NULL, posix_error() is called with some old value in errno. This causes os.getlogin() to raise OSError with a bogus error message. A bit confusing to see "OSError: [Errno 2] No such file or directory" when attempting os.getlogin() :-) Bob Alexander bobalex@adc.rsv.ricoh.com Follow-Ups: Date: 2000-Dec-06 12:44 By: gvanrossum Comment: Fred, would you mind fixing this? It seems to be your code. I'd reset errno to zero before calling getlogin(). But maybe errno is never set by getlogin() and then it's bogus to call posix_error(). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124758&group_id=5470 From noreply@sourceforge.net Wed Dec 6 20:44:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 12:44:20 -0800 Subject: [Python-bugs-list] [Bug #124758] Bogus error message from os.getlogin() Message-ID: <200012062044.MAA02532@sf-web1.i.sourceforge.net> Bug #124758, was updated on 2000-Dec-06 12:39 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: Bogus error message from os.getlogin() Details: This is happening under Mandrake 7.2, although judging from the Python source, it would happen under any flavor of Unix. When the real getlogin() returns NULL, posix_error() is called with some old value in errno. This causes os.getlogin() to raise OSError with a bogus error message. A bit confusing to see "OSError: [Errno 2] No such file or directory" when attempting os.getlogin() :-) Bob Alexander bobalex@adc.rsv.ricoh.com Follow-Ups: Date: 2000-Dec-06 12:44 By: gvanrossum Comment: Fred, would you mind fixing this? It seems to be your code. I'd reset errno to zero before calling getlogin(). But maybe errno is never set by getlogin() and then it's bogus to call posix_error(). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124758&group_id=5470 From noreply@sourceforge.net Wed Dec 6 21:21:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 13:21:43 -0800 Subject: [Python-bugs-list] [Bug #124764] P_DETACH advertised but not supported Message-ID: <200012062121.NAA24688@sf-web3.vaspecialprojects.com> Bug #124764, was updated on 2000-Dec-06 13:21 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: brey Assigned to : Nobody Summary: P_DETACH advertised but not supported Details: The os documentation (os-process.html) describes a P_DETACH mode for use with spawnv, which it says became available in version 1.52. However, using version 2.0 on Windows, there is no such name in the os module and looking at the code in os.py, P_DETACH doesn't seem to be supported at all. Did that feature go away? ever exist? hiding in an unspecified namespace? In any case, the docs and the code should be in sync. 
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124764&group_id=5470 From noreply@sourceforge.net Wed Dec 6 21:26:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 13:26:18 -0800 Subject: [Python-bugs-list] [Bug #124758] Bogus error message from os.getlogin() Message-ID: <200012062126.NAA24804@sf-web3.vaspecialprojects.com> Bug #124758, was updated on 2000-Dec-06 12:39 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: Nobody Assigned to : fdrake Summary: Bogus error message from os.getlogin() Details: This is happening under Mandrake 7.2, although judging from the Python source, it would happen under any flavor of Unix. When the real getlogin() returns NULL, posix_error() is called with some old value in errno. This causes os.getlogin() to raise OSError with a bogus error message. A bit confusing to see "OSError: [Errno 2] No such file or directory" when attempting os.getlogin() :-) Bob Alexander bobalex@adc.rsv.ricoh.com Follow-Ups: Date: 2000-Dec-06 12:44 By: gvanrossum Comment: Fred, would you mind fixing this? It seems to be your code. I'd reset errno to zero before calling getlogin(). But maybe errno is never set by getlogin() and then it's bogus to call posix_error(). ------------------------------------------------------- Date: 2000-Dec-06 13:26 By: fdrake Comment: This is very strange -- how are you getting getlogin() to even fail? The man page on my Mandrake 7.1 box says that if it returns null, there was an error getting the information, does not mention errno at all, but does document one particular errno result code under ERRORS. I can easily patch it to be certain about the errno usage, but I'm not sure what's happening to cause the error (sounds like a system configuration problem; perhaps the fs utmp is on is full, so there's no record?). Since this report was anonymous, I've just checked in the patch; hopefully it fixes everything in a reasonable way. Modules/posixmodule.c revision 2.177. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124758&group_id=5470 From noreply@sourceforge.net Wed Dec 6 21:26:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 13:26:18 -0800 Subject: [Python-bugs-list] [Bug #124758] Bogus error message from os.getlogin() Message-ID: <200012062126.NAA24807@sf-web3.vaspecialprojects.com> Bug #124758, was updated on 2000-Dec-06 12:39 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 3 Submitted by: Nobody Assigned to : fdrake Summary: Bogus error message from os.getlogin() Details: This is happening under Mandrake 7.2, although judging from the Python source, it would happen under any flavor of Unix. When the real getlogin() returns NULL, posix_error() is called with some old value in errno. This causes os.getlogin() to raise OSError with a bogus error message. A bit confusing to see "OSError: [Errno 2] No such file or directory" when attempting os.getlogin() :-) Bob Alexander bobalex@adc.rsv.ricoh.com Follow-Ups: Date: 2000-Dec-06 12:44 By: gvanrossum Comment: Fred, would you mind fixing this? It seems to be your code. I'd reset errno to zero before calling getlogin(). But maybe errno is never set by getlogin() and then it's bogus to call posix_error(). 
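By way of illustration, and independent of the C-level errno fix being discussed here, a common user-level workaround on Unix is to fall back to the pwd database when os.getlogin() fails. This is only a sketch of that workaround, not the patch that was checked in:

    import os, pwd

    def current_user():
        # os.getlogin() consults the controlling terminal / utmp and can fail,
        # for example under cron or when there is no usable utmp entry.
        try:
            return os.getlogin()
        except OSError:
            # Fall back to the password database, keyed on the real user id.
            return pwd.getpwuid(os.getuid())[0]

The fallback never consults errno at all, so it is immune to the stale-errno symptom described in this report.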
------------------------------------------------------- Date: 2000-Dec-06 13:26 By: fdrake Comment: This is very strange -- how are you getting getlogin() to even fail? The man page on my Mandrake 7.1 box says that if it returns null, there was an error getting the information, does not mention errno at all, but does document one particular errno result code under ERRORS. I can easily patch it to be certain about the errno usage, but I'm not sure what's happening to cause the error (sounds like a system configuration problem; perhaps the fs utmp is on is full, so there's no record?). Since this report was anonymous, I've just checked in the patch; hopefully it fixes everything in a reasonable way. Modules/posixmodule.c revision 2.177. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124758&group_id=5470 From noreply@sourceforge.net Wed Dec 6 21:26:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 13:26:29 -0800 Subject: [Python-bugs-list] [Bug #124764] P_DETACH advertised but not supported Message-ID: <200012062126.NAA03423@sf-web1.i.sourceforge.net> Bug #124764, was updated on 2000-Dec-06 13:21 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: brey Assigned to : Nobody Summary: P_DETACH advertised but not supported Details: The os documentation (os-process.html) describes a P_DETACH mode for use with spawnv, which it says became available in version 1.52. However, using version 2.0 on Windows, there is no such name in the os module and looking at the code in os.py, P_DETACH doesn't seem to be supported at all. Did that feature go away? ever exist? hiding in an unspecified namespace? In any case, the docs and the code should be in sync. Follow-Ups: Date: 2000-Dec-06 13:26 By: gvanrossum Comment: Don't look at the source code for os.py; look in posixmodule.c. It's really there; I just tried it (import os; print os.P_DETACH prints 4). Which Python version did you download? From where? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124764&group_id=5470 From noreply@sourceforge.net Wed Dec 6 21:26:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 13:26:29 -0800 Subject: [Python-bugs-list] [Bug #124764] P_DETACH advertised but not supported Message-ID: <200012062126.NAA03427@sf-web1.i.sourceforge.net> Bug #124764, was updated on 2000-Dec-06 13:21 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Invalid Bug Group: Platform-specific Priority: 5 Submitted by: brey Assigned to : Nobody Summary: P_DETACH advertised but not supported Details: The os documentation (os-process.html) describes a P_DETACH mode for use with spawnv, which it says became available in version 1.52. However, using version 2.0 on Windows, there is no such name in the os module and looking at the code in os.py, P_DETACH doesn't seem to be supported at all. Did that feature go away? ever exist? hiding in an unspecified namespace? In any case, the docs and the code should be in sync. Follow-Ups: Date: 2000-Dec-06 13:26 By: gvanrossum Comment: Don't look at the source code for os.py; look in posixmodule.c. It's really there; I just tried it (import os; print os.P_DETACH prints 4). Which Python version did you download? 
From where? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124764&group_id=5470 From noreply@sourceforge.net Wed Dec 6 22:07:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 14:07:34 -0800 Subject: [Python-bugs-list] [Bug #124764] P_DETACH advertised but not supported Message-ID: <200012062207.OAA28979@sf-web3.vaspecialprojects.com> Bug #124764, was updated on 2000-Dec-06 13:21 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Invalid Bug Group: Platform-specific Priority: 5 Submitted by: brey Assigned to : Nobody Summary: P_DETACH advertised but not supported Details: The os documentation (os-process.html) describes a P_DETACH mode for use with spawnv, which it says became available in version 1.52. However, using version 2.0 on Windows, there is no such name in the os module and looking at the code in os.py, P_DETACH doesn't seem to be supported at all. Did that feature go away? ever exist? hiding in an unspecified namespace? In any case, the docs and the code should be in sync. Follow-Ups: Date: 2000-Dec-06 13:26 By: gvanrossum Comment: Don't look at the source code for os.py; look in posixmodule.c. It's really there; I just tried it (import os; print os.P_DETACH prints 4). Which Python version did you download? From where? ------------------------------------------------------- Date: 2000-Dec-06 14:07 By: brey Comment: My mistake. I'm a Python brand-newbie, and a newbie mistake was mine. I was using P_DETACH where I should have used os.P_DETACH. I probably would have found the problem sooner had I not been confused by the os.py code. Then to confuse me more, I noticed that an os.pyc file was generated, so, even though I had considered there might be another file that had more functionality, I discounted that hypothesis. Alas, 'twas not the case. I downloaded Python from: http://www.python.org/ftp/python/2.0/BeOpen-Python-2.0.exe I didn't download the Python source code, so when I grepped for P_DETACH, I didn't have posixmodule.c to find it in. Thanks for the quick response. Sorry for the bother. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124764&group_id=5470 From noreply@sourceforge.net Wed Dec 6 23:47:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 15:47:57 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012062347.PAA01289@sf-web2.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was simply not designed to do.
I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. # Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. # Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). 
# Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. # === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . # === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. 
-I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! 
-d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber 
-rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- Date: 2000-Dec-06 12:14 By: gvanrossum Comment: Aha, I see what you mean now. The brokenness of the Makefile is just that it doesn't define CCC, but uses it when compiling C++ source files. There's definitely a bug here: the makesetup script assumes that various filenames ending in .cc, .c++, .cxx etc. are C++ files and must be compiled with a compiler named $(CCC), but the make variable CCC isn't actually defined. There are scant references to it, but it's all commented out in the configure script. It appears that the correct macro is called CXX these days. Note, however, that the main Python configure script does not attempt to guess a default value for it; rather, you must pass it into configure with "--with-cxx=g++". I'll cook up a patch set, stay tuned. ------------------------------------------------------- Date: 2000-Dec-06 12:28 By: gvanrossum Comment: Try this patch: http://sourceforge.net/patch/?func=detailpatch&patch_id=102691&group_id=5470 You must reconfigure Python using the --with-cxx flag, and reinstall it, then use the newly installed Makefile.pre.in. Let me know if this works, then I will check in the patch and close the bug report. ------------------------------------------------------- Date: 2000-Dec-06 15:47 By: gvanrossum Comment: The patch works for customer. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 23:47:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 15:47:57 -0800 Subject: [Python-bugs-list] [Bug #124478] make -f Makefile.pre.in boot fails in RedHat 7.0 Message-ID: <200012062347.PAA01292@sf-web2.i.sourceforge.net> Bug #124478, was updated on 2000-Dec-04 18:31 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: rossrizer Assigned to : gvanrossum Summary: make -f Makefile.pre.in boot fails in RedHat 7.0 Details: Running make -f Makefile.pre.in boot on RedHat 7.0 creates a broken Makefile If you manually add: CCC=g++ to Makefile.pre.in All is well. I do not have the depth of experience with python/make/sed/etc. to determine a more proper fix. Sorry. Follow-Ups: Date: 2000-Dec-06 11:39 By: gvanrossum Comment: Could you explain in what way the resulting Makefile is broken? What are you trying to do with the Makefile? I suspect this is a case of trying to do something that Makefile.pre.in was siply not designed to do. I would recommend looking into distutils for building extensions, instead of messing with Makefile.pre.in. I'll close this bug report unless I hear from you again in a week. ------------------------------------------------------- Date: 2000-Dec-06 11:50 By: rossrizer Comment: make -f Makefile.pre.in boot runs without reporting an error. Unfortunately the resultant makefile is broken in the CCC is undefined. So typing make results in the following: $ make fpic -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator -Wl,--whole-archive -Wl,--no-whole-archive -g -O2 -Wall -Wstrict-prototypes -I/usr/local/include/python2.0 -I/usr/local/include/python2.0 -DHAVE_CONFIG_H -c ./SimulationPython.cpp make: fpic: Command not found make: [SimulationPython.o] Error 127 (ignored) gcc -shared SimulationPython.o -Wall -D_DEBUG -D__WXGTK__ -I.. 
-I../spoon -I../simulator -Wl,--whole-archive ../spoon/libspoon.a ../simulator/libsimulation.a -Wl,--no-whole-archive -o simumodule.so gcc: SimulationPython.o: No such file or directory make: *** [simumodule.so] Error 1 Here is complete the Makefile that was generated: # Generated automatically from Makefile.pre by makesetup. # Generated automatically from Makefile.pre.in by sedscript. # Universal Unix Makefile for Python extensions # ============================================= # Short Instructions # ------------------ # 1. Build and install Python (1.5 or newer). # 2. "make -f Makefile.pre.in boot" # 3. "make" # You should now have a shared library. # Long Instructions # ----------------- # Build *and install* the basic Python 1.5 distribution. See the # Python README for instructions. (This version of Makefile.pre.in # only withs with Python 1.5, alpha 3 or newer.) # Create a file Setup.in for your extension. This file follows the # format of the Modules/Setup.in file; see the instructions there. # For a simple module called "spam" on file "spammodule.c", it can # contain a single line: # spam spammodule.c # You can build as many modules as you want in the same directory -- # just have a separate line for each of them in the Setup.in file. # If you want to build your extension as a shared library, insert a # line containing just the string # *shared* # at the top of your Setup.in file. # Note that the build process copies Setup.in to Setup, and then works # with Setup. It doesn't overwrite Setup when Setup.in is changed, so # while you're in the process of debugging your Setup.in file, you may # want to edit Setup instead, and copy it back to Setup.in later. # (All this is done so you can distribute your extension easily and # someone else can select the modules they actually want to build by # commenting out lines in the Setup file, without editing the # original. Editing Setup is also used to specify nonstandard # locations for include or library files.) # Copy this file (Misc/Makefile.pre.in) to the directory containing # your extension. # Run "make -f Makefile.pre.in boot". This creates Makefile # (producing Makefile.pre and sedscript as intermediate files) and # config.c, incorporating the values for sys.prefix, sys.exec_prefix # and sys.version from the installed Python binary. For this to work, # the python binary must be on your path. If this fails, try # make -f Makefile.pre.in Makefile VERSION=1.5 installdir= # where is the prefix used to install Python for installdir # (and possibly similar for exec_installdir=). # Note: "make boot" implies "make clobber" -- it assumes that when you # bootstrap you may have changed platforms so it removes all previous # output files. # If you are building your extension as a shared library (your # Setup.in file starts with *shared*), run "make" or "make sharedmods" # to build the shared library files. If you are building a statically # linked Python binary (the only solution of your platform doesn't # support shared libraries, and sometimes handy if you want to # distribute or install the resulting Python binary), run "make # python". # Note: Each time you edit Makefile.pre.in or Setup, you must run # "make Makefile" before running "make". # Hint: if you want to use VPATH, you can start in an empty # subdirectory and say (e.g.): # make -f ../Makefile.pre.in boot srcdir=.. VPATH=.. 
# === Bootstrap variables (edited through "make boot") === # The prefix used by "make inclinstall libainstall" of core python installdir= /usr/local # The exec_prefix used by the same exec_installdir=/usr/local # Source directory and VPATH in case you want to use VPATH. # (You will have to edit these two lines yourself -- there is no # automatic support as the Makefile is not generated by # config.status.) srcdir= . VPATH= . # === Variables that you may want to customize (rarely) === # (Static) build target TARGET= python # Installed python binary (used only by boot target) PYTHON= python # Add more -I and -D options here CFLAGS= $(OPT) -I$(INCLUDEPY) -I$(EXECINCLUDEPY) $(DEFS) # These two variables can be set in Setup to merge extensions. # See example[23]. BASELIB= BASESETUP= # === Variables set by makesetup === MODOBJS= MODLIBS= $(LOCALMODLIBS) $(BASEMODLIBS) # === Definitions added by makesetup === LOCALMODLIBS= BASEMODLIBS= SHAREDMODS= simumodule$(SO) TKPATH=:lib-tk GLHACK=-Dclear=__GLclear PYTHONPATH=$(COREPYTHONPATH) COREPYTHONPATH=$(DESTPATH)$(SITEPATH)$(TESTPATH)$(MACHDEPPATH)$(TKPATH) MACHDEPPATH=:plat-$(MACHDEP) TESTPATH= SITEPATH= DESTPATH= MACHDESTLIB=$(BINLIBDEST) DESTLIB=$(LIBDEST) SHITE2=-Wl,--no-whole-archive SHITE1=-Wl,--whole-archive CPPFLAGS= -Wall -D_DEBUG -D__WXGTK__ -I.. -I../spoon -I../simulator # === Variables from configure (through sedscript) === VERSION= 2.0 CC= gcc LINKCC= $(PURIFY) $(CC) SGI_ABI= OPT= -g -O2 -Wall -Wstrict-prototypes LDFLAGS= LDLAST= DEFS= -DHAVE_CONFIG_H LIBS= -lpthread -ldl -lutil LIBM= -lm LIBC= RANLIB= ranlib MACHDEP= linux2 SO= .so LDSHARED= gcc -shared CCSHARED= -fpic LINKFORSHARED= -Xlinker -export-dynamic # Install prefix for architecture-independent files prefix= /usr/local # Install prefix for architecture-dependent files exec_prefix= ${prefix} # Uncomment the following two lines for AIX #LINKCC= $(LIBPL)/makexp_aix $(LIBPL)/python.exp "" $(LIBRARY); $(PURIFY) $(CC) #LDSHARED= $(LIBPL)/ld_so_aix $(CC) -bI:$(LIBPL)/python.exp # === Fixed definitions === # Shell used by make (some versions default to the login shell, which is bad) SHELL= /bin/sh # Expanded directories BINDIR= $(exec_installdir)/bin LIBDIR= $(exec_prefix)/lib MANDIR= $(installdir)/share/man INCLUDEDIR= $(installdir)/include SCRIPTDIR= $(prefix)/lib # Detailed destination directories BINLIBDEST= $(LIBDIR)/python$(VERSION) LIBDEST= $(SCRIPTDIR)/python$(VERSION) INCLUDEPY= $(INCLUDEDIR)/python$(VERSION) EXECINCLUDEPY= $(exec_installdir)/include/python$(VERSION) LIBP= $(exec_installdir)/lib/python$(VERSION) DESTSHARED= $(BINLIBDEST)/site-packages LIBPL= $(LIBP)/config PYTHONLIBS= $(LIBPL)/libpython$(VERSION).a MAKESETUP= $(LIBPL)/makesetup MAKEFILE= $(LIBPL)/Makefile CONFIGC= $(LIBPL)/config.c CONFIGCIN= $(LIBPL)/config.c.in SETUP= $(LIBPL)/Setup.thread $(LIBPL)/Setup.local $(LIBPL)/Setup SYSLIBS= $(LIBM) $(LIBC) ADDOBJS= $(LIBPL)/python.o config.o # Portable install script (configure doesn't always guess right) INSTALL= $(LIBPL)/install-sh -c # Shared libraries must be installed with executable mode on some systems; # rather than figuring out exactly which, we always give them executable mode. # Also, making them read-only seems to be a good idea... INSTALL_SHARED= ${INSTALL} -m 555 # === Fixed rules === # Default target. 
This builds shared libraries only default: sharedmods # Build everything all: static sharedmods # Build shared libraries from our extension modules sharedmods: $(SHAREDMODS) # Build a static Python binary containing our extension modules static: $(TARGET) $(TARGET): $(ADDOBJS) lib.a $(PYTHONLIBS) Makefile $(BASELIB) $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) \ $(ADDOBJS) lib.a $(PYTHONLIBS) \ $(LINKPATH) $(BASELIB) $(MODLIBS) $(LIBS) $(SYSLIBS) \ -o $(TARGET) $(LDLAST) install: sharedmods if test ! -d $(DESTSHARED) ; then \ mkdir $(DESTSHARED) ; else true ; fi -for i in X $(SHAREDMODS); do \ if test $$i != X; \ then $(INSTALL_SHARED) $$i $(DESTSHARED)/$$i; \ fi; \ done # Build the library containing our extension modules lib.a: $(MODOBJS) -rm -f lib.a ar cr lib.a $(MODOBJS) -$(RANLIB) lib.a # This runs makesetup *twice* to use the BASESETUP definition from Setup config.c Makefile: Makefile.pre Setup $(BASESETUP) $(MAKESETUP) $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) $(MAKE) -f Makefile do-it-again # Internal target to run makesetup for the second time do-it-again: $(MAKESETUP) \ -m Makefile.pre -c $(CONFIGCIN) Setup -n $(BASESETUP) $(SETUP) # Make config.o from the config.c created by makesetup config.o: config.c $(CC) $(CFLAGS) -c config.c # Setup is copied from Setup.in *only* if it doesn't yet exist Setup: cp $(srcdir)/Setup.in Setup # Make the intermediate Makefile.pre from Makefile.pre.in Makefile.pre: Makefile.pre.in sedscript sed -f sedscript $(srcdir)/Makefile.pre.in >Makefile.pre # Shortcuts to make the sed arguments on one line P=prefix E=exec_prefix H=Generated automatically from Makefile.pre.in by sedscript. L=LINKFORSHARED # Make the sed script used to create Makefile.pre from Makefile.pre.in sedscript: $(MAKEFILE) sed -n \ -e '1s/.*/1i\\/p' \ -e '2s%.*%# $H%p' \ -e '/^VERSION=/s/^VERSION=[ ]*\(.*\)/s%@VERSION[@]%\1%/p' \ -e '/^CC=/s/^CC=[ ]*\(.*\)/s%@CC[@]%\1%/p' \ -e '/^CCC=/s/^CCC=[ ]*\(.*\)/s%#@SET_CCC[@]%CCC=\1%/p' \ -e '/^LINKCC=/s/^LINKCC=[ ]*\(.*\)/s%@LINKCC[@]%\1%/p' \ -e '/^OPT=/s/^OPT=[ ]*\(.*\)/s%@OPT[@]%\1%/p' \ -e '/^LDFLAGS=/s/^LDFLAGS=[ ]*\(.*\)/s%@LDFLAGS[@]%\1%/p' \ -e '/^LDLAST=/s/^LDLAST=[ ]*\(.*\)/s%@LDLAST[@]%\1%/p' \ -e '/^DEFS=/s/^DEFS=[ ]*\(.*\)/s%@DEFS[@]%\1%/p' \ -e '/^LIBS=/s/^LIBS=[ ]*\(.*\)/s%@LIBS[@]%\1%/p' \ -e '/^LIBM=/s/^LIBM=[ ]*\(.*\)/s%@LIBM[@]%\1%/p' \ -e '/^LIBC=/s/^LIBC=[ ]*\(.*\)/s%@LIBC[@]%\1%/p' \ -e '/^RANLIB=/s/^RANLIB=[ ]*\(.*\)/s%@RANLIB[@]%\1%/p' \ -e '/^MACHDEP=/s/^MACHDEP=[ ]*\(.*\)/s%@MACHDEP[@]%\1%/p' \ -e '/^SO=/s/^SO=[ ]*\(.*\)/s%@SO[@]%\1%/p' \ -e '/^LDSHARED=/s/^LDSHARED=[ ]*\(.*\)/s%@LDSHARED[@]%\1%/p' \ -e '/^CCSHARED=/s/^CCSHARED=[ ]*\(.*\)/s%@CCSHARED[@]%\1%/p' \ -e '/^SGI_ABI=/s/^SGI_ABI=[ ]*\(.*\)/s%@SGI_ABI[@]%\1%/p' \ -e '/^$L=/s/^$L=[ ]*\(.*\)/s%@$L[@]%\1%/p' \ -e '/^$P=/s/^$P=\(.*\)/s%^$P=.*%$P=\1%/p' \ -e '/^$E=/s/^$E=\(.*\)/s%^$E=.*%$E=\1%/p' \ $(MAKEFILE) >sedscript echo "/^#@SET_CCC@/d" >>sedscript echo "/^installdir=/s%=.*%= $(installdir)%" >>sedscript echo "/^exec_installdir=/s%=.*%=$(exec_installdir)%" >>sedscript echo "/^srcdir=/s%=.*%= $(srcdir)%" >>sedscript echo "/^VPATH=/s%=.*%= $(VPATH)%" >>sedscript echo "/^LINKPATH=/s%=.*%= $(LINKPATH)%" >>sedscript echo "/^BASELIB=/s%=.*%= $(BASELIB)%" >>sedscript echo "/^BASESETUP=/s%=.*%= $(BASESETUP)%" >>sedscript # Bootstrap target boot: clobber VERSION=`$(PYTHON) -c "import sys; print sys.version[:3]"`; \ installdir=`$(PYTHON) -c "import sys; print sys.prefix"`; \ exec_installdir=`$(PYTHON) -c "import sys; print 
sys.exec_prefix"`; \ $(MAKE) -f $(srcdir)/Makefile.pre.in VPATH=$(VPATH) srcdir=$(srcdir) \ VERSION=$$VERSION \ installdir=$$installdir \ exec_installdir=$$exec_installdir \ Makefile # Handy target to remove intermediate files and backups clean: -rm -f *.o *~ # Handy target to remove everything that is easily regenerated clobber: clean -rm -f *.a tags TAGS config.c Makefile.pre $(TARGET) sedscript -rm -f *.so *.sl so_locations # Handy target to remove everything you don't want to distribute distclean: clobber -rm -f Makefile Setup # Rules appended by makedepend ------------------------------------------------------- Date: 2000-Dec-06 12:14 By: gvanrossum Comment: Aha, I see what you mean now. The brokenness of the Makefile is just that it doesn't define CCC, but uses it when compiling C++ source files. There's definitely a bug here: the makesetup script assumes that various filenames ending in .cc, .c++, .cxx etc. are C++ files and must be compiled with a compiler named $(CCC), but the make variable CCC isn't actually defined. There are scant references to it, but it's all commented out in the configure script. It appears that the correct macro is called CXX these days. Note, however, that the main Python configure script does not attempt to guess a default value for it; rather, you must pass it into configure with "--with-cxx=g++". I'll cook up a patch set, stay tuned. ------------------------------------------------------- Date: 2000-Dec-06 12:28 By: gvanrossum Comment: Try this patch: http://sourceforge.net/patch/?func=detailpatch&patch_id=102691&group_id=5470 You must reconfigure Python using the --with-cxx flag, and reinstall it, then use the newly installed Makefile.pre.in. Let me know if this works, then I will check in the patch and close the bug report. ------------------------------------------------------- Date: 2000-Dec-06 15:47 By: gvanrossum Comment: The patch works for customer. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124478&group_id=5470 From noreply@sourceforge.net Wed Dec 6 23:50:23 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 15:50:23 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: <200012062350.PAA01317@sf-web2.i.sourceforge.net> Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gvanrossum Assigned to : Nobody Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Wed Dec 6 23:50:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 15:50:37 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: <200012062350.PAA01323@sf-web2.i.sourceforge.net> Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. 
Project: Python Category: Build Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : Nobody Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Thu Dec 7 01:04:12 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 17:04:12 -0800 Subject: [Python-bugs-list] [Bug #124791] math.modf, math.floor, math.ceil give misleading result Message-ID: <200012070104.RAA16885@sf-web1.i.sourceforge.net> Bug #124791, was updated on 2000-Dec-06 17:04 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: math.modf, math.floor, math.ceil give misleading result Details: c=64 e=1.0/3.0 x=pow(c,e) We are calculating the cube root of 64 x 4.0 ...which is 4 math.modf(x) (1.0,3.0) This is misleading. Should be (0.0,4.0) This happens in version 1.5.2 on my 486. Note the following y=4.0 math.modf(y) (0.0,4.0) All as it should be. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124791&group_id=5470 From noreply@sourceforge.net Thu Dec 7 01:27:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 17:27:44 -0800 Subject: [Python-bugs-list] [Bug #124791] math.modf, math.floor, math.ceil give misleading result Message-ID: <200012070127.RAA22749@sf-web3.vaspecialprojects.com> Bug #124791, was updated on 2000-Dec-06 17:04 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: math.modf, math.floor, math.ceil give misleading result Details: c=64 e=1.0/3.0 x=pow(c,e) We are calculating the cube root of 64 x 4.0 ...which is 4 math.modf(x) (1.0,3.0) This is misleading. Should be (0.0,4.0) This happens in version 1.5.2 on my 486. Note the following y=4.0 math.modf(y) (0.0,4.0) All as it should be. Follow-Ups: Date: 2000-Dec-06 17:27 By: gvanrossum Comment: Trying this in Python 2.0, I get this: >>> c=64 >>> e=1.0/3.0 >>> x=pow(c,e) >>> x 3.9999999999999996 >>> import math >>> math.modf(x) (0.99999999999999956, 3.0) >>> e 0.33333333333333331 >>> In other words, e is a little less than 1/3, and x is a little less than 4. In Python 1.5.2, these values are printed after some rounding, which caused the confusion. 
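To make the explanation above concrete, here is a short sketch in ordinary Python showing why modf() behaves this way, and one way to get the expected split when the value is supposed to be an integer:

    import math

    x = pow(64, 1.0/3.0)          # a little less than 4.0 in binary floating point
    frac, whole = math.modf(x)    # roughly (0.99999999999999956, 3.0), as shown above

    # If x is meant to be an integer, round it (or compare it against a small
    # tolerance) before splitting off the fractional part:
    frac, whole = math.modf(round(x))   # (0.0, 4.0)
    assert abs(x - 4.0) < 1e-9          # x differs from 4 only by rounding error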
See also the entry on Floating Point in the Python 2.0 FAQ: http://www.python.org/cgi-bin/moinmoin/FrequentlyAskedQuestions ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124791&group_id=5470 From noreply@sourceforge.net Thu Dec 7 01:27:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 17:27:44 -0800 Subject: [Python-bugs-list] [Bug #124791] math.modf, math.floor, math.ceil give misleading result Message-ID: <200012070127.RAA22752@sf-web3.vaspecialprojects.com> Bug #124791, was updated on 2000-Dec-06 17:04 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: math.modf, math.floor, math.ceil give misleading result Details: c=64 e=1.0/3.0 x=pow(c,e) We are calculating the cube root of 64 x 4.0 ...which is 4 math.modf(x) (1.0,3.0) This is misleading. Should be (0.0,4.0) This happens in version 1.5.2 on my 486. Note the following y=4.0 math.modf(y) (0.0,4.0) All as it should be. Follow-Ups: Date: 2000-Dec-06 17:27 By: gvanrossum Comment: Trying this in Python 2.0, I get this: >>> c=64 >>> e=1.0/3.0 >>> x=pow(c,e) >>> x 3.9999999999999996 >>> import math >>> math.modf(x) (0.99999999999999956, 3.0) >>> e 0.33333333333333331 >>> In other words, e is a little less than 1/3, and x is a little less than 4. In Python 1.5.2, these values are printed after some rounding, which caused the confusion. See also the entry on Floating Point in the Python 2.0 FAQ: http://www.python.org/cgi-bin/moinmoin/FrequentlyAskedQuestions ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124791&group_id=5470 From noreply@sourceforge.net Thu Dec 7 02:33:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 18:33:44 -0800 Subject: [Python-bugs-list] [Bug #124791] math.modf, math.floor, math.ceil give misleading result Message-ID: <200012070233.SAA23286@sf-web1.i.sourceforge.net> Bug #124791, was updated on 2000-Dec-06 17:04 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: math.modf, math.floor, math.ceil give misleading result Details: c=64 e=1.0/3.0 x=pow(c,e) We are calculating the cube root of 64 x 4.0 ...which is 4 math.modf(x) (1.0,3.0) This is misleading. Should be (0.0,4.0) This happens in version 1.5.2 on my 486. Note the following y=4.0 math.modf(y) (0.0,4.0) All as it should be. Follow-Ups: Date: 2000-Dec-06 17:27 By: gvanrossum Comment: Trying this in Python 2.0, I get this: >>> c=64 >>> e=1.0/3.0 >>> x=pow(c,e) >>> x 3.9999999999999996 >>> import math >>> math.modf(x) (0.99999999999999956, 3.0) >>> e 0.33333333333333331 >>> In other words, e is a little less than 1/3, and x is a little less than 4. In Python 1.5.2, these values are printed after some rounding, which caused the confusion. See also the entry on Floating Point in the Python 2.0 FAQ: http://www.python.org/cgi-bin/moinmoin/FrequentlyAskedQuestions ------------------------------------------------------- Date: 2000-Dec-06 18:33 By: tim_one Comment: Just jumping in to clarify that what changed between 1.5.2 and 2.0 is how floats get displayed, not what gets computed. 
Even under 1.5.2 you should see something like this: >>> pow(64, 1./3) == 4 0 >>> pow(64, 1./3) - 4 -4.4408920985006262e-016 >>> although the last line will display fewer digits under 1.5.2. BTW, why are math.floor and math.ceil mentioned in the bug report title? They aren't mentioned in the bug report. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124791&group_id=5470 From noreply@sourceforge.net Thu Dec 7 02:33:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 6 Dec 2000 18:33:44 -0800 Subject: [Python-bugs-list] [Bug #124791] math.modf, math.floor, math.ceil give misleading result Message-ID: <200012070233.SAA23291@sf-web1.i.sourceforge.net> Bug #124791, was updated on 2000-Dec-06 17:04 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: Nobody Assigned to : Nobody Summary: math.modf, math.floor, math.ceil give misleading result Details: c=64 e=1.0/3.0 x=pow(c,e) We are calculating the cube root of 64 x 4.0 ...which is 4 math.modf(x) (1.0,3.0) This is misleading. Should be (0.0,4.0) This happens in version 1.5.2 on my 486. Note the following y=4.0 math.modf(y) (0.0,4.0) All as it should be. Follow-Ups: Date: 2000-Dec-06 17:27 By: gvanrossum Comment: Trying this in Python 2.0, I get this: >>> c=64 >>> e=1.0/3.0 >>> x=pow(c,e) >>> x 3.9999999999999996 >>> import math >>> math.modf(x) (0.99999999999999956, 3.0) >>> e 0.33333333333333331 >>> In other words, e is a little less than 1/3, and x is a little less than 4. In Python 1.5.2, these values are printed after some rounding, which caused the confusion. See also the entry on Floating Point in the Python 2.0 FAQ: http://www.python.org/cgi-bin/moinmoin/FrequentlyAskedQuestions ------------------------------------------------------- Date: 2000-Dec-06 18:33 By: tim_one Comment: Just jumping in to clarify that what changed between 1.5.2 and 2.0 is how floats get displayed, not what gets computed. Even under 1.5.2 you should see something like this: >>> pow(64, 1./3) == 4 0 >>> pow(64, 1./3) - 4 -4.4408920985006262e-016 >>> although the last line will display fewer digits under 1.5.2. BTW, why are math.floor and math.ceil mentioned in the bug report title? They aren't mentioned in the bug report. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124791&group_id=5470 From noreply@sourceforge.net Thu Dec 7 10:38:14 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 7 Dec 2000 02:38:14 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012071038.CAA24209@sf-web1.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? 
^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. Follow-Ups: Date: 2000-Dec-03 19:11 By: tim_one Comment: A caret means that the character in the line two above and in the same column was replaced by the character in the line one above and in the same column. That's why you get a caret in the first example but not the second: the replacement involves two distinct columns. If you did get a caret in the second example, where would it go? If under the single quote from the line two above, it would look the single quote got replaced by the ü in für; if under the double quote from the line one above, like the first e in Kamelrennen got replaced by a double quote. Both readings would be wrong. Edit sequences aren't unique, and in the absence of an obvious and non-ambiguous way to show replacements across columns, ndiff settles for a *correct* sequence ("deren " was inserted, "'" was deleted, '"' was inserted). In this respect ndiff is functioning as designed, so it's not a bug. ------------------------------------------------------- Date: 2000-Dec-07 02:38 By: flight Comment: [Is such a long comment still appropriate for the SF BTS ?] Tim, could you please explain the meaning of the remaining symbols (plus, minus) as well ? I think their meaning is far from being intuitive, then. > A caret means that the character in the line two above and in the same > column was replaced by the character in the line one above and in the same > column. How about this example, then ? Why is there a caret ? freefly;44> cat a 1 2 3 5 freefly;45> cat b 1 3 4 5 freefly;46> ./ndiff.py -q a b - 1 2 3 5 + 1 3 4 5 ? -^+ Sorry, but i have the impression that the format used in the edit lines is indeed ambigous by definition. > That's why you get a caret in the first example but not the > second: the replacement involves two distinct columns. > Edit sequences aren't unique, and in the absence of an obvious and > non-ambiguous way to show replacements across columns, ndiff settles for a > *correct* sequence ("deren " was inserted, "'" was deleted, '"' was > inserted). In this respect ndiff is functioning as designed, so it's not a > bug. Please describe the intended meaning of '+' and '-', and I will give you an counter-example that ndiff.py doesn't output a correct sequence for. I think it's especially annoying that the edit line doesn't reflect the information that the algorithm used in fancy_replace generates (if you run my first example, the algorithm will in fact record an 'replace' event, but the output routine will degenerate this into an 'insert' and a 'delete' event. Resp. uniqueness and ambiguity: It depends on the definition of an edit line. You won't find a definition that keeps the edit line in sync (column-wise) with both the pre and the post lines. If you try to keep the edit line in sync (column-wise) with the pre line, that's fine for '^' (meaning: character in this column has been changed) and '-' (meaning: character in this column has been removed), but you won't be able to record '+' events, since there's no column in the pre line where a '+' event might be recorded. (Similarly, if you tried to keep the edit line in sync with the post line.) - one two three four five six seven + one three fxur 123456 five 987 six seven ? 
---- + +^+++++ ++++ One way to work around this would be to output two edit lines: A pre-edit line would be synced (column-wise) with the pre line, and it would record all '-' and '^' events. A post-edit line would record all '+' and '^' events, and would be in sync with the post line. Unambigous and quite intuitive: - one two three four five six seven ? ---- ^ + one three fxur 123456 five 987 six seven ? ^ +++++++ ++++ A second way to define an unambigous edit line format (but not really friendly to eyeball inspection) would be to use the pre-edit line described above, and, in a second step to merge the '+' sequences at the respective places. This format would allow for easy automatic extraction of all the information generated by fancy_replace. In fact this is what I expected too see. - one two three four five six seven + one three fxur 123456 five 987 six seven ? ---- ^ +++++++ ++++ A third way would be to insert spaces or some other placeholder in the pre line in the columns with 'insert' events and in the post line in the columns with 'delete' events. Easy for eyeball inspection, but it doesn't ouput the original lines. - one two three four_______ five____ six seven + one three fxur 123456 five 987 six seven ? ------ ^ +++++++ ++++ A final way would be to use a format like wdiff, where the insert and replace tags are placed in the line: one[- two-] three four{+ 123456+} five{+ 987+} six seven If you ask me, either of these formats is better than the one currently used, which is only reliable for short lines with small differences. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Thu Dec 7 23:04:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 7 Dec 2000 15:04:40 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012072304.PAA25597@sf-web2.i.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. Follow-Ups: Date: 2000-Dec-03 19:11 By: tim_one Comment: A caret means that the character in the line two above and in the same column was replaced by the character in the line one above and in the same column. That's why you get a caret in the first example but not the second: the replacement involves two distinct columns. If you did get a caret in the second example, where would it go? If under the single quote from the line two above, it would look the single quote got replaced by the ü in für; if under the double quote from the line one above, like the first e in Kamelrennen got replaced by a double quote. Both readings would be wrong. 
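As a side note for anyone who wants to experiment with these markers: in later Python versions the same differencing machinery is available in the standard library as difflib.ndiff. A minimal sketch, with made-up input lines, that reproduces the behaviour being discussed:

    import sys, difflib

    a = ["one two three\n"]
    b = ["one tao three\n"]

    # "- " and "+ " lines carry the original text; for sufficiently similar
    # lines an extra "? " guide line follows, with "^" under replaced
    # characters, "-" under deletions and "+" under insertions.
    for line in difflib.ndiff(a, b):
        sys.stdout.write(line)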
Edit sequences aren't unique, and in the absence of an obvious and non-ambiguous way to show replacements across columns, ndiff settles for a *correct* sequence ("deren " was inserted, "'" was deleted, '"' was inserted). In this respect ndiff is functioning as designed, so it's not a bug. ------------------------------------------------------- Date: 2000-Dec-07 02:38 By: flight Comment: [Is such a long comment still appropriate for the SF BTS ?] Tim, could you please explain the meaning of the remaining symbols (plus, minus) as well ? I think their meaning is far from being intuitive, then. > A caret means that the character in the line two above and in the same > column was replaced by the character in the line one above and in the same > column. How about this example, then ? Why is there a caret ? freefly;44> cat a 1 2 3 5 freefly;45> cat b 1 3 4 5 freefly;46> ./ndiff.py -q a b - 1 2 3 5 + 1 3 4 5 ? -^+ Sorry, but i have the impression that the format used in the edit lines is indeed ambigous by definition. > That's why you get a caret in the first example but not the > second: the replacement involves two distinct columns. > Edit sequences aren't unique, and in the absence of an obvious and > non-ambiguous way to show replacements across columns, ndiff settles for a > *correct* sequence ("deren " was inserted, "'" was deleted, '"' was > inserted). In this respect ndiff is functioning as designed, so it's not a > bug. Please describe the intended meaning of '+' and '-', and I will give you an counter-example that ndiff.py doesn't output a correct sequence for. I think it's especially annoying that the edit line doesn't reflect the information that the algorithm used in fancy_replace generates (if you run my first example, the algorithm will in fact record an 'replace' event, but the output routine will degenerate this into an 'insert' and a 'delete' event. Resp. uniqueness and ambiguity: It depends on the definition of an edit line. You won't find a definition that keeps the edit line in sync (column-wise) with both the pre and the post lines. If you try to keep the edit line in sync (column-wise) with the pre line, that's fine for '^' (meaning: character in this column has been changed) and '-' (meaning: character in this column has been removed), but you won't be able to record '+' events, since there's no column in the pre line where a '+' event might be recorded. (Similarly, if you tried to keep the edit line in sync with the post line.) - one two three four five six seven + one three fxur 123456 five 987 six seven ? ---- + +^+++++ ++++ One way to work around this would be to output two edit lines: A pre-edit line would be synced (column-wise) with the pre line, and it would record all '-' and '^' events. A post-edit line would record all '+' and '^' events, and would be in sync with the post line. Unambigous and quite intuitive: - one two three four five six seven ? ---- ^ + one three fxur 123456 five 987 six seven ? ^ +++++++ ++++ A second way to define an unambigous edit line format (but not really friendly to eyeball inspection) would be to use the pre-edit line described above, and, in a second step to merge the '+' sequences at the respective places. This format would allow for easy automatic extraction of all the information generated by fancy_replace. In fact this is what I expected too see. - one two three four five six seven + one three fxur 123456 five 987 six seven ? 
---- ^ +++++++ ++++ A third way would be to insert spaces or some other placeholder in the pre line in the columns with 'insert' events and in the post line in the columns with 'delete' events. Easy for eyeball inspection, but it doesn't ouput the original lines. - one two three four_______ five____ six seven + one three fxur 123456 five 987 six seven ? ------ ^ +++++++ ++++ A final way would be to use a format like wdiff, where the insert and replace tags are placed in the line: one[- two-] three four{+ 123456+} five{+ 987+} six seven If you ask me, either of these formats is better than the one currently used, which is only reliable for short lines with small differences. ------------------------------------------------------- Date: 2000-Dec-07 15:04 By: tim_one Comment: I suggest you're over-thinking this: as the docs say, "Lines beginning with "? " attempt to guide the eye to intraline differences, and were not present in either input file." "Guide the eye" is all they're designed to do. I find them very effective for that purpose. > could you please explain the meaning of the remaining > symbols (plus, minus) as well ? I think their meaning > is far from being intuitive, then. They're not documented because they're not important: if they manage to jerk your eyeball to the parts of the lines that changed, I'm happy. In fact, a "-" means the character in the same column two lines above was deleted, and a "+" means the character in the same column one line above was inserted (although it says nothing about *where* it was inserted wrt the line two lines above). This works great for the usual cases: somebody deletes a word or two (and gets a "?" line with a bunch of ----- under the position(s) of the deleted word(s)), or adds a word or two (and gets a "?" line with a bunch of +++++ under the position(s) of the inserted word(s)). > Sorry, but i have the impression that the format used in > the edit lines is indeed ambigous by definition. Sure. It's ambiguous in that it gives no clue as to where insertions took place wrt to the "before" line. What you're missing is that I don't care . > How about this example, then ? Why is there a caret ? > > freefly;44> cat a > 1 2 3 5 > freefly;45> cat b > 1 3 4 5 > freefly;46> ./ndiff.py -q a b > - 1 2 3 5 > + 1 3 4 5 > ? -^+ The caret is an artifact of that ndiff refuses to match on "junk" characters unless they're adjacent to a non-junk match, and that a blank is considered to be a junk character for intraline marking. In other words, ndiff doesn't "see" that the blanks match here. You can step thru the code to see how that happens. The sequence is nevertheless correct, although it indicates a replacement of a blank by a blank (which is legit but unnecessary). I wouldn't object to adding code to suppress the caret in this case. About synching, ndiff isn't trying to keep the edit line in synch with either the "before" or "after" lines. "Guide the eye" is all it's after. Your format with two "?" lines is attractive at first sight. I'm not sure how well people would like it in practice (I have a lot of feedback on how ndiff actually works today, and I don't want to damage it in favor of an untested-in-practice hypothetical). I can easily predict that people would object to otherwise-empty "?" lines in the cases where a word was simply inserted, or simply deleted. They will also object to having two "?" lines when a single character is merely changed. But if cases like those get special-cased to cut it back to one "?" 
line, then people will get confused by that very special-casing. It's straightforward cases like these where ndiff works best as-is, and I don't want to lose that since the straightforward cases are the most common. Your format that isn't friendly to eyeball inspection is a non-starter (ndiff's *purpose* is to be friendly to human eyeballs! it's not a goal of ndiff output to be friendly to machine processing, except to allow trivially easy exact reconstruction of both "before" and "after" files). Ditto for the format that doesn't reproduce the original source lines exactly. > If you ask me, either of these formats is better than > the one currently used, which is only reliable for > short lines with small differences. "Reliability" in your sense was not one of ndiff's design goals. For purposes of guiding the eye to changes in the common cases, the length of the line doesn't matter, and this whole subsystem won't trigger at all unless a line pair has a "similarity score" of at least 0.75 (which ensures that changes are "small" relative to the length of the line). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Fri Dec 8 01:05:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 7 Dec 2000 17:05:34 -0800 Subject: [Python-bugs-list] [Bug #124943] NumPy URL update Message-ID: <200012080105.RAA19585@sf-web3.vaspecialprojects.com> Bug #124943, was updated on 2000-Dec-07 17:05 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: lee_taylor Assigned to : Nobody Summary: NumPy URL update Details: Section 5.6 of the Library manual states: "The Numeric Python extension (NumPy) defines another array type; see The Numerical Python Manual for additional information (available online at ftp://ftp-icf.llnl.gov/pub/python/numericalpython.pdf)." The document is now at http://numpy.sourceforge.net/numdoc/HTML/numdoc.html or as PDF at http://numpy.sourceforge.net/numdoc/numdoc.pdf For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124943&group_id=5470 From noreply@sourceforge.net Fri Dec 8 07:25:46 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 7 Dec 2000 23:25:46 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012080725.XAA02873@sf-web2.i.sourceforge.net> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : Nobody Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor.
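The chunked-decompression workaround mentioned above can be sketched like this, in the same Python 2 style as the report; it is only an illustration rather than the submitter's own test code, and the 1024-byte slice size is an arbitrary choice:

from zlib import compressobj, decompressobj, Z_SYNC_FLUSH
from random import randint

# Build roughly 32K of incompressible data, as in the report's test loop.
a = ''
for i in range(32 * 1024):
    a = a + chr(randint(0, 255))

c = compressobj(9)
d = decompressobj()
blob = c.compress(a) + c.flush(Z_SYNC_FLUSH)

# Workaround: hand the compressed stream to the decompressor in small
# slices rather than in a single large decompress() call.
t = ''
for i in range(0, len(blob), 1024):
    t = t + d.decompress(blob[i:i + 1024])
assert t == a

The report's own script, quoted next, shows the failing single-call case.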
The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Fri Dec 8 07:28:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 7 Dec 2000 23:28:28 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012080728.XAA02925@sf-web2.i.sourceforge.net> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : Nobody Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Fri Dec 8 15:38:27 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 07:38:27 -0800 Subject: [Python-bugs-list] [Bug #125003] Extension manual: Windows extension info needs update Message-ID: <200012081538.HAA10946@sf-web2.i.sourceforge.net> Bug #125003, was updated on 2000-Dec-08 07:38 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 4 Submitted by: fdrake Assigned to : fdrake Summary: Extension manual: Windows extension info needs update Details: The section on building extensions on Windows needs to be updated. 
A single section, shared for Unix & Windows, needs to point out the distutils approach and point to the appropriate manual. Information about linking, DLLs/shared libraries, and use of C++ needs to be reviewed and updated. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125003&group_id=5470 From noreply@sourceforge.net Fri Dec 8 15:44:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 07:44:01 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012081544.HAA18120@sf-web1.i.sourceforge.net> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : Nobody Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Fri Dec 8 15:44:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 07:44:01 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012081544.HAA18123@sf-web1.i.sourceforge.net> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. 
The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Fri Dec 8 15:55:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 07:55:40 -0800 Subject: [Python-bugs-list] [Bug #119556] Python 2.0 httplib doesn't like ICY status lines Message-ID: <200012081555.HAA11224@sf-web2.i.sourceforge.net> Bug #119556, was updated on 2000-Oct-27 10:24 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: kal Assigned to : jhylton Summary: Python 2.0 httplib doesn't like ICY status lines Details: I have a Python script that captures streaming audio on the Internet. The homepage is at http://beam-back.sourceforge.net It works fine with Python 1.5.2 and Python 1.6. One of my users noticed the script is broken under Python 2.0. In the getreply function of httplib.py in Python 1.5.2, even if ver[:5] != 'HTTP/', the connection is left open. I depend on this behavior because streaming audio links found at http://www.shoutcast.com/ return a status line like this: ICY 200 OK Under Python 1.6 and 1.5.2, I can happily go on and use getfile() to obtain the data on the connection. In the getreply function of httplib.py in Python 2.0, the BadStatusLine exception (raised by ver[:5] != 'HTTP/' in begin) causes the connection to be closed. I'm writing to find out how I could go about discussing a possible return to the previous behavior in future releases of Python. If that is not feasible, I would be appreciative of any advice on how I should go about porting the script to Python 2.0. Regards, --Kal Follow-Ups: Date: 2000-Oct-28 18:12 By: loewis Comment: The response of the server clearly violates RFC 2616, section 6.1. Where is the documentation for the protocol that this server implements? If you need to support this protocol (which clearly is not HTTP), you need to implement your own response class (perhaps inheriting from HTTPResponse if possible). 
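A cruder alternative to the suggested HTTPResponse subclass is to skip httplib entirely and accept the non-standard status line over a raw socket; a rough Python 2 sketch, with a made-up host and port standing in for a Shoutcast-style server:

import socket

host, port = 'shoutcast.example.com', 8000   # hypothetical server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send('GET / HTTP/1.0\r\nHost: %s\r\n\r\n' % host)

f = s.makefile('rb')
status = f.readline()      # e.g. 'ICY 200 OK' rather than 'HTTP/1.0 200 OK'
if status[:5] != 'HTTP/' and status[:4] != 'ICY ':
    raise IOError('unexpected status line: %s' % status)

# Skip the response headers, then read from the audio stream.
while 1:
    line = f.readline()
    if line in ('\r\n', '\n', ''):
        break
chunk = f.read(4096)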
------------------------------------------------------- Date: 2000-Oct-30 13:31 By: kal Comment: As far as I know, there is no published doc for the protocol. I'm working from the xmms source code and just trial and error. You're right. I should write a new response class for supporting ICY instead of latching onto httplib. Thanks. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119556&group_id=5470 From noreply@sourceforge.net Fri Dec 8 17:26:52 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 09:26:52 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012081726.JAA22892@sf-web3.vaspecialprojects.com> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. 
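Assuming a later Python where the decompressor's flush() method returns whatever output is still buffered (as current versions of the module document), the round trip in the report reduces to a decompress() followed by a flush(); a small sketch in the same string-based style:

import zlib

c = zlib.compressobj(9)
d = zlib.decompressobj()
a = 'spam and eggs ' * 4096
blob = c.compress(a) + c.flush(zlib.Z_SYNC_FLUSH)

# decompress() may hold output back; flush() drains whatever is still
# buffered inside the decompressor object.
t = d.decompress(blob) + d.flush()
assert t == a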
(The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Fri Dec 8 17:26:52 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 09:26:52 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012081726.JAA22895@sf-web3.vaspecialprojects.com> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. 
(The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Sat Dec 9 07:30:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 8 Dec 2000 23:30:05 -0800 Subject: [Python-bugs-list] [Bug #124051] ndiff bug: "?" lines are out-of-sync Message-ID: <200012090730.XAA08617@usw-sf-web1.sourceforge.net> Bug #124051, was updated on 2000-Dec-01 07:17 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: flight Assigned to : tim_one Summary: ndiff bug: "?" lines are out-of-sync Details: I wonder if this result (the "?" line) of ndiff is intentional: clapton:1> cat a Millionen für so 'n Kamelrennen sind clapton:2> cat b Millionen für so "n Kamelrennen sind clapton:3> /tmp/ndiff.py -q a b - Millionen für so 'n Kamelrennen sind + Millionen für so "n Kamelrennen sind ? ^ clapton:4> cat c Millionen deren für so "n Kamelrennen sind clapton:5> /tmp/ndiff.py -q a c - Millionen für so 'n Kamelrennen sind + Millionen deren für so "n Kamelrennen sind ? ++++++ - + Instead of a - and a subsequent +, I would expect to find here a ^, too. Follow-Ups: Date: 2000-Dec-08 23:30 By: nobody Comment: I implented the two-"?" line idea and liked it a lot after using it. Checked in (ndiff.py CVS rev 1.6). Thanks for suggesting it! "?" lines can still be confusing, but I think not as badly or as often. Note that I'm suppressing "?" lines that would consist of all blanks, so the simple "delete one word" or "add one word" cases don't bloat. Since the format of "?" lines was and remains undocumented, nobody can gripe that they changed <0.7 wink>. ------------------------------------------------------- Date: 2000-Dec-07 15:04 By: tim_one Comment: I suggest you're over-thinking this: as the docs say, "Lines beginning with "? " attempt to guide the eye to intraline differences, and were not present in either input file." "Guide the eye" is all they're designed to do. I find them very effective for that purpose. > could you please explain the meaning of the remaining > symbols (plus, minus) as well ? I think their meaning > is far from being intuitive, then. They're not documented because they're not important: if they manage to jerk your eyeball to the parts of the lines that changed, I'm happy. In fact, a "-" means the character in the same column two lines above was deleted, and a "+" means the character in the same column one line above was inserted (although it says nothing about *where* it was inserted wrt the line two lines above). This works great for the usual cases: somebody deletes a word or two (and gets a "?" line with a bunch of ----- under the position(s) of the deleted word(s)), or adds a word or two (and gets a "?" line with a bunch of +++++ under the position(s) of the inserted word(s)). > Sorry, but i have the impression that the format used in > the edit lines is indeed ambigous by definition. Sure. 
It's ambiguous in that it gives no clue as to where insertions took place wrt to the "before" line. What you're missing is that I don't care . > How about this example, then ? Why is there a caret ? > > freefly;44> cat a > 1 2 3 5 > freefly;45> cat b > 1 3 4 5 > freefly;46> ./ndiff.py -q a b > - 1 2 3 5 > + 1 3 4 5 > ? -^+ The caret is an artifact of that ndiff refuses to match on "junk" characters unless they're adjacent to a non-junk match, and that a blank is considered to be a junk character for intraline marking. In other words, ndiff doesn't "see" that the blanks match here. You can step thru the code to see how that happens. The sequence is nevertheless correct, although it indicates a replacement of a blank by a blank (which is legit but unnecessary). I wouldn't object to adding code to suppress the caret in this case. About synching, ndiff isn't trying to keep the edit line in synch with either the "before" or "after" lines. "Guide the eye" is all it's after. Your format with two "?" lines is attractive at first sight. I'm not sure how well people would like it in practice (I have a lot of feedback on how ndiff actually works today, and I don't want to damage it in favor of an untested-in-practice hypothetical). I can easily predict that people would object to otherwise-empty "?" lines in the cases where a word was simply inserted, or simply deleted. They will also object to having two "?" lines when a single character is merely changed. But if cases like those get special-cased to cut it back to one "?" line, then people will get confused by that very special-casing. It's straightforward cases like these where ndiff works best as-is, and I don't want to lose that since the straightforward cases are the most common. Your format that isn't friendly to eyeball inspection is a non-starter (ndiff's *purpose* is to be friendly to human eyeballs! it's not a goal of ndiff output to be friendly to machine processing, except to allow trivially easy exact reconstruction of both "before" and "after" files). Ditto for the format that doesn't reproduce the original source lines exactly. > If you ask me, either of these formats is better than > the one currently used, which is only reliable for > short lines with small differences. "Reliability" in your sense was not one of ndiff's design goals. For purposes of guiding the eye to changes in the common cases, the length of the line doesn't matter, and this whole subsystem won't trigger at all unless a line pair has a "similarity score" of at least 0.75 (which ensures that changes are "small" relative to the length of the line). ------------------------------------------------------- Date: 2000-Dec-07 02:38 By: flight Comment: [Is such a long comment still appropriate for the SF BTS ?] Tim, could you please explain the meaning of the remaining symbols (plus, minus) as well ? I think their meaning is far from being intuitive, then. > A caret means that the character in the line two above and in the same > column was replaced by the character in the line one above and in the same > column. How about this example, then ? Why is there a caret ? freefly;44> cat a 1 2 3 5 freefly;45> cat b 1 3 4 5 freefly;46> ./ndiff.py -q a b - 1 2 3 5 + 1 3 4 5 ? -^+ Sorry, but i have the impression that the format used in the edit lines is indeed ambigous by definition. > That's why you get a caret in the first example but not the > second: the replacement involves two distinct columns. 
> Edit sequences aren't unique, and in the absence of an obvious and > non-ambiguous way to show replacements across columns, ndiff settles for a > *correct* sequence ("deren " was inserted, "'" was deleted, '"' was > inserted). In this respect ndiff is functioning as designed, so it's not a > bug. Please describe the intended meaning of '+' and '-', and I will give you an counter-example that ndiff.py doesn't output a correct sequence for. I think it's especially annoying that the edit line doesn't reflect the information that the algorithm used in fancy_replace generates (if you run my first example, the algorithm will in fact record an 'replace' event, but the output routine will degenerate this into an 'insert' and a 'delete' event. Resp. uniqueness and ambiguity: It depends on the definition of an edit line. You won't find a definition that keeps the edit line in sync (column-wise) with both the pre and the post lines. If you try to keep the edit line in sync (column-wise) with the pre line, that's fine for '^' (meaning: character in this column has been changed) and '-' (meaning: character in this column has been removed), but you won't be able to record '+' events, since there's no column in the pre line where a '+' event might be recorded. (Similarly, if you tried to keep the edit line in sync with the post line.) - one two three four five six seven + one three fxur 123456 five 987 six seven ? ---- + +^+++++ ++++ One way to work around this would be to output two edit lines: A pre-edit line would be synced (column-wise) with the pre line, and it would record all '-' and '^' events. A post-edit line would record all '+' and '^' events, and would be in sync with the post line. Unambigous and quite intuitive: - one two three four five six seven ? ---- ^ + one three fxur 123456 five 987 six seven ? ^ +++++++ ++++ A second way to define an unambigous edit line format (but not really friendly to eyeball inspection) would be to use the pre-edit line described above, and, in a second step to merge the '+' sequences at the respective places. This format would allow for easy automatic extraction of all the information generated by fancy_replace. In fact this is what I expected too see. - one two three four five six seven + one three fxur 123456 five 987 six seven ? ---- ^ +++++++ ++++ A third way would be to insert spaces or some other placeholder in the pre line in the columns with 'insert' events and in the post line in the columns with 'delete' events. Easy for eyeball inspection, but it doesn't ouput the original lines. - one two three four_______ five____ six seven + one three fxur 123456 five 987 six seven ? ------ ^ +++++++ ++++ A final way would be to use a format like wdiff, where the insert and replace tags are placed in the line: one[- two-] three four{+ 123456+} five{+ 987+} six seven If you ask me, either of these formats is better than the one currently used, which is only reliable for short lines with small differences. ------------------------------------------------------- Date: 2000-Dec-03 19:11 By: tim_one Comment: A caret means that the character in the line two above and in the same column was replaced by the character in the line one above and in the same column. That's why you get a caret in the first example but not the second: the replacement involves two distinct columns. If you did get a caret in the second example, where would it go? 
If under the single quote from the line two above, it would look the single quote got replaced by the ü in für; if under the double quote from the line one above, like the first e in Kamelrennen got replaced by a double quote. Both readings would be wrong. Edit sequences aren't unique, and in the absence of an obvious and non-ambiguous way to show replacements across columns, ndiff settles for a *correct* sequence ("deren " was inserted, "'" was deleted, '"' was inserted). In this respect ndiff is functioning as designed, so it's not a bug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124051&group_id=5470 From noreply@sourceforge.net Sat Dec 9 13:11:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 9 Dec 2000 05:11:30 -0800 Subject: [Python-bugs-list] [Bug #121121] Dynamic loading on Solaris does not work Message-ID: <200012091311.FAA16391@usw-sf-web3.sourceforge.net> Bug #121121, was updated on 2000-Nov-02 08:33 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: Platform-specific Priority: 4 Submitted by: tww-account Assigned to : gvanrossum Summary: Dynamic loading on Solaris does not work Details: Dynamic loading of shared libraries (Python/dynload_shlib) does not work under Solaris. This is due to a bug in the autoconf script. The patch at ftp://ftp.thewrittenword.com/outgoing/pub/python-2.0-solaris-dynload.patch fixes it. The problem is that AC_CHECK_LIB(dl, dlopen) will never define HAVE_DLOPEN (AC_CHECK_FUNCS(dlopen) does that) which in turn will never define $ac_cv_func_dlopen. Anyway, using internal autoconf macros is icky. Redo the autoconf test because it will cache the results. -- albert chin (china@thewrittenword.com) Follow-Ups: Date: 2000-Dec-09 05:11 By: tww-account Comment: Tried 2.0.1 from CVS. Everything works now. You can close this bug. Thanks! ------------------------------------------------------- Date: 2000-Nov-13 12:54 By: gvanrossum Comment: Albert, would you be so kind to try again with the CVS version? We didn't follow your suggestions (I can't find your patch on SF -- what's the patch id?) but we did change a few things. According to Greg Ward it now should work on Solaris. I can't test that beucase I have no acccess to a Solaris machine. ------------------------------------------------------- Date: 2000-Nov-02 10:14 By: tww-account Comment: Ok, patch uploaded to the SourceForge patch manager. ------------------------------------------------------- Date: 2000-Nov-02 10:08 By: gvanrossum Comment: Thanks for the patch; but would you be so kind to submit the patch to the SourceForge patch manager? See http://sourceforge.net/patch/?group_id=5470 ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121121&group_id=5470 From noreply@sourceforge.net Sun Dec 10 00:43:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 9 Dec 2000 16:43:01 -0800 Subject: [Python-bugs-list] [Bug #119645] distutils.sysconfig.LINKFORSHARED is undefined Message-ID: <200012100043.QAA30648@usw-sf-web2.sourceforge.net> Bug #119645, was updated on 2000-Oct-28 17:24 Here is a current snapshot of the bug. 
Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: beroul Assigned to : fdrake Summary: distutils.sysconfig.LINKFORSHARED is undefined Details: The current documentation (in http://www.python.org/doc/current/ext/link-reqs.html) says to check the value of distutils.sysconfig.LINKFORSHARED in order to get compiler options for embedding Python. However, this value is undefined in Python 2.0, installed on a RedHat Linux system from the RPMs on www.python.org: >>> import distutils.sysconfig >>> distutils.sysconfig.LINKFORSHARED Traceback (most recent call last): File "", line 1, in ? AttributeError: LINKFORSHARED Follow-Ups: Date: 2000-Dec-09 16:43 By: nobody Comment: I just tried this "fixed" version on python 2.0 and it still does not work. >>> import distutils >>> distutils.get_config_var("LINKFORSHARED") Traceback (most recent call last): File "", line 1, in ? AttributeError: get_config_var This is a very annoying problem for us people trying to ship applications that embed python. PLEASE see that it gets fixed properly (and in the documentation on the website, which still has not been updated!) ------------------------------------------------------- Date: 2000-Nov-02 13:52 By: fdrake Comment: Fixed in Doc/ext/ext.tex revision 1.88. ------------------------------------------------------- Date: 2000-Oct-30 18:25 By: gward Comment: This is a documentation bug, but I plead partly guilty (since I changed the interface of the distutils.sysconfig module without thinking to see if it had been documented anywhere, implicitly or not). Oops. This patch fixes the doc bug: --- ext.tex 2000/10/26 17:19:58 1.87 +++ ext.tex 2000/10/31 02:22:58 @@ -2138,7 +2138,7 @@ \begin{verbatim} >>> import distutils.sysconfig ->>> distutils.sysconfig.LINKFORSHARED +>>> distutils.sysconfig.get_config_var("LINKFORSHARED") '-Xlinker -export-dynamic' \end{verbatim} \refstmodindex{distutils.sysconfig} Fred, I can check this in if you want... ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119645&group_id=5470 From noreply@sourceforge.net Sun Dec 10 00:44:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 9 Dec 2000 16:44:40 -0800 Subject: [Python-bugs-list] [Bug #119645] distutils.sysconfig.LINKFORSHARED is undefined Message-ID: <200012100044.QAA30685@usw-sf-web2.sourceforge.net> Bug #119645, was updated on 2000-Oct-28 17:24 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: beroul Assigned to : fdrake Summary: distutils.sysconfig.LINKFORSHARED is undefined Details: The current documentation (in http://www.python.org/doc/current/ext/link-reqs.html) says to check the value of distutils.sysconfig.LINKFORSHARED in order to get compiler options for embedding Python. However, this value is undefined in Python 2.0, installed on a RedHat Linux system from the RPMs on www.python.org: >>> import distutils.sysconfig >>> distutils.sysconfig.LINKFORSHARED Traceback (most recent call last): File "", line 1, in ? AttributeError: LINKFORSHARED Follow-Ups: Date: 2000-Dec-09 16:44 By: nobody Comment: Sorry, that should read: >>> distutils.sysconfig.get_config_var("LINKFORSHARED") Traceback (most recent call last): File "", line 1, in ? 
AttributeError: sysconfig (I was playing around and copy/pasted the wrong thing) ------------------------------------------------------- Date: 2000-Dec-09 16:43 By: nobody Comment: I just tried this "fixed" version on python 2.0 and it still does not work. >>> import distutils >>> distutils.get_config_var("LINKFORSHARED") Traceback (most recent call last): File "", line 1, in ? AttributeError: get_config_var This is a very annoying problem for us people trying to ship applications that embed python. PLEASE see that it gets fixed properly (and in the documentation on the website, which still has not been updated!) ------------------------------------------------------- Date: 2000-Nov-02 13:52 By: fdrake Comment: Fixed in Doc/ext/ext.tex revision 1.88. ------------------------------------------------------- Date: 2000-Oct-30 18:25 By: gward Comment: This is a documentation bug, but I plead partly guilty (since I changed the interface of the distutils.sysconfig module without thinking to see if it had been documented anywhere, implicitly or not). Oops. This patch fixes the doc bug: --- ext.tex 2000/10/26 17:19:58 1.87 +++ ext.tex 2000/10/31 02:22:58 @@ -2138,7 +2138,7 @@ \begin{verbatim} >>> import distutils.sysconfig ->>> distutils.sysconfig.LINKFORSHARED +>>> distutils.sysconfig.get_config_var("LINKFORSHARED") '-Xlinker -export-dynamic' \end{verbatim} \refstmodindex{distutils.sysconfig} Fred, I can check this in if you want... ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119645&group_id=5470 From noreply@sourceforge.net Sun Dec 10 06:12:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 9 Dec 2000 22:12:29 -0800 Subject: [Python-bugs-list] [Bug #125217] urllib2.py and proxies (Python 2.0) Message-ID: <200012100612.WAA04896@usw-sf-web3.sourceforge.net> Bug #125217, was updated on 2000-Dec-09 22:12 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: tww-china Assigned to : nobody Summary: urllib2.py and proxies (Python 2.0) Details: Consider the following program: import os, sys, urllib2 from urllib2 import urlopen os.environ['http_proxy'] = 'http://[HOST]:5865' authinfo = urllib2.HTTPBasicAuthHandler () authinfo.add_password ('[REALM]', 'http://[URL]', '[login]', '[password]') opener = urllib2.build_opener (authinfo) urllib2.install_opener (opener) url = urlopen ('http://[URL]/') print url.info () url.close () Urllib2.py does not work if we wish to do BASIC authentication to a URL through a proxy. Chances are it also will not work if the proxy requires BASIC authentication too and the URL requires BASIC authentication. Here's the error I receive (Solaris 7/SPARC but platform does not matter): File "/tmp/a.py", line 15, in ? 
url = urlopen ('http://updates.thewrittenword.com/') File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 137, in urlopen return _opener.open(url, data) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 325, in open '_open', req) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 304, in _call_chain result = func(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 764, in http_open return self.parent.error('http', req, fp, code, msg, hdrs) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 351, in error return self._call_chain(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 304, in _call_chain result = func(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 430, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 401: Authorization Required For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125217&group_id=5470 From noreply@sourceforge.net Mon Dec 11 07:17:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Dec 2000 23:17:18 -0800 Subject: [Python-bugs-list] [Bug #125297] select([], [], [], 1) on Windows raises exception Message-ID: <200012110717.XAA05953@usw-sf-web3.sourceforge.net> Bug #125297, was updated on 2000-Dec-10 23:17 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: jrosdahl Assigned to : nobody Summary: select([], [], [], 1) on Windows raises exception Details: The documentation for the select module says that "empty lists are allowed" for the select() call, but Python 2.0 on Windows 95 has troubles with that: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import select >>> select.select([], [], [], 1) Traceback (innermost last): File "", line 1, in ? select.select([], [], [], 1) error: (0, 'Error') Real bug or documentation bug? Regards, Joel For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125297&group_id=5470 From noreply@sourceforge.net Mon Dec 11 14:34:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 06:34:43 -0800 Subject: [Python-bugs-list] [Bug #125297] select([], [], [], 1) on Windows raises exception Message-ID: <200012111434.GAA19163@usw-sf-web2.sourceforge.net> Bug #125297, was updated on 2000-Dec-10 23:17 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: jrosdahl Assigned to : fdrake Summary: select([], [], [], 1) on Windows raises exception Details: The documentation for the select module says that "empty lists are allowed" for the select() call, but Python 2.0 on Windows 95 has troubles with that: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import select >>> select.select([], [], [], 1) Traceback (innermost last): File "", line 1, in ? select.select([], [], [], 1) error: (0, 'Error') Real bug or documentation bug? Regards, Joel Follow-Ups: Date: 2000-Dec-11 06:34 By: gvanrossum Comment: Confirmed -- empty lists are *not* allowed on Windows. 
I suggest changing the documentation saying: "Whether three empty lists are allowed or not is platform dependent; it is known to work on Unix but not on Windows." ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125297&group_id=5470 From noreply@sourceforge.net Mon Dec 11 15:47:33 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 07:47:33 -0800 Subject: [Python-bugs-list] [Bug #121060] memory leak in python 2.0 Message-ID: <200012111547.HAA17385@usw-sf-web3.sourceforge.net> Bug #121060, was updated on 2000-Nov-01 17:01 Here is a current snapshot of the bug. Project: Python Category: Core Status: Closed Resolution: Works For Me Bug Group: None Priority: 5 Submitted by: xyld Assigned to : bwarsaw Summary: memory leak in python 2.0 Details: I seem to have stumbled on a memory leak that only seems to occur in Python 2.0, it doesn't happen in Python 1.5.2, and I've been told (but haven't verified) that it doesn't happen in 1.6 either. It seems to happen when what I'd call a 'second order' import occurs, a really simple test case that leaks memory pretty fast - ------ in a directory called Shared, I have a file called test.py, with contents - --- import time --- and an __init__.py so that I can import it. in the directory a level up I have a file called atest.py with contents - --- from Shared import test pass --- and finally I have a file called tester.py with contents - --- while 1: execfile('atest.py') --- ------ running tester.py with python 2.0 leaks memory, running it with python 1.5.2 remains at a constant usage. Follow-Ups: Date: 2000-Dec-11 07:47 By: akuchling Comment: This memory leak seems to be fixed in the CVS tree as of Dec. 12, so I'm closing this bug report. (Barry, did you fix it and forget to close it?) ------------------------------------------------------- Date: 2000-Nov-02 07:42 By: gvanrossum Comment: For Barry. I can indeed reproduce this! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121060&group_id=5470 From noreply@sourceforge.net Mon Dec 11 15:51:48 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 07:51:48 -0800 Subject: [Python-bugs-list] [Bug #125297] select([], [], [], 1) on Windows raises exception Message-ID: <200012111551.HAA20979@usw-sf-web2.sourceforge.net> Bug #125297, was updated on 2000-Dec-10 23:17 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: Platform-specific Priority: 5 Submitted by: jrosdahl Assigned to : fdrake Summary: select([], [], [], 1) on Windows raises exception Details: The documentation for the select module says that "empty lists are allowed" for the select() call, but Python 2.0 on Windows 95 has troubles with that: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import select >>> select.select([], [], [], 1) Traceback (innermost last): File "", line 1, in ? select.select([], [], [], 1) error: (0, 'Error') Real bug or documentation bug? Regards, Joel Follow-Ups: Date: 2000-Dec-11 07:51 By: fdrake Comment: Added clarification to the documentation in Doc/lib/libselect.tex revision 1.17. 
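The platform difference above is easy to paper over in application code. Below is a minimal sketch of a portable wrapper (the helper name is my own, not part of the select module): when all three descriptor lists are empty it simply sleeps, mirroring what a timed-out select() returns on Unix, and otherwise it defers to select.select().

import select
import time

def wait_for_io(rlist, wlist, xlist, timeout):
    # On Windows, select() is a socket call rather than a general-purpose
    # timer, so three empty FD lists raise an error.  Fall back to a plain
    # sleep and return the same shape of result a timed-out select() gives.
    if not (rlist or wlist or xlist):
        time.sleep(timeout)
        return [], [], []
    return select.select(rlist, wlist, xlist, timeout)

With this wrapper, wait_for_io([], [], [], 1.0) pauses for a second on every platform instead of raising an error on Windows.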
------------------------------------------------------- Date: 2000-Dec-11 06:34 By: gvanrossum Comment: Confirmed -- empty lists are *not* allowed on Windows. I suggest changing the documentation saying: "Whether three empty lists are allowed or not is platform dependent; it is known to work on Unix but not on Windows." ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125297&group_id=5470 From noreply@sourceforge.net Mon Dec 11 17:28:36 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 09:28:36 -0800 Subject: [Python-bugs-list] [Bug #121060] memory leak in python 2.0 Message-ID: <200012111728.JAA19972@usw-sf-web3.sourceforge.net> Bug #121060, was updated on 2000-Nov-01 17:01 Here is a current snapshot of the bug. Project: Python Category: Core Status: Closed Resolution: Works For Me Bug Group: None Priority: 5 Submitted by: xyld Assigned to : bwarsaw Summary: memory leak in python 2.0 Details: I seem to have stumbled on a memory leak that only seems to occur in Python 2.0, it doesn't happen in Python 1.5.2, and I've been told (but haven't verified) that it doesn't happen in 1.6 either. It seems to happen when what I'd call a 'second order' import occurs, a really simple test case that leaks memory pretty fast - ------ in a directory called Shared, I have a file called test.py, with contents - --- import time --- and an __init__.py so that I can import it. in the directory a level up I have a file called atest.py with contents - --- from Shared import test pass --- and finally I have a file called tester.py with contents - --- while 1: execfile('atest.py') --- ------ running tester.py with python 2.0 leaks memory, running it with python 1.5.2 remains at a constant usage. Follow-Ups: Date: 2000-Dec-11 09:28 By: bwarsaw Comment: Good, thanks for closing this. ------------------------------------------------------- Date: 2000-Dec-11 07:47 By: akuchling Comment: This memory leak seems to be fixed in the CVS tree as of Dec. 12, so I'm closing this bug report. (Barry, did you fix it and forget to close it?) ------------------------------------------------------- Date: 2000-Nov-02 07:42 By: gvanrossum Comment: For Barry. I can indeed reproduce this! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121060&group_id=5470 From noreply@sourceforge.net Mon Dec 11 18:40:38 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 10:40:38 -0800 Subject: [Python-bugs-list] [Bug #119862] Memory leak in python 2.0 and below Message-ID: <200012111840.KAA25154@usw-sf-web2.sourceforge.net> Bug #119862, was updated on 2000-Oct-31 04:22 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : bwarsaw Summary: Memory leak in python 2.0 and below Details: The reference count of the item returned by the PyMapping_GetItemString function in call inside the function vgetargskeywords is not decremented. This can result in a memory leak. Follow-Ups: Date: 2000-Dec-11 10:40 By: bwarsaw Comment: Here's a program that should trigger the leak in Python 2.0. I'm investigating further. 
----- snip snip ----- import sha sha.new(string='hello') ----- snip snip ----- ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119862&group_id=5470 From noreply@sourceforge.net Mon Dec 11 19:37:46 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 11:37:46 -0800 Subject: [Python-bugs-list] [Bug #119862] Memory leak in python 2.0 and below Message-ID: <200012111937.LAA23272@usw-sf-web3.sourceforge.net> Bug #119862, was updated on 2000-Oct-31 04:22 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Memory leak in python 2.0 and below Details: The reference count of the item returned by the PyMapping_GetItemString function in call inside the function vgetargskeywords is not decremented. This can result in a memory leak. Follow-Ups: Date: 2000-Dec-11 11:37 By: bwarsaw Comment: Here's a proposed patch. Index: getargs.c =================================================================== RCS file: /cvsroot/python/python/dist/src/Python/getargs.c,v retrieving revision 2.50 diff -u -r2.50 getargs.c --- getargs.c 2000/12/01 12:59:05 2.50 +++ getargs.c 2000/12/11 19:36:34 @@ -1123,6 +1123,7 @@ return 0; } converted++; + Py_DECREF(item); } else { PyErr_Clear(); ------------------------------------------------------- Date: 2000-Dec-11 10:40 By: bwarsaw Comment: Here's a program that should trigger the leak in Python 2.0. I'm investigating further. ----- snip snip ----- import sha sha.new(string='hello') ----- snip snip ----- ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119862&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:01:35 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:01:35 -0800 Subject: [Python-bugs-list] [Bug #119862] Memory leak in python 2.0 and below Message-ID: <200012112001.MAA18682@usw-sf-web1.sourceforge.net> Bug #119862, was updated on 2000-Oct-31 04:22 Here is a current snapshot of the bug. Project: Python Category: Core Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : bwarsaw Summary: Memory leak in python 2.0 and below Details: The reference count of the item returned by the PyMapping_GetItemString function in call inside the function vgetargskeywords is not decremented. This can result in a memory leak. Follow-Ups: Date: 2000-Dec-11 12:01 By: bwarsaw Comment: Patch approved by Guido and checked in, getargs.c 2.51. ------------------------------------------------------- Date: 2000-Dec-11 11:37 By: bwarsaw Comment: Here's a proposed patch. Index: getargs.c =================================================================== RCS file: /cvsroot/python/python/dist/src/Python/getargs.c,v retrieving revision 2.50 diff -u -r2.50 getargs.c --- getargs.c 2000/12/01 12:59:05 2.50 +++ getargs.c 2000/12/11 19:36:34 @@ -1123,6 +1123,7 @@ return 0; } converted++; + Py_DECREF(item); } else { PyErr_Clear(); ------------------------------------------------------- Date: 2000-Dec-11 10:40 By: bwarsaw Comment: Here's a program that should trigger the leak in Python 2.0. I'm investigating further. 
----- snip snip ----- import sha sha.new(string='hello') ----- snip snip ----- ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119862&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:04:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:04:37 -0800 Subject: [Python-bugs-list] [Bug #125375] parser.tuple2ast() failure on valid parse tree Message-ID: <200012112004.MAA23933@usw-sf-web3.sourceforge.net> Bug #125375, was updated on 2000-Dec-11 12:04 Here is a current snapshot of the bug. Project: Python Category: Parser/Compiler Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: jepler Assigned to : nobody Summary: parser.tuple2ast() failure on valid parse tree Details: parser.tuple2ast() fails on the parse tree produced for function definitions which include "*args, **kw" or "*args, * *kw" Versions 1.5.2, 2.0 Python 1.5.2 (#1, Sep 17 1999, 20:15:36) [GCC egcs-2.91.66 19990314/Linux (egcs - on linux-i386 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import parser >>> parser.tuple2ast(parser.expr("lambda x, *y, **z: 0").totuple()) Traceback (innermost last): File "", line 1, in ? parser.ParserError: Expected node type 16, got 36. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125375&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:32:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:32:45 -0800 Subject: [Python-bugs-list] [Bug #123924] Windows - using OpenSSL, problem with socket in httplib.py Message-ID: <200012112032.MAA27928@usw-sf-web2.sourceforge.net> Bug #123924, was updated on 2000-Nov-30 06:11 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Fixed Bug Group: Platform-specific Priority: 5 Submitted by: ottobehrens Assigned to : gvanrossum Summary: Windows - using OpenSSL, problem with socket in httplib.py Details: We found that when compiling python with USE_SSL on Windows, an exception occurred on the line: ssl = socket.ssl(sock, self.key_file, self.cert_file) The socket.ssl function expected arg 1 to be a socket object and not an instance of a class. We changed it to the following, which resolved the problem. However, this is not a generic solution and breaks again under Linux. on class HTTPSConnection: def connect(self): "Connect to a host on a given (SSL) port." sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ssl = socket.ssl(sock._sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) Follow-Ups: Date: 2000-Dec-11 12:32 By: gvanrossum Comment: Checked in as revision 1.24. Now let's hope that this works -- the submitter never wrote back. ------------------------------------------------------- Date: 2000-Nov-30 06:15 By: gvanrossum Comment: Try this patch instead: Index: httplib.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/httplib.py,v retrieving revision 1.24 diff -c -r1.24 httplib.py *** httplib.py 2000/10/12 19:58:36 1.24 --- httplib.py 2000/11/30 14:14:43 *************** *** 613,619 **** sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! 
ssl = socket.ssl(sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) --- 613,622 ---- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! realsock = sock ! if hasattr(sock, "_sock"): ! realsock = sock._sock ! ssl = socket.ssl(realsock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123924&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:34:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:34:24 -0800 Subject: [Python-bugs-list] [Bug #125297] select([], [], [], 1) on Windows raises exception Message-ID: <200012112034.MAA24696@usw-sf-web3.sourceforge.net> Bug #125297, was updated on 2000-Dec-10 23:17 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: Platform-specific Priority: 5 Submitted by: jrosdahl Assigned to : fdrake Summary: select([], [], [], 1) on Windows raises exception Details: The documentation for the select module says that "empty lists are allowed" for the select() call, but Python 2.0 on Windows 95 has troubles with that: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. >>> import select >>> select.select([], [], [], 1) Traceback (innermost last): File "", line 1, in ? select.select([], [], [], 1) error: (0, 'Error') Real bug or documentation bug? Regards, Joel Follow-Ups: Date: 2000-Dec-11 12:34 By: tim_one Comment: Note that this is limitation #2 listed in http://support.microsoft.com/support/kb/articles/Q147/7/14.asp """ 2. Calling select() with three empty FD_SETs and a valid TIMEOUT structure as a delay function. Reason: The select() function is intended as a network function, not a general purpose timer. Workaround: Use a legitimate system timer service. """ ------------------------------------------------------- Date: 2000-Dec-11 07:51 By: fdrake Comment: Added clarification to the documentation in Doc/lib/libselect.tex revision 1.17. ------------------------------------------------------- Date: 2000-Dec-11 06:34 By: gvanrossum Comment: Confirmed -- empty lists are *not* allowed on Windows. I suggest changing the documentation saying: "Whether three empty lists are allowed or not is platform dependent; it is known to work on Unix but not on Windows." ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125297&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:35:03 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:35:03 -0800 Subject: [Python-bugs-list] [Bug #125375] parser.tuple2ast() failure on valid parse tree Message-ID: <200012112035.MAA24719@usw-sf-web3.sourceforge.net> Bug #125375, was updated on 2000-Dec-11 12:04 Here is a current snapshot of the bug. 
Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: jepler Assigned to : fdrake Summary: parser.tuple2ast() failure on valid parse tree Details: parser.tuple2ast() fails on the parse tree produced for function definitions which include "*args, **kw" or "*args, * *kw" Versions 1.5.2, 2.0 Python 1.5.2 (#1, Sep 17 1999, 20:15:36) [GCC egcs-2.91.66 19990314/Linux (egcs - on linux-i386 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import parser >>> parser.tuple2ast(parser.expr("lambda x, *y, **z: 0").totuple()) Traceback (innermost last): File "", line 1, in ? parser.ParserError: Expected node type 16, got 36. Follow-Ups: Date: 2000-Dec-11 12:35 By: fdrake Comment: This appears to be a bug in the validation of argument lists. This is not, however, a bug in the code parser/compiler code, so I'm re-labelling it as a "Modules" bug. Assigned to me since I wrote the code. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125375&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:46:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:46:21 -0800 Subject: [Python-bugs-list] [Bug #124943] NumPy URL update Message-ID: <200012112046.MAA20136@usw-sf-web1.sourceforge.net> Bug #124943, was updated on 2000-Dec-07 17:05 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: lee_taylor Assigned to : fdrake Summary: NumPy URL update Details: Section 5.6 of the Library manual states: "The Numeric Python extension (NumPy) defines another array type; see The Numerical Python Manual for additional information (available online at ftp://ftp-icf.llnl.gov/pub/python/numericalpython.pdf)." The document is now at http://numpy.sourceforge.net/numdoc/HTML/numdoc.html or as PDF at http://numpy.sourceforge.net/numdoc/numdoc.pdf For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124943&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:47:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:47:26 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: <200012112047.MAA20162@usw-sf-web1.sourceforge.net> Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. 
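The detection asked for in bug #124782 belongs in the configure script itself, but the intended rule is small enough to sketch. The helper below is only a Python illustration of that rule (the function name and the candidate list are my guesses): take the first C++ compiler found on PATH, and otherwise fall back to requiring an explicit --with-gxx.

import os

def find_default_cxx(candidates=("g++", "c++", "CC")):
    # Scan PATH for a plausible C++ compiler; return its full path, or None
    # so the caller can fall back to an explicit --with-gxx setting.
    for name in candidates:
        for directory in os.environ.get("PATH", "").split(os.pathsep):
            path = os.path.join(directory, name)
            if os.path.isfile(path) and os.access(path, os.X_OK):
                return path
    return None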
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:48:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:48:43 -0800 Subject: [Python-bugs-list] [Bug #125217] urllib2.py and proxies (Python 2.0) Message-ID: <200012112048.MAA25170@usw-sf-web3.sourceforge.net> Bug #125217, was updated on 2000-Dec-09 22:12 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: tww-china Assigned to : jhylton Summary: urllib2.py and proxies (Python 2.0) Details: Consider the following program: import os, sys, urllib2 from urllib2 import urlopen os.environ['http_proxy'] = 'http://[HOST]:5865' authinfo = urllib2.HTTPBasicAuthHandler () authinfo.add_password ('[REALM]', 'http://[URL]', '[login]', '[password]') opener = urllib2.build_opener (authinfo) urllib2.install_opener (opener) url = urlopen ('http://[URL]/') print url.info () url.close () Urllib2.py does not work if we wish to do BASIC authentication to a URL through a proxy. Chances are it also will not work if the proxy requires BASIC authentication too and the URL requires BASIC authentication. Here's the error I receive (Solaris 7/SPARC but platform does not matter): File "/tmp/a.py", line 15, in ? url = urlopen ('http://updates.thewrittenword.com/') File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 137, in urlopen return _opener.open(url, data) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 325, in open '_open', req) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 304, in _call_chain result = func(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 764, in http_open return self.parent.error('http', req, fp, code, msg, hdrs) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 351, in error return self._call_chain(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 304, in _call_chain result = func(*args) File "/opt/TWWfsw/pkgutils12/lib/python20/lib/python2.0/urllib2.py", line 430, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 401: Authorization Required For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125217&group_id=5470 From noreply@sourceforge.net Mon Dec 11 20:57:41 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 12:57:41 -0800 Subject: [Python-bugs-list] [Bug #124943] NumPy URL update Message-ID: <200012112057.MAA20418@usw-sf-web1.sourceforge.net> Bug #124943, was updated on 2000-Dec-07 17:05 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: lee_taylor Assigned to : fdrake Summary: NumPy URL update Details: Section 5.6 of the Library manual states: "The Numeric Python extension (NumPy) defines another array type; see The Numerical Python Manual for additional information (available online at ftp://ftp-icf.llnl.gov/pub/python/numericalpython.pdf)." 
The document is now at http://numpy.sourceforge.net/numdoc/HTML/numdoc.html or as PDF at http://numpy.sourceforge.net/numdoc/numdoc.pdf Follow-Ups: Date: 2000-Dec-11 12:57 By: fdrake Comment: Updated links in Doc/lib/libarray.tex revision 1.28. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124943&group_id=5470 From noreply@sourceforge.net Mon Dec 11 21:43:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 13:43:26 -0800 Subject: [Python-bugs-list] [Bug #125391] Associativity of exponentiation documented incorrectly Message-ID: <200012112143.NAA21568@usw-sf-web1.sourceforge.net> Bug #125391, was updated on 2000-Dec-11 13:43 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 6 Submitted by: tim_one Assigned to : fdrake Summary: Associativity of exponentiation documented incorrectly Details: http://www.python.org/doc/current/ref/summary.html#l2h-332 says that exponentation (**) groups to the left. This is incorrect: >>> 2**2**3 256 >>> That would have printed 64 (4**3) if ** were left-associative. ** is right-associative in Python (as well as in all other languages ). For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125391&group_id=5470 From noreply@sourceforge.net Mon Dec 11 22:13:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 14:13:17 -0800 Subject: [Python-bugs-list] [Bug #125375] parser.tuple2ast() failure on valid parse tree Message-ID: <200012112213.OAA30493@usw-sf-web2.sourceforge.net> Bug #125375, was updated on 2000-Dec-11 12:04 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: jepler Assigned to : fdrake Summary: parser.tuple2ast() failure on valid parse tree Details: parser.tuple2ast() fails on the parse tree produced for function definitions which include "*args, **kw" or "*args, * *kw" Versions 1.5.2, 2.0 Python 1.5.2 (#1, Sep 17 1999, 20:15:36) [GCC egcs-2.91.66 19990314/Linux (egcs - on linux-i386 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import parser >>> parser.tuple2ast(parser.expr("lambda x, *y, **z: 0").totuple()) Traceback (innermost last): File "", line 1, in ? parser.ParserError: Expected node type 16, got 36. Follow-Ups: Date: 2000-Dec-11 14:13 By: fdrake Comment: Fixed in Modules/parsermodule.c revision 2.59. Added appropriate test cases to the regression test to avoid re-introducing problems in validate_varargslist(). ------------------------------------------------------- Date: 2000-Dec-11 12:35 By: fdrake Comment: This appears to be a bug in the validation of argument lists. This is not, however, a bug in the code parser/compiler code, so I'm re-labelling it as a "Modules" bug. Assigned to me since I wrote the code. 
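Bug #125375 above is easiest to see as a round trip through the parser module: parse the source, flatten the tree with totuple(), and hand it back to tuple2ast(). The sketch below is written against the 2.x parser module and is my own check in the spirit of the regression tests mentioned, not the actual test code that was added.

import parser

# Argument-list shapes that tuple2ast() used to reject with a ParserError.
for source in ("lambda x, *y, **z: 0",
               "lambda x, *y: 0",
               "lambda x, **z: 0"):
    tree = parser.expr(source).totuple()
    ast = parser.tuple2ast(tree)             # failed validation before the fix
    assert callable(eval(parser.compileast(ast)))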
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125375&group_id=5470 From noreply@sourceforge.net Mon Dec 11 22:39:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 14:39:44 -0800 Subject: [Python-bugs-list] [Bug #125391] Associativity of exponentiation documented incorrectly Message-ID: <200012112239.OAA27980@usw-sf-web3.sourceforge.net> Bug #125391, was updated on 2000-Dec-11 13:43 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 6 Submitted by: tim_one Assigned to : fdrake Summary: Associativity of exponentiation documented incorrectly Details: http://www.python.org/doc/current/ref/summary.html#l2h-332 says that exponentation (**) groups to the left. This is incorrect: >>> 2**2**3 256 >>> That would have printed 64 (4**3) if ** were left-associative. ** is right-associative in Python (as well as in all other languages ). Follow-Ups: Date: 2000-Dec-11 14:39 By: fdrake Comment: Fixed in Doc/ref/ref5.tex revision 1.40. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125391&group_id=5470 From noreply@sourceforge.net Mon Dec 11 22:55:46 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 14:55:46 -0800 Subject: [Python-bugs-list] [Bug #122684] Memory leak creating sub-interpreters Message-ID: <200012112255.OAA31528@usw-sf-web2.sourceforge.net> Bug #122684, was updated on 2000-Nov-17 04:42 Here is a current snapshot of the bug. Project: Python Category: Core Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: jslaughter Assigned to : bwarsaw Summary: Memory leak creating sub-interpreters Details: Instantiating a sub-interpreter in Python 2.0 (compiled with Visual C++ 6 SP4) allocates memory that is never released. The problem can be reproduced using the following sample code: /* For unmodified Python 2.0 (#8, Oct 16 2000, 17:27:58) */ #include "Python.H" int main() { Py_Initialize(); PyEval_InitThreads(); PyThreadState* mainThread = PyEval_SaveThread(); for (;;) { PyEval_AcquireLock(); Py_EndInterpreter(Py_NewInterpreter()); PyEval_ReleaseLock(); } PyEval_AcquireThread(mainThread); Py_Finalize(); return 0; } Follow-Ups: Date: 2000-Dec-11 14:55 By: bwarsaw Comment: Fixed with the com_import_stmt() patch for leaks associated with "from foo import blah". ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=122684&group_id=5470 From noreply@sourceforge.net Mon Dec 11 23:14:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 15:14:57 -0800 Subject: [Python-bugs-list] [Bug #117178] Documentation missing for __iadd__, __isub__, etc. Message-ID: <200012112314.PAA28847@usw-sf-web3.sourceforge.net> Bug #117178, was updated on 2000-Oct-18 06:48 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : fdrake Summary: Documentation missing for __iadd__, __isub__, etc. Details: I understand that __iadd__, __isub__, etc. are the functions you have to implement in a class for it to support augmented assignment, but there is no documentation for them under the 3.3 Special Method Names section. 
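For readers hitting the same gap before the new section lands, the hooks the report is about look like this in practice; the class below is a toy example of my own, not text from the docs patch.

class Accumulator:
    # Minimal illustration of the augmented-assignment hooks.
    def __init__(self, value=0):
        self.value = value
    def __iadd__(self, other):
        # Invoked for "acc += other"; returning self keeps the operation
        # in place instead of rebinding the name to a brand-new object.
        self.value = self.value + other
        return self
    def __isub__(self, other):
        self.value = self.value - other
        return self

acc = Accumulator()
acc += 10
acc -= 3
assert acc.value == 7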
I searched under the Doc directory for __isub__ and found nothing at all. Follow-Ups: Date: 2000-Dec-11 15:14 By: twouters Comment: Apologies again for the delay. New docs have been approved and checked in, revision 1.54 of Doc/ref/ref3.tex. ------------------------------------------------------- Date: 2000-Nov-17 11:57 By: fdrake Comment: Sent mail to Thomas asking if he expects to make progress on the patch anytime soon. ------------------------------------------------------- Date: 2000-Nov-08 22:34 By: fdrake Comment: The right patch number is #102169. I've sent the patch back to Thomas for revision, but there's been no activity on it. I'm adding this note so I don't lose track of this. ------------------------------------------------------- Date: 2000-Oct-30 05:03 By: twouters Comment: Sorry, my fault. I wasn't thorough enough in my attempts to write documentation and finding the right place to add it in the current layout. I've submitted a patch, #110216, to fix this. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117178&group_id=5470 From noreply@sourceforge.net Tue Dec 12 01:19:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 17:19:05 -0800 Subject: [Python-bugs-list] [Bug #110843] Low FD_SETSIZE limit on Win32 (PR#41) Message-ID: <200012120119.RAA26724@usw-sf-web1.sourceforge.net> Bug #110843, was updated on 2000-Aug-01 14:15 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: Fixed Bug Group: Feature Request Priority: 3 Submitted by: nobody Assigned to : tim_one Summary: Low FD_SETSIZE limit on Win32 (PR#41) Details: Jitterbug-Id: 41 Submitted-By: brian@digicool.com Date: Fri, 30 Jul 1999 10:10:49 -0400 (EDT) Version: 1.5.2 OS: NT It appears that win32 has a default limit of 64 descriptors that can be handed to the select() function. This is pretty low for any serious async servers, and causes them to raise lots of errors under very moderate loads. We at DC ran into this using Medusa as a basis for ZServer, which serves Zope sites. It turns out that you can actually add a define when compiling the python15.dll for windows to bump the default fd limit to a more reasonable level. The approach _I_ took was to add the define: FD_SETSIZE=1024 to the preprocessor options in the MSVC project settings for python15.dll, though I imagine you could also roll the define into config.h or something (so long as it's defined before windows.h or any of the select / socket include files are referenced). It would make life much easier for win32 server developers if this define could find its way into the next official python release :^) Thanks! Brian Lloyd brian@digicool.com Software Engineer 540.371.6909 Digital Creations http://www.digicool.com ==================================================================== Audit trail: Fri Jul 30 10:43:41 1999 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-11 17:19 By: tim_one Comment: Boosted the Windows default to 512, in selectmodule.c rev 2.49. ------------------------------------------------------- Date: 2000-Nov-27 15:43 By: tim_one Comment: Reassigned from MarkH to me. Unclear what the new value should be (nothing is free ...). ------------------------------------------------------- Date: 2000-Nov-27 13:22 By: gvanrossum Comment: Tim -- it's time to commit on this. I recommend 512 as a compromise. 
<0.5 wink> ------------------------------------------------------- Date: 2000-Nov-10 15:27 By: nobody Comment: I recently raised this in the help desk for python. I am running into this in the WInsock arena and I really want to get over this hump. Can I get a Python dll with 1024 sockets in 1.5.2? I would be happy to test this in the Win2k arena for you (like that is a major gold star). Really even 256 would be ok for me .. but 1024 is a spot more attractive (as I wouldn't have to keep watch on this all the time). Many thanks to Martin von Loewis and Tim Peters for thier help. ------------------------------------------------------- Date: 2000-Nov-10 12:51 By: tim_one Comment: Mark, Guido is agreeable to Python adding its own #ifndef FD_SETSIZE #define FD_SETSIZE ??? #endif block. If other people are doing the define-this-thing-on-the-cmdline trick, fine, such a block won't interfere with their beliefs. So the primary remaining question is what "???" should be. Is 1024 enough? Someone else just bumped into the 64 limit (Python-Help), but didn't commit to a specific amount. ------------------------------------------------------- Date: 2000-Oct-05 21:11 By: mhammond Comment: Brian has agreed to help with a specific patch that will remain local to the Python build. Dropping priority to reflect that it wont affect most users, and wont make 2.0. ------------------------------------------------------- Date: 2000-Sep-21 21:16 By: tim_one Comment: Changed summary to say "Win32" instead of "NT", as this is a general Win32 issue. Mark, did you email your question directly to Brian? (This bug got moved over from Jitterbug, so he didn't see your note otherwise.) I certainly agree Python can't go changing the MS default value in any way visible from Python.h (which #includes config.h). ------------------------------------------------------- Date: 2000-Aug-30 23:19 By: mhammond Comment: I am a little worried that adding it to config.h may have side-effects when Python is embedded in other projects with their own socket config (eg, Mozilla :-) Now that socket and select are external .pyd modules, will it be sufficient to only add it to these extension modules? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110843&group_id=5470 From noreply@sourceforge.net Tue Dec 12 01:59:19 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 17:59:19 -0800 Subject: [Python-bugs-list] [Bug #122780] msvcrt: locking constants aren't defined. Message-ID: <200012120159.RAA03226@usw-sf-web2.sourceforge.net> Bug #122780, was updated on 2000-Nov-18 10:07 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: kirill_simonov Assigned to : fdrake Summary: msvcrt: locking constants aren't defined. Details: msvcrt.locking(fd, mode, nbytes): mode must be one of the following constants: LK_UNLOCK = 0 # Unlock LK_LOCK = 1 # Lock LK_NBLCK = 2 # Non-blocking lock LK_RLCK = 3 # Lock for read-only LK_NBRLCK = 4 # Non-blocking lock for read-only I think that constants should be defined in msvcrt and written in the docs. Follow-Ups: Date: 2000-Dec-11 17:59 By: tim_one Comment: I added the constants to msvcrtmodule.c, rev 1.6. Reassigned to Fred for docs. Fred, I've never used this function and am not sure why Guido accepted it. 
Nevertheless, the bug report is correct that the locking() function is unusable without these constants or their docs. The MS docs follow. The Python constants have the same names but do *not* have the leading underscore (e.g., LK_LOCK in Python). """ The _locking function locks or unlocks nbytes bytes of the file specified by handle. Locking bytes in a file prevents access to those bytes by other processes. All locking or unlocking begins at the current position of the file pointer and proceeds for the next nbytes bytes. It is possible to lock bytes past end of file. mode must be one of the following manifest constants, which are defined in LOCKING.H: _LK_LOCK Locks the specified bytes. If the bytes cannot be locked, the program immediately tries again after 1 second. If, after 10 attempts, the bytes cannot be locked, the constant returns an error. _LK_NBLCK Locks the specified bytes. If the bytes cannot be locked, the constant returns an error. _LK_NBRLCK Same as _LK_NBLCK. _LK_RLCK Same as _LK_LOCK. _LK_UNLCK Unlocks the specified bytes, which must have been previously locked. Multiple regions of a file that do not overlap can be locked. A region being unlocked must have been previously locked. _locking does not merge adjacent regions; if two locked regions are adjacent, each region must be unlocked separately. Regions should be locked only briefly and should be unlocked before closing a file or exiting the program. """ ------------------------------------------------------- Date: 2000-Nov-21 10:48 By: tim_one Comment: Assigned to me. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=122780&group_id=5470 From noreply@sourceforge.net Tue Dec 12 07:12:03 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Dec 2000 23:12:03 -0800 Subject: [Python-bugs-list] [Bug #125452] shlex.shlex hangs when parsing an unclosed quoted string Message-ID: <200012120712.XAA09665@usw-sf-web2.sourceforge.net> Bug #125452, was updated on 2000-Dec-11 23:12 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: shlex.shlex hangs when parsing an unclosed quoted string Details: import StringIO import shlex s = shlex.shlex(StringIO.StringIO("hello 'world")) you'll see that get_token doesn't test for EOF when it's in the ' state. Just adding that test should fix the problem. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125452&group_id=5470 From noreply@sourceforge.net Tue Dec 12 13:23:47 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 05:23:47 -0800 Subject: [Python-bugs-list] [Bug #125452] shlex.shlex hangs when parsing an unclosed quoted string Message-ID: <200012121323.FAA14060@usw-sf-web3.sourceforge.net> Bug #125452, was updated on 2000-Dec-11 23:12 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : esr Summary: shlex.shlex hangs when parsing an unclosed quoted string Details: import StringIO import shlex s = shlex.shlex(StringIO.StringIO("hello 'world")) you'll see that get_token doesn't test for EOF when it's in the ' state. Just adding that test should fix the problem. 
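Until that EOF check goes into get_token(), callers can guard their own input. The helper below is a rough workaround sketch (the function name is mine, and the quote counting is deliberately naive): it rejects obviously unterminated quotes before shlex ever sees them, which is enough to avoid the hang for inputs like the one in the report.

import shlex
import StringIO          # the 2.x module; newer Pythons spell it io.StringIO

def tokens_or_die(text):
    # An odd number of quote characters means an unterminated string, which
    # is exactly what sends get_token() into its endless loop.
    if text.count("'") % 2 or text.count('"') % 2:
        raise ValueError("unterminated quote in %r" % (text,))
    lexer = shlex.shlex(StringIO.StringIO(text))
    result = []
    while 1:
        token = lexer.get_token()
        if not token:        # get_token() returns '' at end of input
            break
        result.append(token)
    return result

With this guard, the report's input raises ValueError immediately instead of looping forever inside get_token().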
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125452&group_id=5470 From noreply@sourceforge.net Tue Dec 12 14:44:12 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 06:44:12 -0800 Subject: [Python-bugs-list] [Bug #125473] Typo in LICENSE: developement Message-ID: <200012121444.GAA16168@usw-sf-web3.sourceforge.net> Bug #125473, was updated on 2000-Dec-12 06:44 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : nobody Summary: Typo in LICENSE: developement Details: LICENSE says in line 12: "the Python core developement team moved to BeOpen.com". I I guess you could change that to "development" without changing the terms of the license, could you ? ;-) Shameless ad: Brought to you by the miraculous Debian package lint tool "lintian" (http://package.debian.org/lintian), which includes a spellchecker for common typos in control files of packages... You see, we're so paranoid that we even have automatic tools that keep monitoring license files ;-) Gregor For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125473&group_id=5470 From noreply@sourceforge.net Tue Dec 12 14:51:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 06:51:24 -0800 Subject: [Python-bugs-list] [Bug #125476] Codec naming scheme and aliasing support Message-ID: <200012121451.GAA11445@usw-sf-web1.sourceforge.net> Bug #125476, was updated on 2000-Dec-12 06:51 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: Feature Request Priority: 5 Submitted by: lemburg Assigned to : nobody Summary: Codec naming scheme and aliasing support Details: The docs should contain a note about the codec naming scheme, the use of codec packages and how to address them in the encoding name and some notes about the aliasing support which is available for codecs which are found by the standard codec search function in the encodings package. Here's a starter (actually a posting to python-dev, but it has all the needed details): """ I just wanted to inform you of a change I plan for the standard encodings search function to enable better support for aliasing of encoding names. The current implementation caches the aliases returned from the codecs .getaliases() function in the encodings lookup cache rather than in the alias cache. As a consequence, the hyphen to underscore mapping is not applied to the aliases. A codec would have to return a list of all combinations of names with hyphens and underscores in order to emulate the standard lookup behaviour. I have a ptach which fixes this and also assures that aliases cannot be overwritten by codecs which register at some later point in time. This assures that we won't run into situations where a codec import suddenly overrides behaviour of previously active codecs. [The patch was checked into CVS on 2000-12-12.] I would also like to propose the use of a new naming scheme for codecs which enables drop-in installation. As discussed on the i18n-sig list, people would like to install codecs without having the users to call a codec registration function or to touch site.py. The standard search function in the encodings package has a nice property (which I only noticed after the fact ;) which allows using Python package names in the encoding names, e.g. 
you can install a package 'japanese' and the access the codecs in that package using 'japanese.shiftjis' without having to bother registering a new codec search function for the package -- the encodings package search function will redirect the lookup to the 'japanese' package. Using package names in the encoding name has several advantages: * you know where the codec comes from * you can have mutliple codecs for the same encoding * drop-in installation without registration is possible * the need for a non-default encoding package is visible in the source code * you no longer need to drop new codecs into the Python standard lib """ For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125476&group_id=5470 From noreply@sourceforge.net Tue Dec 12 15:25:22 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 07:25:22 -0800 Subject: [Python-bugs-list] [Bug #125473] Typo in LICENSE: developement Message-ID: <200012121525.HAA12181@usw-sf-web1.sourceforge.net> Bug #125473, was updated on 2000-Dec-12 06:44 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : nobody Summary: Typo in LICENSE: developement Details: LICENSE says in line 12: "the Python core developement team moved to BeOpen.com". I I guess you could change that to "development" without changing the terms of the license, could you ? ;-) Shameless ad: Brought to you by the miraculous Debian package lint tool "lintian" (http://package.debian.org/lintian), which includes a spellchecker for common typos in control files of packages... You see, we're so paranoid that we even have automatic tools that keep monitoring license files ;-) Gregor Follow-Ups: Date: 2000-Dec-12 07:25 By: gvanrossum Comment: Done. The things you spend time on! (And force others to :-) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125473&group_id=5470 From noreply@sourceforge.net Tue Dec 12 15:25:52 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 07:25:52 -0800 Subject: [Python-bugs-list] [Bug #125476] Codec naming scheme and aliasing support Message-ID: <200012121525.HAA20707@usw-sf-web2.sourceforge.net> Bug #125476, was updated on 2000-Dec-12 06:51 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: Feature Request Priority: 5 Submitted by: lemburg Assigned to : lemburg Summary: Codec naming scheme and aliasing support Details: The docs should contain a note about the codec naming scheme, the use of codec packages and how to address them in the encoding name and some notes about the aliasing support which is available for codecs which are found by the standard codec search function in the encodings package. Here's a starter (actually a posting to python-dev, but it has all the needed details): """ I just wanted to inform you of a change I plan for the standard encodings search function to enable better support for aliasing of encoding names. The current implementation caches the aliases returned from the codecs .getaliases() function in the encodings lookup cache rather than in the alias cache. As a consequence, the hyphen to underscore mapping is not applied to the aliases. 
A codec would have to return a list of all combinations of names with hyphens and underscores in order to emulate the standard lookup behaviour. I have a ptach which fixes this and also assures that aliases cannot be overwritten by codecs which register at some later point in time. This assures that we won't run into situations where a codec import suddenly overrides behaviour of previously active codecs. [The patch was checked into CVS on 2000-12-12.] I would also like to propose the use of a new naming scheme for codecs which enables drop-in installation. As discussed on the i18n-sig list, people would like to install codecs without having the users to call a codec registration function or to touch site.py. The standard search function in the encodings package has a nice property (which I only noticed after the fact ;) which allows using Python package names in the encoding names, e.g. you can install a package 'japanese' and the access the codecs in that package using 'japanese.shiftjis' without having to bother registering a new codec search function for the package -- the encodings package search function will redirect the lookup to the 'japanese' package. Using package names in the encoding name has several advantages: * you know where the codec comes from * you can have mutliple codecs for the same encoding * drop-in installation without registration is possible * the need for a non-default encoding package is visible in the source code * you no longer need to drop new codecs into the Python standard lib """ For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125476&group_id=5470 From noreply@sourceforge.net Tue Dec 12 17:15:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 09:15:01 -0800 Subject: [Python-bugs-list] [Bug #123924] Windows - using OpenSSL, problem with socket in httplib.py Message-ID: <200012121715.JAA15010@usw-sf-web1.sourceforge.net> Bug #123924, was updated on 2000-Nov-30 06:11 Here is a current snapshot of the bug. Project: Python Category: Library Status: Closed Resolution: Fixed Bug Group: Platform-specific Priority: 5 Submitted by: ottobehrens Assigned to : gvanrossum Summary: Windows - using OpenSSL, problem with socket in httplib.py Details: We found that when compiling python with USE_SSL on Windows, an exception occurred on the line: ssl = socket.ssl(sock, self.key_file, self.cert_file) The socket.ssl function expected arg 1 to be a socket object and not an instance of a class. We changed it to the following, which resolved the problem. However, this is not a generic solution and breaks again under Linux. on class HTTPSConnection: def connect(self): "Connect to a host on a given (SSL) port." sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ssl = socket.ssl(sock._sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) Follow-Ups: Date: 2000-Dec-12 09:15 By: ottobehrens Comment: Thanks, the solution did work. Could the same problem not repeat where SSL is used in Windows, though? This is specifically httplib.py. I suppose not many people out there are doing other things with SSL besides using it to securely transfer HTTP? ------------------------------------------------------- Date: 2000-Dec-11 12:32 By: gvanrossum Comment: Checked in as revision 1.24. Now let's hope that this works -- the submitter never wrote back. 
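For anyone patching older installations by hand, the checked-in fix amounts to unwrapping the high-level socket object before calling socket.ssl(). The helper below restates the new HTTPSConnection.connect() logic as a standalone function (the function name is mine, and it needs a 2.x interpreter built with SSL support).

import socket

def ssl_connect(host, port, key_file=None, cert_file=None):
    # socket.ssl() wants the low-level socket, so unwrap the Python-level
    # wrapper when the platform gives us one with a _sock attribute.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    realsock = sock
    if hasattr(sock, "_sock"):
        realsock = sock._sock
    ssl = socket.ssl(realsock, key_file, cert_file)
    return sock, ssl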
------------------------------------------------------- Date: 2000-Nov-30 06:15 By: gvanrossum Comment: Try this patch instead: Index: httplib.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/httplib.py,v retrieving revision 1.24 diff -c -r1.24 httplib.py *** httplib.py 2000/10/12 19:58:36 1.24 --- httplib.py 2000/11/30 14:14:43 *************** *** 613,619 **** sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! ssl = socket.ssl(sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) --- 613,622 ---- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! realsock = sock ! if hasattr(sock, "_sock"): ! realsock = sock._sock ! ssl = socket.ssl(realsock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123924&group_id=5470 From noreply@sourceforge.net Tue Dec 12 19:20:27 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 11:20:27 -0800 Subject: [Python-bugs-list] [Bug #125531] sre Scanner.scan typo (?) Message-ID: <200012121920.LAA18294@usw-sf-web1.sourceforge.net> Bug #125531, was updated on 2000-Dec-12 11:20 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: sre Scanner.scan typo (?) Details: I believe there's a small glitch in the Scanner.scan method in the sre module (up through ver. 1.25). Here's what the scan method does when it finds a match: if callable(action): self.match = match action = action(self, m.group()) the local variable match in the above is a reference to the match routine from the internal scanner object in the C DLL. I think the intention of the above was probably to set self.match to m -- the Match object returned from the successful search. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125531&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:26:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:26:18 -0800 Subject: [Python-bugs-list] [Bug #125531] sre Scanner.scan typo (?) Message-ID: <200012122026.MAA28267@usw-sf-web2.sourceforge.net> Bug #125531, was updated on 2000-Dec-12 11:20 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : effbot Summary: sre Scanner.scan typo (?) Details: I believe there's a small glitch in the Scanner.scan method in the sre module (up through ver. 1.25). Here's what the scan method does when it finds a match: if callable(action): self.match = match action = action(self, m.group()) the local variable match in the above is a reference to the match routine from the internal scanner object in the C DLL. I think the intention of the above was probably to set self.match to m -- the Match object returned from the successful search. 
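For context, this is roughly how the (undocumented) Scanner class is meant to be used; the lexicon below is a toy example of mine, and the results shown assume an interpreter where the glitch is fixed (the class survives today as re.Scanner). Each action is called with the scanner object and the matched text, which is why the report expects self.match to hold the Match object m rather than the bound match method.

import re    # the sre engine is what sits behind re from 2.0 onwards

scanner = re.Scanner([
    (r"\d+",    lambda s, tok: ("NUM", int(tok))),   # s is the Scanner itself
    (r"[a-z]+", lambda s, tok: ("WORD", tok)),
    (r"\s+",    None),                               # None means "skip"
])

tokens, remainder = scanner.scan("fact 42 x")
# tokens    -> [('WORD', 'fact'), ('NUM', 42), ('WORD', 'x')]
# remainder -> ''   (any unmatched tail of the input)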
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125531&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:46:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:46:34 -0800 Subject: [Python-bugs-list] [Bug #117608] test_largefile crashes on IRIX 6 Message-ID: <200012122046.MAA28783@usw-sf-web2.sourceforge.net> Bug #117608, was updated on 2000-Oct-24 08:51 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: Platform-specific Priority: 3 Submitted by: bbaxter Assigned to : bwarsaw Summary: test_largefile crashes on IRIX 6 Details: During "make test", test_largefile caused an error. Here's the result in python: % python python2.0/test/test_largefile.py create large file via seek (may be sparse file) ... Traceback (most recent call last): File "python2.0/test/test_largefile.py", line 60, in ? f.flush() IOError: [Errno 22] Invalid argument Here's the version I'm running: Python 2.0 (#5, Oct 24 2000, 09:51:57) [C] on irix6 For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117608&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:49:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:49:57 -0800 Subject: [Python-bugs-list] [Bug #110631] Debugger does not understand packages (PR#283) Message-ID: <200012122049.MAA28850@usw-sf-web2.sourceforge.net> Bug #110631, was updated on 2000-Jul-31 14:09 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: Feature Request Priority: 3 Submitted by: nobody Assigned to : nobody Summary: Debugger does not understand packages (PR#283) Details: Jitterbug-Id: 283 Submitted-By: musingattheruins@yahoo.com Date: Mon, 10 Apr 2000 12:30:44 -0400 (EDT) Version: 1.5.2 OS: Win32 The python debugger (both Idle and PythonWin) does not understand packages. Can run scripts from the command line that cannot be run in the debugger... Create package 'Test' in the directory "My Modules", add an __init__.py (empty) to the directory "My modules\Test", create file testfile.py with the contents... class TheTest: def __init__(self): self.i = 1 def go(self): return self.i Add the path to the Python path with the following file (test.reg)... REGEDIT4 [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\1.5\PythonPath\TheTest] @="C:\\My modules" then try the following at the python prompt: import Test.testfile j = Test.testfile.TheTest() k = j.go runs fine right? Yes it does, now step through it in the debugger and you get... import Test.testfile j = Test.testfile.TheTest() #exception: attribute 'TheTest' k = j.go Does not appear to be related to the class (you can change it to a 'function in a module' instead of a 'method in a class in a module' and you get a similar result.) Debugger does not understand packages. ==================================================================== Audit trail: Tue Jul 11 08:29:15 2000 guido moved from incoming to open Follow-Ups: Date: 2000-Dec-12 12:49 By: gvanrossum Comment: I've added this feature request to PEP 42.
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110631&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:51:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:51:05 -0800 Subject: [Python-bugs-list] [Bug #110637] ihooks on windows and pythoncom (PR#294) Message-ID: <200012122051.MAA28874@usw-sf-web2.sourceforge.net> Bug #110637, was updated on 2000-Jul-31 14:09 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: nobody Assigned to : mhammond Summary: ihooks on windows and pythoncom (PR#294) Details: Jitterbug-Id: 294 Submitted-By: mak@mikroplan.com.pl Date: Thu, 13 Apr 2000 04:09:35 -0400 (EDT) Version: cvs OS: windows Hi, Python module ihooks is not so compatible with builtin imp while importing modules whose name is stored in registry eg. pythoncom/pywintypes. import ihooks ihooks.install() import pythoncom This code will fail inside pythonwin ide too ! ==================================================================== Audit trail: Tue Jul 11 08:29:17 2000 guido moved from incoming to open Follow-Ups: Date: 2000-Aug-30 23:23 By: mhammond Comment: Leaving open, but moving down the priority and resolution lists. A patch would help bump it back up :-) ------------------------------------------------------- Date: 2000-Aug-13 23:42 By: mhammond Comment: This needs a resolution. The "registered module" code in the code also needs to support HKEY_CURRENT_USER along with the HKEY_LOCAL_MACHINE it does now. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110637&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:51:41 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:51:41 -0800 Subject: [Python-bugs-list] [Bug #110682] pdb can only step when at botframe (PR#4) Message-ID: <200012122051.MAA28889@usw-sf-web2.sourceforge.net> Bug #110682, was updated on 2000-Jul-31 14:14 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: Later Bug Group: None Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: pdb can only step when at botframe (PR#4) Details: Jitterbug-Id: 4 Submitted-By: MHammond@skippinet.com.au Date: Mon, 12 Jul 1999 15:38:43 -0400 (EDT) Version: 1.5.2 OS: Windows [Resubmitted by GvR] It is a problem that bugged me for _ages_. Since the years I first wrote the Pythonwin debugger Ive learnt alot about how it works :-) The problem is simply: when the frame being debugged is self.botframe, it is impossible to continue - only "step" works. A "continue" command functions as a step until you start debugging a frame below self.botframe. It is less of a problem with pdb, but makes a GUI debugger clunky - if you start a debug session by stepping into a module, the "go" command seems broken. The simplest way to demonstrate the problem is to create a module, and add a "pdb.set_trace()" statement at the top_level (ie, at indent level 0). You will not be able to "continue" until you enter a function. My solution was this: instead of run() calling "exec" directly, it calls another internal function. This internal function contains a single line - the "exec", and therefore never needs to be debugged directly. Then stop_here is modified accordingly. 
The end result is that "self.botframe" becomes an "intermediate" frame, and is never actually stopped at - ie, self.botframe effectivly becomes one frame _below_ the bottom frame the user is interested in. Im not yet trying to propose a patch, just to discuss this and see if the "right thing" can be determined and put into pdb. Thanks, Mark. ==================================================================== Audit trail: Mon Jul 12 15:39:35 1999 guido moved from incoming to open Follow-Ups: Date: 2000-Oct-17 07:19 By: nobody Comment: Sorry I forgot to sigh the comment for 2000-Oct-17 07:18 David Hurt davehurt@flash.net ------------------------------------------------------- Date: 2000-Oct-17 07:18 By: nobody Comment: My common workaround is to always create a function called debug(): that calls the function in the module I am debugging. Instead of doing a runcall for my function I do a runcall on debug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110682&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:52:32 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:52:32 -0800 Subject: [Python-bugs-list] [Bug #110705] combination of socket.gethostbyname and os.system hangs program (PR#401) Message-ID: <200012122052.MAA20660@usw-sf-web1.sourceforge.net> Bug #110705, was updated on 2000-Jul-31 14:29 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: nobody Assigned to : jhylton Summary: combination of socket.gethostbyname and os.system hangs program (PR#401) Details: Jitterbug-Id: 401 Submitted-By: andyg@one.net.au Date: Mon, 17 Jul 2000 04:26:12 -0400 (EDT) Version: Python 1.5.2 (#3, Jun 29 2000, 15:52:04) [GCC 2.8.1] on sunos5 OS: SunOS psol002 5.6 Generic_105181-21 sun4u sparc SUNW,Ultra-Enterprise A combination of socket.gethostbyname and os.system appears to hang python intermittently. We run Dec and Sun systems - it only appears to be a problem with sun systems. The following is the simplest way I can reproduce the problem: test.py: -------------------------------- #!/usr/local/bin/python import os import socket print socket.gethostbyname( "a hostname (but not localhost)" ) os.system("echo fred") hang.sh: -------------------------------- #!/bin/ksh while true ; do ./test.py done output: --------------------------------- 10.666.666.666 fred 10.666.666.666 fred ... eventually ... 10.666.666.666 If "a hostname" is "localhost" it doesn't hang. For anything else which it can successfully resolve, it seems to hang. Thanks ! Andy. ==================================================================== Audit trail: Mon Jul 24 18:39:07 2000 jeremy changed notes Mon Jul 24 18:39:07 2000 jeremy moved from incoming to platformbug Follow-Ups: Date: 2000-Dec-12 12:52 By: gvanrossum Comment: No new information. Giving up on this one. ------------------------------------------------------- Date: 2000-Sep-21 20:57 By: gvanrossum Comment: Ask if the Sun system is a multi CPU system. It could be the dual CPU bug in disguise. Otherwise I have no clue how this could be a bug in Python (more likely a platform C library interaction), so am lowering the priority. 
------------------------------------------------------- Date: 2000-Sep-12 08:58 By: jhylton Comment: attempt to contact the original submittor ------------------------------------------------------- Date: 2000-Sep-07 15:01 By: jhylton Comment: Please do triage on this bug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110705&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:54:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:54:11 -0800 Subject: [Python-bugs-list] [Bug #110838] Inverse hyperbolic functions in cmath module (PR#231) Message-ID: <200012122054.MAA20706@usw-sf-web1.sourceforge.net> Bug #110838, was updated on 2000-Aug-01 14:15 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: None Bug Group: Feature Request Priority: 1 Submitted by: nobody Assigned to : tim_one Summary: Inverse hyperbolic functions in cmath module (PR#231) Details: Jitterbug-Id: 231 Submitted-By: nadavh@envision.co.il Date: Fri, 10 Mar 2000 18:35:07 -0500 (EST) Version: 1.52 OS: NT 4.0 SP4 1. The function cmath.acosh provides the negative branch with low precision. For example: >>> cmath.acosh(cmath.cosh(10.0)) (-10.0000000135+0j) Proposed solution --- use the following formula which is precise and avoids singularities with complex arguments: def acosh(x): return 2.0*log(sqrt(x+1.0) + sqrt(x-1.0)) - log(2.0) 2. The function cmath.sinh does not handle moderately large arguments. For example: >>> cmath.asinh(cmath.sinh(20.0)) (1.#INF+0j) Proposed solution: Use the textbook formula: def asinh(x): return log(x+sqrt(x*x+1.0)) This calculation is more limited then the acosh calculation, but still works fine. ==================================================================== Audit trail: Mon Apr 03 18:38:28 2000 guido changed notes Mon Apr 03 18:38:28 2000 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-12 12:54 By: gvanrossum Comment: I've added this feature request to PEP 42. ------------------------------------------------------- Date: 2000-Aug-01 17:38 By: jhylton Comment: I think this bug should be left open, but perhaps a new bug should be created for the general feature request "re-write cmath in python." It's up to you, Tim. ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: Might be a good idea. Waiting for patches. ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "Tim Peters" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Sat, 11 Mar 2000 13:47:25 -0500 [Tim] > C doesn't define any functions on complex numbers -- cmathmodule.c > implements these all on its own. [David Ascher] > As an aside, if anyone ever wants to trim the number of builtin C > modules, I found that it was much easier to write cmath.py than to > write cmath.java (for JPython). The same cmath.py should work fine > in CPython. Yes, I don't see anything in cmathmodule.c that *needs* to be coded in C; & coding would be much clearer in Python, using infix notation for the basic complex binary ops. Two possible reasons for leaving it in C: 1. Lower internal call overheads (i.e., speed). 2. Improving quality -- complex libraries are very difficult to get right in all cases if they're made IEEE-754 aware, and doing so requires fiddling with the processor-level 754 control & status features. 
But there's no portable way to do that now, and won't be until the next iteration of C. > I can dig it up, but I can't swear that I used the most numerically stable > algorithms. I can: you didn't . Doesn't matter, though! cmathmodule.c is naive too, and achieving good accuracy across the whole domain is a major undertaking. That gives the best reason to write it in Python: 3. There's a long way to go to make this "industrial strength", so the current cmath is really just a prototype. Everyone knows prototyping is much easier in Python. QED . > It did give the same numbers as CPython's cmath on a test set. So ship it . ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "David Ascher" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 21:49:27 -0800 > [Nadav Horesh, suggests different algorithms in cmath, for some complex > inverse hyperbolics] > > [Guido, misfires] > > We're just using the VC++ C library. > > C doesn't define any functions on complex numbers -- cmathmodule.c > implements these all on its own. I can't make time to look at > this now, but > complaining to Microsoft about this will do Nadav even less good than when > it *is* their problem . As an aside, if anyone ever wants to trim the number of builtin C modules, I found that it was much easier to write cmath.py than to write cmath.java (for JPython). The same cmath.py should work fine in CPython. I can dig it up, but I can't swear that I used the most numerically stable algorithms. It did give the same numbers as CPython's cmath on a test set. -david ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "Tim Peters" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 22:05:07 -0500 [Nadav Horesh, suggests different algorithms in cmath, for some complex inverse hyperbolics] [Guido, misfires] > We're just using the VC++ C library. C doesn't define any functions on complex numbers -- cmathmodule.c implements these all on its own. I can't make time to look at this now, but complaining to Microsoft about this will do Nadav even less good than when it *is* their problem . ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "David Ascher" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 16:47:23 -0800 > We're just using the VC++ C library. I suggest you send your bug > report to Microsoft. FWIW: the Perl folks are more and more (it seems to me) redoing things themselves if the C library tends to be broken or slow. I'm not suggesting that it's a good decision, just commenting. --david ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: Guido van Rossum Subject: Re: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 18:44:01 -0500 > Full_Name: Nadav Horesh > Version: 1.52 > OS: NT 4.0 SP4 > Submission from: (NULL) (212.25.119.223) > > > 1. The function cmath.acosh provides the negative branch with low > precision. For example: > > >>> cmath.acosh(cmath.cosh(10.0)) > (-10.0000000135+0j) > > Proposed solution --- use the following formula which is precise and > avoids singularities with complex arguments: > > def acosh(x): > return 2.0*log(sqrt(x+1.0) + sqrt(x-1.0)) - log(2.0) > > 2. 
The function cmath.sinh does not handle moderately large > arguments. For example: > > >>> cmath.asinh(cmath.sinh(20.0)) > (1.#INF+0j) > > Proposed solution: > > Use the textbook formula: > def asinh(x): > return log(x+sqrt(x*x+1.0)) > > This calculation is more limited then the acosh calculation, but > still works fine. We're just using the VC++ C library. I suggest you send your bug report to Microsoft. --Guido van Rossum (home page: http://www.python.org/~guido/) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110838&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:55:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:55:28 -0800 Subject: [Python-bugs-list] [Bug #114598] Blue Screen crash, popen, Win98, Norton Antivirus 2000 Message-ID: <200012122055.MAA25613@usw-sf-web3.sourceforge.net> Bug #114598, was updated on 2000-Sep-16 13:24 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: None Bug Group: 3rd Party Priority: 3 Submitted by: gvanrossum Assigned to : gvanrossum Summary: Blue Screen crash, popen, Win98, Norton Antivirus 2000 Details: On Windows 98, all of the os.popen*() functions cause a hard "blue screen" crash requiring a reboot, when Norton Antivirus 2000 is installed and enabled. (Not with Norton Antivirus version 5, not on Windows 2000.) While this is probably a bug in Norton Antivirus 2000, and we're already warning for this on the download page, that's not enough (we've seen at least one report in the newsgroup), and it would be good if we could somehow prevent it. Even a call to Py_FatalError is better than the blue screen, and raising os.error would be a *lot* better. Follow-Ups: Date: 2000-Dec-12 12:55 By: gvanrossum Comment: Nothing we can do about this. Closing the bug report. ------------------------------------------------------- Date: 2000-Nov-27 13:11 By: gvanrossum Comment: Yup, os.popen() still crashes after updating NAV. Windows Update doesn't tell me to do any critical updates. ------------------------------------------------------- Date: 2000-Nov-20 11:13 By: tim_one Comment: Assigned to Guido cuz there's something I want him to try: My new desktop box is running Win98SE and has NAV 2000 version 6.10.20. But I don't have any problem running os.popen with autoprotect engaged. Could you try using the NAV LiveUpdate facility to make sure all your NAV components are up to date, and try this again? The virus defs on my box are dated 11/13/2000. Perhaps also use Windows Update to make sure your OS components are up to date too. ------------------------------------------------------- Date: 2000-Oct-06 16:23 By: tim_one Comment: Indeed, I can detect whether AutoProtect is loaded at system startup, but not whether it's currently running. Just left another nag on the Symantec board but don't expect a useful response. ------------------------------------------------------- Date: 2000-Oct-06 12:23 By: gvanrossum Comment: Reduced priority even more. There's not much else we can do about this, it seems, except yell at Norton. :-( ------------------------------------------------------- Date: 2000-Sep-22 20:46 By: tim_one Comment: Reduced the priority a notch. The biggest trigger here was IDLE's use of os.popen (indirectly via webbrowser.py) to launch the Python HTML docs.
I put in a new Windows function (os.startfile) that uses Win32 ShellExecute instead, and changed webbrowser.py to use that. IDLE now lives peacefully with NAV 6.10.20. This was all Guido's idea, and he suggested that the priority of this bug should be dropped now. ------------------------------------------------------- Date: 2000-Sep-21 11:32 By: tim_one Comment: I cranked up my obnoxiousness level on the Symantec board (URL two comments down), and they may be on the edge of taking this seriously now. ------------------------------------------------------- Date: 2000-Sep-19 09:06 By: gvanrossum Comment: Tim's little C program crashes in the same way as Python when NAV2000 is enabled. back to Tim... ------------------------------------------------------- Date: 2000-Sep-19 02:21 By: tim_one Comment: 1. Sent Guido a small self-contained C program in the hope that it's enough to provoke the problem by itself. Reassigned this bug to Guido since he has to tell me what happens with that next (then assign back to me). 2. That's because Symantec is being no help at all. They suggest upgrading to NAV 2001(!). If I can post a tiny C program, maybe it will embarrass then into doing something. You can follow this soap opera at: http://servicenews.symantec.com/cgi-bin/displayArticle.cgi?group=symantec.support.win9x.nortonantivirus2000.general&article=57765 ------------------------------------------------------- Date: 2000-Sep-16 14:26 By: tim_one Comment: I don't know how to stop this. Pissed away an hour trying to get help on the Symantec site, and eventually found a support board to which I posted this msg: Product: Norton AntiVirus 2000 6.0 for Windows 95/98/NT Supported operating system: Windows 98 Action: Run Another Program Error Message: Summary: Blue screen crash I develop the Windows version of the core Python language. Several reports of blue-screen death on Win98 have been traced to NAV2000. Here's the exact msg from one such: An exception 0E has occurred at 0028:C02AD9D0 in VxD IFSMGR(04) + 000050E4. This was called from 0028:C000B511 in VxD VMM(01) + 0000A511. It may be possible to continue normally. This occurs whenever a user executes a member of the popen family of functions from within Python; for example, this Python program: import os p = os.popen("dir") popen is a std C library function, poorly implemented by Microsoft, but not *that* poorly . You can get our latest installer here: http://www.pythonlabs.com/tech/python2.0/download.html As the warning there says: ----------------- Incompatibility warning: Norton Antivirus 2000 can cause blue screen crashes on Windows 98 when a function in the os.popen*() family is invoked. To prevent this problem, disable Norton Antivirus when using Python. (Confirmed on Windows 98 Second Edition with Norton Antivirus version 6.10.20. The same Norton Antivirus version doesn't have this problem on Windows 2000. Norton Antivirus version 5 on Windows 98SE doesn't have this problem either.) ----------------- ActiveState is seeing the same problem with their derivative work ActivePython, as is PythonWare with their derivative work PythonWorks. So this affects a lot of people. A recent change to the implementation of a popular library module has made it acutely visible recently (didn't use to use popen, but does now). A workaround would be nice. 
More importantly, how can I *detect* whether an affected version of NAV is in use, so that we can shut down Python gracefully with an appropriate message before executing any of the popen functions that throw NAV into blue-screen territory? We try to be a very user-friendly language, and we'll do anything to prevent a crash. Alas, right now I don't know how to stop it. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=114598&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:54:49 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:54:49 -0800 Subject: [Python-bugs-list] [Bug #113797] Build problems on Reliant Unix Message-ID: <200012122054.MAA20730@usw-sf-web1.sourceforge.net> Bug #113797, was updated on 2000-Sep-07 07:33 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: ddittmar Assigned to : fdrake Summary: Build problems on Reliant Unix Details: - the linker requires the options '-W1 -Blargedynsym', otherwise, Python's global functions and variables are not visible to external modules - when building --with-threads, the linker requires the option -Kpthread - mmapmodule.o requires a special library Python version: 2.0b1 compiler version:CDR9908: cc: Fujitsu Siemens Computers GmbH: CDS++ V2.0C0003, 1.2.7.2 from 29 Jun 2000 CDR9908: cc: Fujitsu Siemens Computers GmbH: CDS++ V2.0C0003, 1.2.7.2 from 29 Jun 2000 Follow-Ups: Date: 2000-Oct-12 09:28 By: fdrake Comment: I'm recording here part of the needed patch for threads; this is needed in configure.in to get the -Kpthread option passed to the compiler and linker in all the appropriate places. There are still problems in the code. (It won't look right in a browser, but it provides all the needed information.) *************** *** 752,757 **** --- 752,761 ---- if test ! -z "$withval" -a -d "$withval" then LDFLAGS="$LDFLAGS -L$withval" fi + case "$ac_sys_system" in + ReliantUNIX*) LDFLAGS="$LDFLAGS -Kpthread"; + OPT="$OPT -Kpthread";; + esac AC_DEFINE(_REENTRANT) AC_CHECK_HEADER(mach/cthreads.h, [AC_DEFINE(WITH_THREAD) AC_DEFINE(C_THREADS) ------------------------------------------------------- Date: 2000-Oct-05 19:00 By: fdrake Comment: This won't be resolved for Python 2.0. There's enough that would be effected by revising the thread identification code that we don't want to destabilize the sources at this point. We should be able to resolve this for Python 2.1. I've added a comment about this to the platform notes section of the README file (revision 1.102) to alert the reader to this situation. ------------------------------------------------------- Date: 2000-Oct-03 09:22 By: fdrake Comment: I've sent a note to Daniel asking for the config.h and config.log files generated by configure. There's a real problem with the way we're creating thread identifiers; casting to a long just isn't sufficient. It might be good to know what kind of processor is on the machine. ------------------------------------------------------- Date: 2000-Oct-03 09:16 By: fdrake Comment: Daniel Dittmar's response: - the configure script works, except that the correct option is '-Kpthread' (you mistyped '-Lpthread' - there's a compilation error in thread_pthread.h:181: the expression '(long) threadid' is not valid. 
The definition of pthread_t is typedef struct PTHREAD_HANDLE_T { void *field1; short int field2; short int field3; } pthread_handle_t; typedef pthread_handle_t pthread_t; so I doubt that the alternative return (long) *(long *) &threadid is valid. I could compile it with this version, but I doubt it's returning a meaningful thread id. Is there a test for the thread module? I'm away for the rest of the week, so I couldn't test anything for the Wednesday date. - for your information, I'm including the results of 'make test', at least the failed ones: test test_fork1 crashed -- exceptions.OSError: [Errno 4] Interrupted system call test test_popen2 crashed -- exceptions.IOError: [Errno 4] Interrupted system call test_signal Trace/Breakpoint Trap - core dumped make: *** Error code 133 (ignored) make: *** Error code 133 (bu21) (ignored) These tests leave a few python processes around. I'll probably look into this when I return. Daniel ------------------------------------------------------- Date: 2000-Oct-02 08:30 By: fdrake Comment: Sent another version of the configure script to Daniel Dittmar for testing the thread support. I think this is the last remaining problem listed in this bug report. ------------------------------------------------------- Date: 2000-Oct-01 10:51 By: fdrake Comment: mmap patch checked in as Modules/mmapmodule.c revision 2.24. ------------------------------------------------------- Date: 2000-Oct-01 09:34 By: ddittmar Comment: The patch for the mmap module works on 2.0b2 ------------------------------------------------------- Date: 2000-Sep-28 10:36 By: fdrake Comment: I've sent a patch for the mmap module to Daniel to test on Reliant UNIX; the patch should remove the need to link to libucb on that platform (the only platform that needed that as far as we know). ------------------------------------------------------- Date: 2000-Sep-25 08:10 By: fdrake Comment: Fix to make sure the public API is properly exposed to extensions checked in as configure.in revision 1.155. The rest of these issues can be dealt with in 2.0 final. ------------------------------------------------------- Date: 2000-Sep-24 05:46 By: ddittmar Comment: - configure --without-threads works with the configure patch Revision 1.158 - mmapmodule would work if it includes the lines #include <unistd.h> static int getpagesize (void) { return sysconf (_SC_PAGESIZE); } This would be the preferred way as using the BSD compatibility with -lucb is discouraged. It requires changes to configure (has_pagesize, has_sysconf_sc_pagesize) - configure --with-threads doesn't build yet, keeping contact with fdrake ------------------------------------------------------- Date: 2000-Sep-21 08:46 By: fdrake Comment: Received message from Daniel indicating he should get a chance to test the changes this weekend, so it should be available for 2.0b2. ------------------------------------------------------- Date: 2000-Sep-21 08:17 By: fdrake Comment: Sent query to Daniel Dittmar asking if he's had a chance to test the revised configure script I sent. ------------------------------------------------------- Date: 2000-Sep-15 13:37 By: fdrake Comment: I'm sending a modified version of the configure script to Daniel Dittmar to test for the first two points in this bug report. ------------------------------------------------------- Date: 2000-Sep-15 11:57 By: fdrake Comment: For the mmap issue, I've added a comment to Modules/Setup.in to let installers know that -lucb may be needed. In revision 1.110.
------------------------------------------------------- Date: 2000-Sep-12 21:59 By: fdrake Comment: Received the following response from Daniel Dittmar : > We need to know the output of "uname -s" and "uname -r" > for this system. (If "uname -r" reports an error, please > try "uname -v".) uname -s ReliantUNIX-N uname -r 5.45 > Are you willing to test a modified configure script on > this platform? sure. > What additional library is required for the mmap module? The man page states -lucb. This didn't work on my machine as the BSD compatibility layer is not active. I tell you more as soon as I know how to activate it. *********** Another problem: to detect pthreads, the compiler must be called with -Kpthread. Otherwise, pthread.h goes into a branch where it tries to include a non existent header, fails, and configure reports 'no pthreads'. Daniel ------------------------------------------------------- Date: 2000-Sep-08 13:45 By: fdrake Comment: We need to know the output of "uname -s" and "uname -r" for this system. (If "uname -r" reports an error, please try "uname -v".) Are you willing to test a modified configure script on this platform? What additional library is required for the mmap module? ------------------------------------------------------- Date: 2000-Sep-07 15:05 By: jhylton Comment: Please do triage on this bug. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=113797&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:59:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:59:15 -0800 Subject: [Python-bugs-list] [Bug #116289] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: <200012122059.MAA20837@usw-sf-web1.sourceforge.net> Bug #116289, was updated on 2000-Oct-06 19:25 Here is a current snapshot of the bug. Project: Python Category: Tkinter Status: Open Resolution: None Bug Group: 3rd Party Priority: 3 Submitted by: tim_one Assigned to : tim_one Summary: Programs using Tkinter sometimes can't shut down (Windows) Details: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. 
Cheers, Joe Follow-Ups: Date: 2000-Dec-12 12:58 By: gvanrossum Comment: Tim, can you still reproduce this with the current CVS version? There's been one critical patch to _tkinter since the 2.0 release. An alternative would be to try with a newer version of Tcl (isn't 8.4 out already?). ------------------------------------------------------- Date: 2000-Oct-15 09:47 By: nobody Comment: Same as I've reported earlier; it hangs in the call to Tcl_Finalize (which is called by the DLL finalization code). It's less likely to hang if I call Tcl_Finalize from the _tkinter DLL (from user code). Note that the problem isn't really Python-related -- I have stand-alone samples (based on wish) that hangs in the same way. More later. ------------------------------------------------------- Date: 2000-Oct-13 07:40 By: gvanrossum Comment: Back to Tim since I have no clue what to do here. ------------------------------------------------------- Date: 2000-Oct-12 10:25 By: gvanrossum Comment: The recent fix to _tkinter (Tcl_GetStringResult(interp) instead of interp->result) didn't fix this either. As Tim has remarked in private but not yet recorded here, a workaround is to use pythonw instead of python, so I'm lowering thepriority again. Also note that the hanging process that Tim writes about apparently prevents Win98 from shutting down properly. ------------------------------------------------------- Date: 2000-Oct-07 00:37 By: tim_one Comment: More info (none good, but some worse so boosted priority): + Happens under release and debug builds. + Have not been able to provoke when starting in the debugger. + Ctrl+Alt+Del and killing Winoldap is not enough to clean everything up. There's still a Python (or Python_d) process hanging around that Ctrl+Alt+Del doesn't show. + This process makes it impossible to delete the associated Python .dll, and in particular makes it impossible to rebuild Python successfully without a reboot. + These processes cannot be killed! Wintop and Process Viewer both fail to get the job done. PrcView (a freeware process viewer) itself locks up if I try to kill them using it. Process Viewer freezes for several seconds before giving up. + Attempting to attach to the process with the MSVC debugger (in order to find out what the heck it's doing) finds the process OK, but then yields the cryptic and undocumented error msg "Cannot execute program". + The processes are not accumulating cycles. + Smells like deadlock. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116289&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:00:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:00:11 -0800 Subject: [Python-bugs-list] [Bug #116388] cStringIO rejects Unicode strings Message-ID: <200012122100.NAB20866@usw-sf-web1.sourceforge.net> Bug #116388, was updated on 2000-Oct-08 17:42 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Feature Request Priority: 5 Submitted by: prescod Assigned to : fdrake Summary: cStringIO rejects Unicode strings Details: >>> import cStringIO >>> s=cStringIO.StringIO(u"abcdefgh") Traceback (innermost last): File "", line 1, in ? s=cStringIO.StringIO(u"abcdefgh") TypeError: expected string, unicode found Follow-Ups: Date: 2000-Dec-12 13:00 By: gvanrossum Comment: Assigned to Fred -- maybe you can prod Jim into looking into this. 
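Until cStringIO itself learns about Unicode, two era-typical workarounds are possible. This is a sketch only, not a change to the module; the utf-8 choice is arbitrary: encode to a byte string first, or fall back to the pure-Python StringIO class, which does not type-check its argument.

import cStringIO, StringIO

u = u"abcdefgh"
s1 = cStringIO.StringIO(u.encode("utf-8"))   # byte strings are accepted
s2 = StringIO.StringIO(u)                    # pure-Python class stores the object as-is
print s1.read()
print s2.read()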
------------------------------------------------------- Date: 2000-Oct-09 01:34 By: lemburg Comment: I've marked this as feature request since making the standard lib Unicode compatible is a post-2.0 project (probably a good one for 2.1). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116388&group_id=5470 From noreply@sourceforge.net Tue Dec 12 20:58:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 12:58:11 -0800 Subject: [Python-bugs-list] [Bug #116289] Programs using Tkinter sometimes can't shut down Message-ID: <200012122058.MAA29052@usw-sf-web2.sourceforge.net> Bug #116289, was updated on 2000-Oct-06 19:25 Here is a current snapshot of the bug. Project: Python Category: Tkinter Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: tim_one Assigned to : tim_one Summary: Programs using Tkinter sometimes can't shut down Details: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe Follow-Ups: Date: 2000-Dec-12 12:58 By: gvanrossum Comment: Tim, can you still reproduce this with the current CVS version? There's been one critical patch to _tkinter since the 2.0 release. An alternative would be to try with a newer version of Tcl (isn't 8.4 out already?). ------------------------------------------------------- Date: 2000-Oct-15 09:47 By: nobody Comment: Same as I've reported earlier; it hangs in the call to Tcl_Finalize (which is called by the DLL finalization code). It's less likely to hang if I call Tcl_Finalize from the _tkinter DLL (from user code). Note that the problem isn't really Python-related -- I have stand-alone samples (based on wish) that hangs in the same way. More later. ------------------------------------------------------- Date: 2000-Oct-13 07:40 By: gvanrossum Comment: Back to Tim since I have no clue what to do here. ------------------------------------------------------- Date: 2000-Oct-12 10:25 By: gvanrossum Comment: The recent fix to _tkinter (Tcl_GetStringResult(interp) instead of interp->result) didn't fix this either. 
As Tim has remarked in private but not yet recorded here, a workaround is to use pythonw instead of python, so I'm lowering thepriority again. Also note that the hanging process that Tim writes about apparently prevents Win98 from shutting down properly. ------------------------------------------------------- Date: 2000-Oct-07 00:37 By: tim_one Comment: More info (none good, but some worse so boosted priority): + Happens under release and debug builds. + Have not been able to provoke when starting in the debugger. + Ctrl+Alt+Del and killing Winoldap is not enough to clean everything up. There's still a Python (or Python_d) process hanging around that Ctrl+Alt+Del doesn't show. + This process makes it impossible to delete the associated Python .dll, and in particular makes it impossible to rebuild Python successfully without a reboot. + These processes cannot be killed! Wintop and Process Viewer both fail to get the job done. PrcView (a freeware process viewer) itself locks up if I try to kill them using it. Process Viewer freezes for several seconds before giving up. + Attempting to attach to the process with the MSVC debugger (in order to find out what the heck it's doing) finds the process OK, but then yields the cryptic and undocumented error msg "Cannot execute program". + The processes are not accumulating cycles. + Smells like deadlock. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116289&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:03:22 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:03:22 -0800 Subject: [Python-bugs-list] [Bug #117158] String literal documentation is not up to date Message-ID: <200012122103.NAA20950@usw-sf-web1.sourceforge.net> Bug #117158, was updated on 2000-Oct-18 03:41 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 7 Submitted by: edg Assigned to : fdrake Summary: String literal documentation is not up to date Details: Section 2.4.1 of the Reference Manual does not mention unicode strings and the unicode escape sequences \u and \U at all. Moreover, it still states that "\x" escapes consume an arbitrary number (>=2) of hex digits (while it is exactly 2 right now: PEP223). Follow-Ups: Date: 2000-Dec-12 13:03 By: gvanrossum Comment: Can you fix this? Shouldn't be hard, right? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117158&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:04:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:04:26 -0800 Subject: [Python-bugs-list] [Bug #116677] minidom:Node.appendChild() has wrong semantics Message-ID: <200012122104.NAA20976@usw-sf-web1.sourceforge.net> Bug #116677, was updated on 2000-Oct-11 19:24 Here is a current snapshot of the bug. 
Project: Python Category: XML Status: Open Resolution: None Bug Group: None Priority: 7 Submitted by: akuchling Assigned to : fdrake Summary: minidom:Node.appendChild() has wrong semantics Details: Consider this test program: from xml.dom import minidom doc = minidom.Document() root = doc.createElement('root') ; doc.appendChild( root ) elem = doc.createElement('leaf') root.appendChild( elem ) root.appendChild( elem ) print doc.toxml() print root.childNodes It prints: [, ] 'elem' is now linked into the DOM tree in two places, which is wrong; according to the DOM Level 1 spec, "If the newChild is already in the tree, it is first removed." Follow-Ups: Date: 2000-Dec-12 13:04 By: gvanrossum Comment: Fred, can you check status on this? Possibly it's alrady been fixed. ------------------------------------------------------- Date: 2000-Nov-23 18:29 By: akuchling Comment: Patch #102492 has been submitted to fix this. ------------------------------------------------------- Date: 2000-Nov-21 14:23 By: fdrake Comment: Re-categorized this bug to "XML". This is *not* fixed by Lib/xml/dom/minidom.py revision 1.14. Unfortunately, this bug will be a little harder to fix. I looked to see if I could determine presence in the tree by checking for parentNode != None, but that isn't sufficient. xml.dom.pulldom maintains state by filling in the parentNode attribute, so it has a chain of ancestors; it needs this to find the node to add children to in DOMEventStream.expandNode(). Testing that a node is already in the tree is harder, but not much harder. A reasonable fix for this bug should not be difficult. ------------------------------------------------------- Date: 2000-Oct-16 06:47 By: akuchling Comment: I don't see why this particular deviation is a border case. All the methods for modifying a DOM tree -- appendChild(), insertBefore(), replaceChild() -- all behave the same way, first removing the added node if it's already in the tree somewhere. This will make it more difficult to translate DOM-using code from, say, Java, to Python + minidom, since you'll have to remember to add extra .removeChild() calls. Worse still, the problems caused by this will be hard to track down; portions of your DOM tree are aliased, but .toxml() won't make this clear. ------------------------------------------------------- Date: 2000-Oct-16 00:43 By: loewis Comment: This is indeed a bug in minidom, but I don't think it should be corrected for 2.0; I suggest to reduce the priority of it, or close it as "later". While this is a deviation from the DOM spec, it seems as a border case. As such, it should be documented; users can always explicitly remove the node before appending it elsewhere. ------------------------------------------------------- Date: 2000-Oct-12 07:37 By: nobody Comment: The test_minidom failure turned out to be caused by something else. However, I rechecked my test case and it's still broken with tonight's CVS. ------------------------------------------------------- Date: 2000-Oct-11 20:11 By: akuchling Comment: CVS as of this evening. Did it work before? (Hmm... tonight test_minidom is failing for me for some reason. Wonder if it's related?) ------------------------------------------------------- Date: 2000-Oct-11 19:55 By: fdrake Comment: Andrew: Are you using 2.0c1 or CVS? 
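The DOM Level 1 rule quoted in the report can be sketched as a small helper. This is an illustration only, not the checked-in patch #102492; as the follow-up above notes, testing parentNode alone is not a reliable "already in the tree" check for minidom, since pulldom also fills that attribute in.

from xml.dom import minidom

def spec_append_child(parent, new_child):
    # "If the newChild is already in the tree, it is first removed."
    if new_child.parentNode is not None:
        new_child.parentNode.removeChild(new_child)
    return parent.appendChild(new_child)

doc = minidom.Document()
root = doc.appendChild(doc.createElement('root'))
leaf = doc.createElement('leaf')
spec_append_child(root, leaf)
spec_append_child(root, leaf)       # re-appending moves the node instead of aliasing it
print len(root.childNodes)          # 1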
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116677&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:07:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:07:09 -0800 Subject: [Python-bugs-list] [Bug #117090] PIL (TkImaging) extension instructions wrong Message-ID: <200012122107.NAA25895@usw-sf-web3.sourceforge.net> Bug #117090, was updated on 2000-Oct-17 08:55 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: None Bug Group: None Priority: 7 Submitted by: vanandel Assigned to : fdrake Summary: PIL (TkImaging) extension instructions wrong Details: Modules/Setup.in contains: # *** Uncomment and edit for PIL (TkImaging) extension only: # -DWITH_PIL -I../Extensions/Imaging/libImaging tkImaging.c \ However, there is no directory 'Extensions' or any source file tkImaging.c Either these instructions should be removed from Setup.in, or tkImaging.c should be added to the distribution, or instructions should be added where to find tkImaging.c and the associated imaging libraries. Follow-Ups: Date: 2000-Dec-12 13:07 By: gvanrossum Comment: I'm adding a comment with a pointer to the URL that /F mentions. ------------------------------------------------------- Date: 2000-Oct-18 16:19 By: nobody Comment: I'm not sure "wrong" is the right word here -- the comment is clearly correct, as long as the reader understands that modules that are commented out in the Setup file doesn't necessarily build on all platforms (as mentioned at the top of the Setup file, and also in the README file. "if you get compilation or link errors, disable it -- you're missing support"). however, the easiest solution here is probably to add a reference to: http://www.pythonware.com/products/pil ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117090&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:08:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:08:06 -0800 Subject: [Python-bugs-list] [Bug #116678] minidom doesn't raise exception for illegal children Message-ID: <200012122108.NAA25935@usw-sf-web3.sourceforge.net> Bug #116678, was updated on 2000-Oct-11 19:30 Here is a current snapshot of the bug. Project: Python Category: XML Status: Open Resolution: None Bug Group: None Priority: 6 Submitted by: akuchling Assigned to : fdrake Summary: minidom doesn't raise exception for illegal children Details: Some types of DOM node such as Text can't have children. minidom doesn't check for this at all: from xml.dom import minidom doc = minidom.Document() text = doc.createTextNode('lorem ipsum') elem = doc.createElement('leaf') text.appendChild( elem ) print text.toxml() This outputs just 'lorem ipsum', but elem really is a child of text; Text.toxml() just isn't recursing because it doesn't expect to do so. Follow-Ups: Date: 2000-Dec-12 13:08 By: gvanrossum Comment: Reassigning to Fred so he can pressure Paul into doing something about this. ------------------------------------------------------- Date: 2000-Nov-23 08:07 By: akuchling Comment: Patch #102485 has been submitted to fix this. ------------------------------------------------------- Date: 2000-Nov-21 14:33 By: fdrake Comment: Oops, should re-categorize this as "XML" while I'm at it. 
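The kind of guard under discussion can be sketched with a tiny mixin. This is an illustration only, not the submitted patch #102485; the Childless and Text names are invented for the example, and the exception type follows the ValueError reasoning in the follow-ups.

class Childless:
    # meant to be mixed into leaf node classes such as Text or Comment
    def appendChild(self, node):
        raise ValueError("%s nodes cannot have children"
                         % self.__class__.__name__)

class Text(Childless):
    pass

try:
    Text().appendChild(object())
except ValueError, err:
    print err                       # Text nodes cannot have children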
------------------------------------------------------- Date: 2000-Nov-21 14:33 By: fdrake Comment: >From the documentation, I'd expect the Pythonic "moral equivalents" to be raised, which would be a ValueError in the case of illegal node types. I'll even go so far as to say that ValueError should be raised when a second documentElement is appended, instead of a TypeError, to be more consistent with usage else where in the standard library: Pythonic style is to raise a ValueError when the type of a value is right (in this case, a DOM Node), but the specific value is not acceptable, either because it is illegal or because it cannot be accepted given existing state (like already having a documentElement). ------------------------------------------------------- Date: 2000-Oct-15 07:06 By: loewis Comment: I believe this is not a bug, but an intended deviation from the DOM spec. minidom (as the proposed documentation in patch 101821 explains) does not support the IDL exceptions of module DOM, so it cannot report errors about improper usage. ------------------------------------------------------- Date: 2000-Oct-12 20:02 By: fdrake Comment: This is a bug with detecting an improper use. It should be fixed, but need not be for Python 2.0. Correct use will not produce erroneous behavior. Reducing priority by one. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116678&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:09:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:09:24 -0800 Subject: [Python-bugs-list] [Bug #117508] Building 2.0 under Solaris 8 with GCC 2.95.2 fails to link Message-ID: <200012122109.NAA25961@usw-sf-web3.sourceforge.net> Bug #117508, was updated on 2000-Oct-23 10:16 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: None Bug Group: Platform-specific Priority: 6 Submitted by: devphil Assigned to : david_ascher Summary: Building 2.0 under Solaris 8 with GCC 2.95.2 fails to link Details: With no special options to 'configure' the final link step results in cd Modules; make OPT="-g -O2 -Wall -Wstrict-prototypes" VERSION="2.0" \ prefix="/usr/local" exec_prefix="/usr/local" \ LIBRARY=../libpython2.0.a link make[1]: Entering directory `/tmp/pedwards/newbuild/Python-2.0/Modules' gcc python.o \ ../libpython2.0.a -ldb -lpthread -lsocket -lnsl -ldl -lthread -lm -o python Undefined first referenced symbol in file dbopen ../libpython2.0.a(bsddbmodule.o) ld: fatal: Symbol referencing errors. No output written to python collect2: ld returned 1 exit status make[1]: *** [link] Error 1 This is using GNU make, and GCC with the native linker. Using the native compiler works fine. Follow-Ups: Date: 2000-Dec-12 13:09 By: gvanrossum Comment: Closing. Seems to have been a unique configuration problem. ------------------------------------------------------- Date: 2000-Oct-28 19:17 By: devphil Comment: Incredible. I file a bug on Python, and end up discovering a bug in BSDDB. There is a BSDDB locally installed... % ls -l /usr/local/include/db.h lrwxrwxrwx 1 38 Dec 7 1999 /usr/local/include/db.h -> xxx/db-2.7.7/BerkeleyDB/include/db.h % ls -l /usr/local/lib/libdb.a lrwxrwxrwx 1 37 Dec 7 1999 /usr/local/lib/libdb.a -> xxx/db-2.7.7/BerkeleyDB/lib/libdb.a ...but they do match up, and -ldb is passed on the link line. There is no dbopen() in db.h, only db_open(), and that function is in the library. 
The dbopen() function is in db_185.h, but BSDDB 1.85 compatibility has to be specifically enabled when building BSDDB. We didn't request it, because we don't need it. So it didn't get built, and that is correct behavior. However, BSDDB installs db_185.h unconditionally, even if the actual implementation is left out of the library. And bsddbmodule looks for db_185.h before db.h. (This might have been corrected; I'm looking through source from 2.7.7 which is 14 months old.) Guess I need to tell BSDDB folks to actually *check* to see if 1.85 support was requested by the user before installing the header. Grrrrr... Thanks for pointing that out. I try to stay away from databases when possible, and would never think on my own to look for this. ------------------------------------------------------- Date: 2000-Oct-28 03:50 By: loewis Comment: It appears that configure detected the presence of db.h on your system, perhaps in /usr/local. Do you have a BSDDB installation there? Could it be that this installation is somehow corrupted (e.g. that the db.h header does not match the libdb.a library?). That would explain why the native compiler has no problems - it simply doesn't see the db.h header in /usr/local, so it doesn't even attempt to build bsddbmodule.o. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117508&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:01:14 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:01:14 -0800 Subject: [Python-bugs-list] [Bug #116547] test_poll.py fails on SPARCstation LX under Red Hat 5.2 Message-ID: <200012122101.NAA29128@usw-sf-web2.sourceforge.net> Bug #116547, was updated on 2000-Oct-10 15:02 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Open Resolution: None Bug Group: Irreproducible Priority: 1 Submitted by: holdenweb Assigned to : akuchling Summary: test_poll.py fails on SPARCstation LX under Red Hat 5.2 Details: Attached is a traceback from running the test manually. Running poll test 1 This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. This is a test. Traceback (most recent call last): File "./Lib/test/test_poll.py", line 171, in ? test_poll1() File "./Lib/test/test_poll.py", line 65, in test_poll1 poll_unit_tests() File "./Lib/test/test_poll.py", line 77, in poll_unit_tests r = p.poll() select.error: (9, 'Bad file descriptor') Follow-Ups: Date: 2000-Dec-12 13:01 By: gvanrossum Comment: Is this worth keeping open? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116547&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:11:56 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:11:56 -0800 Subject: [Python-bugs-list] [Bug #117464] clash with BSD db when building Message-ID: <200012122111.NAA26038@usw-sf-web3.sourceforge.net> Bug #117464, was updated on 2000-Oct-23 00:29 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: fleury Assigned to : nobody Summary: clash with BSD db when building Details: On RH7.0, besides the LONG_BIT issue, I came across a db.h mismatch.
The preprocessor directive values used in bsddbmodule.c correspond to the /usr/include/db1/db.h file, but on my system, /usr/include/db.h points to db3/db.h which does not define the same set of values, and does not compile. Compiling with the old file works, but obviously it does not link... configure (with no options) ran trouble free. The system is: RedHat 7.0 gcc version 2.96 20000731 (yes with the LONG_BIT problem) Python 2.0 final release Regards, Pascal Follow-Ups: Date: 2000-Nov-06 08:40 By: montanaro Comment: This is going to require a bit of effort. My current scheme for detecting whether bsddb can be built/linked or not relies on the presence or absence of db.h and/or db_185.h. If db_185.h is present, libdb v.2 is assumed. If only db.h is present, libdb v.1 is assumed. Now Sleepycat has libdb v.3, and on RH7 it appears you can have all three versions installed at once. I don't yet know if bsddbmodule.c can be built/linked with v.3 (seems likely, since db_185.h still existts), but even if it can, configure will have to grovel around in db.h looking for DB_VERSION_MAJOR. If it doesn't exist, we have v.1. If it does exist, its value will determine what version > 1 we have. I imagine for an autoconf whiz this will be a simple task, but it's more of a challenge than I have time for at the moment. Anyone want to take this on? ------------------------------------------------------- Date: 2000-Oct-23 18:13 By: fleury Comment: Well, I also tried at home, where I have a vanilla RH7.0 and it compiles perfectly. The reported bug was on a RH6.2->RH7.0 upgraded machine. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117464&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:12:41 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:12:41 -0800 Subject: [Python-bugs-list] [Bug #116008] Subsection Hypertext Links are broken in HTML Docs Message-ID: <200012122112.NAA21214@usw-sf-web1.sourceforge.net> Bug #116008, was updated on 2000-Oct-04 07:33 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: pefu Assigned to : fdrake Summary: Subsection Hypertext Links are broken in HTML Docs Details: For example load ftp://python.beopen.com/pub/docco/devel/tut/node3.html into your favorite HTML browser and click on the link labeled "1.1 Where From Here". It doesn't work as it used to work before in the 1.5.2 docs. Unfortunately I can't tell which change to the latex2html engine broke this. Follow-Ups: Date: 2000-Dec-12 13:12 By: gvanrossum Comment: Is this still broken, Fred? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116008&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:17:50 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:17:50 -0800 Subject: [Python-bugs-list] [Bug #119707] urllib failure when return code not 200 Message-ID: <200012122117.NAA26203@usw-sf-web3.sourceforge.net> Bug #119707, was updated on 2000-Oct-29 16:11 Here is a current snapshot of the bug. Project: Python Category: Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : jhylton Summary: urllib failure when return code not 200 Details: urllib fails sometimes. 
A traceback follows but the problem is that, on line 286, fp is used as a parameter when it has a value of None. Not sure why; HTTP.getreply seems like it always returns a file. Not time right now to look further into it... brian@sweetapp.com Traceback (most recent call last): File "/usr/home/sweetapp/public_html/zonewatcher/Rating", line 83, in ? UpdatePlayerRatings( database, zonePlayer ) File "/usr/home/sweetapp/public_html/zonewatcher/Rating", line 59, in UpdatePlayerRatings mainRatings = GetMainRatings( zonePlayer.GetZoneName( ), AOEIImainRating, AOEIIexpansionMainRating ) File "/usr/home/sweetapp/public_html/zonewatcher/Rating", line 20, in GetMainRatings ratingsPage = GetRatingsPage( zoneID ) File "/usr/home/sweetapp/public_html/zonewatcher/Rating", line 14, in GetRatingsPage url = urllib.urlopen( 'http://www.zone.com/Profile/RatingsPlayer.asp?PlayerID=' + zoneID ) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 61, in urlopen return _urlopener.open(url) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 166, in open return getattr(self, name)(url) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 286, in open_http return self.http_error(url, fp, errcode, errmsg, headers) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 303, in http_error return self.http_error_default(url, fp, errcode, errmsg, headers) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 518, in http_error_default return addinfourl(fp, headers, "http:" + url) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 772, in __init__ addbase.__init__(self, fp) File "/usr/home/sweetapp/Python-2.0/Lib/urllib.py", line 726, in __init__ self.read = self.fp.read AttributeError: 'None' object has no attribute 'read' Follow-Ups: Date: 2000-Dec-12 13:17 By: gvanrossum Comment: Jeremy, can you see if this is a valid bug report? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119707&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:19:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:19:11 -0800 Subject: [Python-bugs-list] [Bug #119709] A make error Message-ID: <200012122119.NAA26263@usw-sf-web3.sourceforge.net> Bug #119709, was updated on 2000-Oct-29 16:20 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: jsfrank Assigned to : akuchling Summary: A make error Details: I use: Slackware96 Linux 2.0.0 gcc 2.7.2 glibc2 PC/i486 And I try to install Python-2.0.tar.gz package. I use default Modules/Setup. When run: #./configure #make gcc tells me in Modules/selectmodule.c, begins from the 345 line, POLLIN undeclared,... Every POLL* name follows are all undeclared. Which header file lost? poll.h? Or something wrong? Thanks. Follow-Ups: Date: 2000-Dec-12 13:19 By: gvanrossum Comment: Did the user ever reply? If not, let's close this one. There are too many potential configuration problems lingering around in the Bugs list that are probably not bugs in Python... ------------------------------------------------------- Date: 2000-Nov-03 12:09 By: akuchling Comment: Can you provide the exact output from make, please, and a copy of the config.h generated by Python's configure script? It's possible that both HAVE_POLL_H and HAVE_POLL are defined but the header files are wrong in some way that POLLIN isn't defined. 
You can provide the output and config.h via private e-mail to akuchlin@mems-exchange.org. ------------------------------------------------------- Date: 2000-Nov-02 20:16 By: fdrake Comment: I think this has been fixed post-2.0, but I'm not sure. Assigned to Andrew since he'll know and, if it's not fixed, will be the one to do so. ;-) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119709&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:32:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:32:04 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: <200012122132.NAA26620@usw-sf-web3.sourceforge.net> Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : fdrake Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor until further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K:

from zlib import *
from random import *

# create compressor and decompressor
c=compressobj(9)
d=decompressobj()

# try data sizes of 1-63K
for l in range(1,64):
    # generate random data stream
    a=''
    for i in range(l*1024):
        a=a+chr(randint(0,255))
    # compress, sync-flush, and decompress
    t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH))
    # if decompressed data is different to input data, barf
    if len(t) != len(a):
        print len(a),len(t),len(d.unused_data)
        raise error

Follow-Ups: Date: 2000-Dec-12 13:32 By: gvanrossum Comment: OK, assigned to Fred. You may ask Andrew what to write. :-) ------------------------------------------------------- Date: 2000-Dec-08 14:50 By: abo Comment: I'm not that sure I'm happy with it just being marked closed. AFAICT, the implementation definitely doesn't do what the documentation says, so to save people like me time when they hit it, I'd prefer the bug at least be assigned to documentation so that the limitation is documented. From my reading of the documentation as it stands, the fact that there is more pending data in the decompressor should be indicated by its "unused_data" attribute. The tests seem to show that "decompress()" is only processing 16K of compressed data each call, which would suggest that "unused_data" should contain the rest. However, in all my tests that attribute has always been empty. Perhaps the bug is in there somewhere? Another slight strangeness: even if "unused_data" did contain something, the only way to get it out is by feeding in more compressed data, or issuing a flush(), thus ending the decompression... I guess that since I've been bitten by this, it's up to me to fix it.
I've got the source to 2.0 and I'll have a look and see if I can submit a patch. and I was coding this app in python to avoid coding in C :-) ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. (The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:54:38 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:54:38 -0800 Subject: [Python-bugs-list] [Bug #117195] Broken \ref link in documentation Message-ID: <200012122154.NAA30583@usw-sf-web2.sourceforge.net> Bug #117195, was updated on 2000-Oct-18 11:51 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: fdrake Assigned to : fdrake Summary: Broken \ref link in documentation Details: [Report received by python-docs.] From: Roy Smith Date: Wed, 18 Oct 2000 14:45:25 -0700 On the page http://www.python.org/doc/current/ref/exceptions.html, if I click on the link for secion 7.4 (http://www.python.org/doc/current/ref/node83.html#try), I get an Error 404: file not found. Follow-Ups: Date: 2000-Dec-12 13:54 By: fdrake Comment: I'll note that I think this is a LaTeX2HTML bug, but I need to spend some time digging into the \ref{} handling. It seems to have other problems as well. ;-( ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117195&group_id=5470 From noreply@sourceforge.net Tue Dec 12 21:56:54 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 13:56:54 -0800 Subject: [Python-bugs-list] [Bug #117608] test_largefile crashes or IRIX 6 Message-ID: <200012122156.NAA22343@usw-sf-web1.sourceforge.net> Bug #117608, was updated on 2000-Oct-24 08:51 Here is a current snapshot of the bug. 
Project: Python Category: Core Status: Open Resolution: None Bug Group: Platform-specific Priority: 3 Submitted by: bbaxter Assigned to : sjoerd Summary: test_largefile crashes or IRIX 6 Details: During "make test", test_largefile caused an error. Here's the result in python: % python python2.0/test/test_largefile.py create large file via seek (may be sparse file) ... Traceback (most recent call last): File "python2.0/test/test_largefile.py", line 60, in ? f.flush() IOError: [Errno 22] Invalid argument Here's the version I'm running: Python 2.0 (#5, Oct 24 2000, 09:51:57) [C] on irix6 Follow-Ups: Date: 2000-Dec-12 13:56 By: bwarsaw Comment: Reassigning because I have neither large file support nor an IRIX machine. Guido suggests that Sjoerd might have access to IRIX. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117608&group_id=5470 From noreply@sourceforge.net Tue Dec 12 23:18:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 15:18:06 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : fdrake Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-12 15:18 By: abo Comment: Further comments... After looking at the C code, a few things became clear; I need to read more about C/Python interfacing, and the "unused_data" attribute will only contain data if additional data is fed to a de-compressor at the end of a complete compressed stream. The purpose of the "unused_data" attribute is not clear in the documentation, so that should probably be clarified (mind you, I am looking at pre-2.0 docs so maybe it already has?). The failure to produce all data up to a sync-flush is something else... I'm still looking into it. I'm not sure if it is an inherent limitation of zlib, something that needs to be fixed in zlib, or something that needs to be fixed in the python interface. If it is an inherent limitation, I'd like to characterise it a bit better before documenting it. 
If it is something that needs to be fixed in either zlib or the python interface, I'd like to fix it. Unfortunately, this is a bit beyond me at the moment, mainly in time, but also a bit in skill (need to read the python/C interfacing documentation). Maybe over the Christmas holidays I'll get a chance to fix it. ------------------------------------------------------- Date: 2000-Dec-12 13:32 By: gvanrossum Comment: OK, assigned to Fred. You may ask Andrew what to write. :-) ------------------------------------------------------- Date: 2000-Dec-08 14:50 By: abo Comment: I'm not that sure I'm happy with it just being marked closed. AFAICT, the implementation definitely doesn't do what the documentation says, so to save people like me time when they hit it, I'd prefer the bug at least be assigned to documentation so that the limitation is documented. From my reading of the documentation as it stands, the fact that there is more pending data in the decompressor should be indicated by its "unused_data" attribute. The tests seem to show that "decompress()" is only processing 16K of compressed data each call, which would suggest that "unused_data" should contain the rest. However, in all my tests that attribute has always been empty. Perhaps the bug is in there somewhere? Another slight strangeness: even if "unused_data" did contain something, the only way to get it out is by feeding in more compressed data, or issuing a flush(), thus ending the decompression... I guess that since I've been bitten by this, it's up to me to fix it. I've got the source to 2.0 and I'll have a look and see if I can submit a patch. And I was coding this app in Python to avoid coding in C :-) ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- the .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. (The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy.
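For readers hitting the same limitation, the workaround mentioned in the report (decompressing in smaller chunks) can be written as a short helper. This is only an illustrative sketch against the documented compressobj/decompressobj interface; the helper name and chunk size are arbitrary, and it works around the symptom rather than fixing anything in zlibmodule:

    import zlib

    def decompress_chunked(data, chunksize=1024):
        # Feed the compressed stream to the decompressor a little at a
        # time instead of in one big decompress() call.
        d = zlib.decompressobj()
        result = ''
        for i in range(0, len(data), chunksize):
            result = result + d.decompress(data[i:i+chunksize])
        # flush() ends the decompression, so only do this when no more
        # compressed data is expected for this stream.
        result = result + d.flush()
        return result

In the test loop from the report, passing c.compress(a)+c.flush(Z_SYNC_FLUSH) through a helper like this, instead of handing it to d.decompress() in a single call, is the shape of the workaround being described.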
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Wed Dec 13 02:00:39 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 18:00:39 -0800 Subject: [Python-bugs-list] [Bug #116289] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: Bug #116289, was updated on 2000-Oct-06 19:25 Here is a current snapshot of the bug. Project: Python Category: Tkinter Status: Open Resolution: None Bug Group: 3rd Party Priority: 3 Submitted by: tim_one Assigned to : tim_one Summary: Programs using Tkinter sometimes can't shut down (Windows) Details: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe Follow-Ups: Date: 2000-Dec-12 18:00 By: tim_one Comment: Just reproduced w/ current CVS, but didn't hang until the 8th try. http://dev.scriptics.com/software/tcltk/ says 8.3 is still the latest released version; don't know whether that URL still makes sense, though. ------------------------------------------------------- Date: 2000-Dec-12 12:58 By: gvanrossum Comment: Tim, can you still reproduce this with the current CVS version? There's been one critical patch to _tkinter since the 2.0 release. An alternative would be to try with a newer version of Tcl (isn't 8.4 out already?). ------------------------------------------------------- Date: 2000-Oct-15 09:47 By: nobody Comment: Same as I've reported earlier; it hangs in the call to Tcl_Finalize (which is called by the DLL finalization code). It's less likely to hang if I call Tcl_Finalize from the _tkinter DLL (from user code). Note that the problem isn't really Python-related -- I have stand-alone samples (based on wish) that hangs in the same way. More later. ------------------------------------------------------- Date: 2000-Oct-13 07:40 By: gvanrossum Comment: Back to Tim since I have no clue what to do here. ------------------------------------------------------- Date: 2000-Oct-12 10:25 By: gvanrossum Comment: The recent fix to _tkinter (Tcl_GetStringResult(interp) instead of interp->result) didn't fix this either. 
As Tim has remarked in private but not yet recorded here, a workaround is to use pythonw instead of python, so I'm lowering thepriority again. Also note that the hanging process that Tim writes about apparently prevents Win98 from shutting down properly. ------------------------------------------------------- Date: 2000-Oct-07 00:37 By: tim_one Comment: More info (none good, but some worse so boosted priority): + Happens under release and debug builds. + Have not been able to provoke when starting in the debugger. + Ctrl+Alt+Del and killing Winoldap is not enough to clean everything up. There's still a Python (or Python_d) process hanging around that Ctrl+Alt+Del doesn't show. + This process makes it impossible to delete the associated Python .dll, and in particular makes it impossible to rebuild Python successfully without a reboot. + These processes cannot be killed! Wintop and Process Viewer both fail to get the job done. PrcView (a freeware process viewer) itself locks up if I try to kill them using it. Process Viewer freezes for several seconds before giving up. + Attempting to attach to the process with the MSVC debugger (in order to find out what the heck it's doing) finds the process OK, but then yields the cryptic and undocumented error msg "Cannot execute program". + The processes are not accumulating cycles. + Smells like deadlock. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116289&group_id=5470 From noreply@sourceforge.net Wed Dec 13 02:28:51 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Dec 2000 18:28:51 -0800 Subject: [Python-bugs-list] [Bug #110838] Inverse hyperbolic functions in cmath module (PR#231) Message-ID: Bug #110838, was updated on 2000-Aug-01 14:15 Here is a current snapshot of the bug. Project: Python Category: Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 1 Submitted by: nobody Assigned to : tim_one Summary: Inverse hyperbolic functions in cmath module (PR#231) Details: Jitterbug-Id: 231 Submitted-By: nadavh@envision.co.il Date: Fri, 10 Mar 2000 18:35:07 -0500 (EST) Version: 1.52 OS: NT 4.0 SP4 1. The function cmath.acosh provides the negative branch with low precision. For example: >>> cmath.acosh(cmath.cosh(10.0)) (-10.0000000135+0j) Proposed solution --- use the following formula which is precise and avoids singularities with complex arguments: def acosh(x): return 2.0*log(sqrt(x+1.0) + sqrt(x-1.0)) - log(2.0) 2. The function cmath.sinh does not handle moderately large arguments. For example: >>> cmath.asinh(cmath.sinh(20.0)) (1.#INF+0j) Proposed solution: Use the textbook formula: def asinh(x): return log(x+sqrt(x*x+1.0)) This calculation is more limited then the acosh calculation, but still works fine. ==================================================================== Audit trail: Mon Apr 03 18:38:28 2000 guido changed notes Mon Apr 03 18:38:28 2000 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-12 18:28 By: tim_one Comment: Hmm! According to CVS, Guido checked in Nadav's changes at the end of June, cmathmodule.c rev 2.13. Changing to Closed and Fixed accordingly. ------------------------------------------------------- Date: 2000-Dec-12 12:54 By: gvanrossum Comment: I've added this feature request to PEP 42. 
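For reference, the two formulas proposed in the report can be spelled out as a small sketch on top of cmath. This is simply the reporter's proposal written as runnable Python, not the change that was actually checked in as cmathmodule.c rev 2.13:

    import cmath

    def acosh(z):
        # Proposed formula; per the report it avoids the low-precision
        # negative branch for arguments like cmath.cosh(10.0).
        return 2.0*cmath.log(cmath.sqrt(z + 1.0) + cmath.sqrt(z - 1.0)) - cmath.log(2.0)

    def asinh(z):
        # Proposed textbook formula, log(z + sqrt(z*z + 1)); per the report
        # it handles moderately large arguments such as cmath.sinh(20.0)
        # without overflowing to infinity.
        return cmath.log(z + cmath.sqrt(z*z + 1.0))

With these definitions, a round trip such as acosh(cmath.cosh(10.0)) comes back near +10 rather than on the negative branch shown in the report.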
------------------------------------------------------- Date: 2000-Aug-01 17:38 By: jhylton Comment: I think this bug should be left open, but perhaps a new bug should be created for the general feature request "re-write cmath in python." It's up to you, Tim. ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: Might be a good idea. Waiting for patches. ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "Tim Peters" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Sat, 11 Mar 2000 13:47:25 -0500 [Tim] > C doesn't define any functions on complex numbers -- cmathmodule.c > implements these all on its own. [David Ascher] > As an aside, if anyone ever wants to trim the number of builtin C > modules, I found that it was much easier to write cmath.py than to > write cmath.java (for JPython). The same cmath.py should work fine > in CPython. Yes, I don't see anything in cmathmodule.c that *needs* to be coded in C; & coding would be much clearer in Python, using infix notation for the basic complex binary ops. Two possible reasons for leaving it in C: 1. Lower internal call overheads (i.e., speed). 2. Improving quality -- complex libraries are very difficult to get right in all cases if they're made IEEE-754 aware, and doing so requires fiddling with the processor-level 754 control & status features. But there's no portable way to do that now, and won't be until the next iteration of C. > I can dig it up, but I can't swear that I used the most numerically stable > algorithms. I can: you didn't . Doesn't matter, though! cmathmodule.c is naive too, and achieving good accuracy across the whole domain is a major undertaking. That gives the best reason to write it in Python: 3. There's a long way to go to make this "industrial strength", so the current cmath is really just a prototype. Everyone knows prototyping is much easier in Python. QED . > It did give the same numbers as CPython's cmath on a test set. So ship it . ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "David Ascher" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 21:49:27 -0800 > [Nadav Horesh, suggests different algorithms in cmath, for some complex > inverse hyperbolics] > > [Guido, misfires] > > We're just using the VC++ C library. > > C doesn't define any functions on complex numbers -- cmathmodule.c > implements these all on its own. I can't make time to look at > this now, but > complaining to Microsoft about this will do Nadav even less good than when > it *is* their problem . As an aside, if anyone ever wants to trim the number of builtin C modules, I found that it was much easier to write cmath.py than to write cmath.java (for JPython). The same cmath.py should work fine in CPython. I can dig it up, but I can't swear that I used the most numerically stable algorithms. It did give the same numbers as CPython's cmath on a test set. -david ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "Tim Peters" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 22:05:07 -0500 [Nadav Horesh, suggests different algorithms in cmath, for some complex inverse hyperbolics] [Guido, misfires] > We're just using the VC++ C library. 
C doesn't define any functions on complex numbers -- cmathmodule.c implements these all on its own. I can't make time to look at this now, but complaining to Microsoft about this will do Nadav even less good than when it *is* their problem . ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: "David Ascher" Subject: RE: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 16:47:23 -0800 > We're just using the VC++ C library. I suggest you send your bug > report to Microsoft. FWIW: the Perl folks are more and more (it seems to me) redoing things themselves if the C library tends to be broken or slow. I'm not suggesting that it's a good decision, just commenting. --david ------------------------------------------------------- Date: 2000-Aug-01 14:15 By: nobody Comment: From: Guido van Rossum Subject: Re: [Python-bugs-list] Inverse hyperbolic functions in cmath module (PR#231) Date: Fri, 10 Mar 2000 18:44:01 -0500 > Full_Name: Nadav Horesh > Version: 1.52 > OS: NT 4.0 SP4 > Submission from: (NULL) (212.25.119.223) > > > 1. The function cmath.acosh provides the negative branch with low > precision. For example: > > >>> cmath.acosh(cmath.cosh(10.0)) > (-10.0000000135+0j) > > Proposed solution --- use the following formula which is precise and > avoids singularities with complex arguments: > > def acosh(x): > return 2.0*log(sqrt(x+1.0) + sqrt(x-1.0)) - log(2.0) > > 2. The function cmath.sinh does not handle moderately large > arguments. For example: > > >>> cmath.asinh(cmath.sinh(20.0)) > (1.#INF+0j) > > Proposed solution: > > Use the textbook formula: > def asinh(x): > return log(x+sqrt(x*x+1.0)) > > This calculation is more limited then the acosh calculation, but > still works fine. We're just using the VC++ C library. I suggest you send your bug report to Microsoft. --Guido van Rossum (home page: http://www.python.org/~guido/) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110838&group_id=5470 From noreply@sourceforge.net Wed Dec 13 10:38:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 02:38:04 -0800 Subject: [Python-bugs-list] [Bug #125598] Confusing KeyError-Message when key is tuple of size 1 Message-ID: Bug #125598, was updated on 2000-Dec-13 02:38 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: Feature Request Priority: 5 Submitted by: murple Assigned to : nobody Summary: Confusing KeyError-Message when key is tuple of size 1 Details: Following caused some confusion for me: >>> dic = {1:1,2:"bla"} >>> dic[1] 1 >>> b = (1,) #1000 lines of code >>> dic[b] Traceback (innermost last): File "", line 1, in ? KeyError: 1 # This should be KeyError: (1,) # because 1 is a valid key for dic >>> dic[(1,2)] Traceback (innermost last): File "", line 1, in ? KeyError: (1, 2) >>> For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125598&group_id=5470 From noreply@sourceforge.net Wed Dec 13 13:34:56 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 05:34:56 -0800 Subject: [Python-bugs-list] [Bug #125610] SuppReq: please elaborate on your email notif. requests Message-ID: Bug #125610, was updated on 2000-Dec-13 05:34 Here is a current snapshot of the bug. 
Project: Python Category: None Status: Open Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: pfalcon Assigned to : nobody Summary: SuppReq: please elaborate on your email notif. requests Details: We've got the task "Python requests" http://sourceforge.net/pm/task.php?func=detailtask&project_task_id=22577&group_id=1&group_project_id=2 . I believe bigdisk knows what that means but I think I could do that faster, so I'd like to have information from the original source. Please give specific examples how you want it to be. Thanks. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125610&group_id=5470 From noreply@sourceforge.net Wed Dec 13 14:09:48 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 06:09:48 -0800 Subject: [Python-bugs-list] [Bug #125598] Confusing KeyError-Message when key is tuple of size 1 Message-ID: Bug #125598, was updated on 2000-Dec-13 02:38 Here is a current snapshot of the bug. Project: Python Category: Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: murple Assigned to : bwarsaw Summary: Confusing KeyError-Message when key is tuple of size 1 Details: Following caused some confusion for me: >>> dic = {1:1,2:"bla"} >>> dic[1] 1 >>> b = (1,) #1000 lines of code >>> dic[b] Traceback (innermost last): File "", line 1, in ? KeyError: 1 # This should be KeyError: (1,) # because 1 is a valid key for dic >>> dic[(1,2)] Traceback (innermost last): File "", line 1, in ? KeyError: (1, 2) >>> Follow-Ups: Date: 2000-Dec-13 06:09 By: gvanrossum Comment: This seems a problem in exception reporting. I can reproduce it as follows: >>> raise KeyError, (1,) Traceback (most recent call last): File "", line 1, in ? KeyError: 1 >>> Assigned to Barry since he's the master of this code.] ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125598&group_id=5470 From noreply@sourceforge.net Wed Dec 13 14:17:41 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 06:17:41 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: Wont Fix Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-13 06:17 By: loewis Comment: The --with-cxx flag is designed to support extension modules written in C++. In some compilation systems, compiling any object file with C++ requires that the main function is compiled and linked with the C++ compiler. For example, on an a.out system, with g++, g++ will generate a call to __main as the first thing in main(), to allow for construction of global objects. On an advanced compilation system (e.g. ELF, or Win32), this is not necessary - global objects will be constructed even if main was not compiled with a C++ compiler. 
I believe the sole purpose of --with-cxx flag is to support that case; I can't emagine any other reason to use it. Since such requirement of the C++ compiler is becoming rare, I don't think there is a need to change the behaviour of the Python configure.in. So the real bug is that --with-cxx was not documented; that is corrected in README 1.107. ------------------------------------------------------- Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Wed Dec 13 14:20:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 06:20:58 -0800 Subject: [Python-bugs-list] [Bug #125610] SuppReq: please elaborate on your email notif. requests Message-ID: Bug #125610, was updated on 2000-Dec-13 05:34 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: pfalcon Assigned to : nobody Summary: SuppReq: please elaborate on your email notif. requests Details: We've got the task "Python requests" http://sourceforge.net/pm/task.php?func=detailtask&project_task_id=22577&group_id=1&group_project_id=2 . I believe bigdisk knows what that means but I think I could do that faster, so I'd like to have information from the original source. Please give specific examples how you want it to be. Thanks. Follow-Ups: Date: 2000-Dec-13 06:20 By: gvanrossum Comment: OK, I'll clarify. Note that this applies both to the patch and the bugs products. 1. Word wrap: the comments entered in the database for bugs & patches are often entered with a single very long line per paragraph. When the notification email is sent out, most Unix mail readers don't wrap words correctly. The request is to break any line that is longer than 79 characters in shorter pieces, the way e.g. ESC-q does in Emacs, or the fmt(1) program. 2. clickable submitter name: in the patch or bug details page, the submitter ("Submitted By" field) should be a hyperlink to the developer profile for that user (except if it is Nobody, of course). 3. mention what changed in the email: it would be nice if at the top of the notification email it said what caused the mail to be sent, e.g. "status changed from XXX to YYY" or "assiged to ZZZ" or "new comment added by XXX" or "new patch uploaded" or "priority changed to QQQ". If more than one field changed they should all be summarized. Hope this helps! Thanks for doing this. We love our SourceForge! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125610&group_id=5470 From noreply@sourceforge.net Wed Dec 13 14:27:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 06:27:40 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: Wont Fix Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. 
Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-13 06:27 By: gvanrossum Comment: Reopening, because of one remaining issue. I just checked in changes to Modules/makesetup and Misc/Makefile.pre.in to use $(CXX) instead of $(CCC) for the C++ compiler, since CCC doesn't seem to be defined. However this only works if --with-cxx is used; otherwise CXX is not defined either. There was a bug report about this, #124478. The problem is, CXX extensions using the Makefile.pre.in mechanism don't work out of the box unless --with-cxx is used. I don't care if the --with-cxx option is changed (probably better not), but even if it isn't, the CXX variable should be given a default value if a C++ compiler can be guessed (I bet trying g++ when we're using GCC would take care of 90% of the problem :-). ------------------------------------------------------- Date: 2000-Dec-13 06:17 By: loewis Comment: The --with-cxx flag is designed to support extension modules written in C++. In some compilation systems, compiling any object file with C++ requires that the main function is compiled and linked with the C++ compiler. For example, on an a.out system, with g++, g++ will generate a call to __main as the first thing in main(), to allow for construction of global objects. On an advanced compilation system (e.g. ELF, or Win32), this is not necessary - global objects will be constructed even if main was not compiled with a C++ compiler. I believe the sole purpose of --with-cxx flag is to support that case; I can't emagine any other reason to use it. Since such requirement of the C++ compiler is becoming rare, I don't think there is a need to change the behaviour of the Python configure.in. So the real bug is that --with-cxx was not documented; that is corrected in README 1.107. ------------------------------------------------------- Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Wed Dec 13 15:27:08 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 07:27:08 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: Wont Fix Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-13 07:27 By: loewis Comment: I've uploaded patch 102817, which runs something like AC_PROG_CXX. We can't use that directly, as it fails if no C++ compiler is found. 
Also, if -with-cxx is given, no attempt to autodetermine a C++ compiler is made. ------------------------------------------------------- Date: 2000-Dec-13 06:27 By: gvanrossum Comment: Reopening, because of one remaining issue. I just checked in changes to Modules/makesetup and Misc/Makefile.pre.in to use $(CXX) instead of $(CCC) for the C++ compiler, since CCC doesn't seem to be defined. However this only works if --with-cxx is used; otherwise CXX is not defined either. There was a bug report about this, #124478. The problem is, CXX extensions using the Makefile.pre.in mechanism don't work out of the box unless --with-cxx is used. I don't care if the --with-cxx option is changed (probably better not), but even if it isn't, the CXX variable should be given a default value if a C++ compiler can be guessed (I bet trying g++ when we're using GCC would take care of 90% of the problem :-). ------------------------------------------------------- Date: 2000-Dec-13 06:17 By: loewis Comment: The --with-cxx flag is designed to support extension modules written in C++. In some compilation systems, compiling any object file with C++ requires that the main function is compiled and linked with the C++ compiler. For example, on an a.out system, with g++, g++ will generate a call to __main as the first thing in main(), to allow for construction of global objects. On an advanced compilation system (e.g. ELF, or Win32), this is not necessary - global objects will be constructed even if main was not compiled with a C++ compiler. I believe the sole purpose of --with-cxx flag is to support that case; I can't emagine any other reason to use it. Since such requirement of the C++ compiler is becoming rare, I don't think there is a need to change the behaviour of the Python configure.in. So the real bug is that --with-cxx was not documented; that is corrected in README 1.107. ------------------------------------------------------- Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Wed Dec 13 15:54:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 07:54:21 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: Wont Fix Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-13 07:54 By: gvanrossum Comment: Closed again. Thanks! ------------------------------------------------------- Date: 2000-Dec-13 07:27 By: loewis Comment: I've uploaded patch 102817, which runs something like AC_PROG_CXX. We can't use that directly, as it fails if no C++ compiler is found. 
Also, if -with-cxx is given, no attempt to autodetermine a C++ compiler is made. ------------------------------------------------------- Date: 2000-Dec-13 06:27 By: gvanrossum Comment: Reopening, because of one remaining issue. I just checked in changes to Modules/makesetup and Misc/Makefile.pre.in to use $(CXX) instead of $(CCC) for the C++ compiler, since CCC doesn't seem to be defined. However this only works if --with-cxx is used; otherwise CXX is not defined either. There was a bug report about this, #124478. The problem is, CXX extensions using the Makefile.pre.in mechanism don't work out of the box unless --with-cxx is used. I don't care if the --with-cxx option is changed (probably better not), but even if it isn't, the CXX variable should be given a default value if a C++ compiler can be guessed (I bet trying g++ when we're using GCC would take care of 90% of the problem :-). ------------------------------------------------------- Date: 2000-Dec-13 06:17 By: loewis Comment: The --with-cxx flag is designed to support extension modules written in C++. In some compilation systems, compiling any object file with C++ requires that the main function is compiled and linked with the C++ compiler. For example, on an a.out system, with g++, g++ will generate a call to __main as the first thing in main(), to allow for construction of global objects. On an advanced compilation system (e.g. ELF, or Win32), this is not necessary - global objects will be constructed even if main was not compiled with a C++ compiler. I believe the sole purpose of --with-cxx flag is to support that case; I can't emagine any other reason to use it. Since such requirement of the C++ compiler is becoming rare, I don't think there is a need to change the behaviour of the Python configure.in. So the real bug is that --with-cxx was not documented; that is corrected in README 1.107. ------------------------------------------------------- Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From noreply@sourceforge.net Wed Dec 13 15:59:50 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 07:59:50 -0800 Subject: [Python-bugs-list] [Bug #117028] Installation fails if prefix directory does not exist. Message-ID: Bug #117028, was updated on 2000-Oct-16 13:20 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: Wont Fix Bug Group: Not a Bug Priority: 5 Submitted by: hove Assigned to : gvanrossum Summary: Installation fails if prefix directory does not exist. Details: I just downloaded and installed Python-2.0c1. I configured with : bash% ./configure --prefix=/local/python/python-2.0c1 The directory /local/python/python-2.0c1 did not exist. The install program did not offer to make this directory, and hence make install failed. I have made a fix for this (I'd be overjoyed if you would consider using it!). 
bash% diff -Naur Makefile.in.new Makefile.in.old : --- Makefile.in.new Mon Oct 16 22:10:23 2000 +++ Makefile.in.old Mon Oct 16 22:06:53 2000 @@ -102,7 +102,6 @@ INSTALL= @srcdir@/install-sh -c INSTALL_PROGRAM=${INSTALL} -m $(EXEMODE) INSTALL_DATA= ${INSTALL} -m $(FILEMODE) -INSTALL_DIR= @srcdir@/install-sh -d -m $(DIRMODE) # Use this to make a link between python$(VERSION) and python in $(BINDIR) LN=@LN@ @@ -218,12 +217,7 @@ PYTHONPATH= $(TESTPYTHON) $(TESTPROG) $(TESTOPTS) # Install everything -install: prefixdir altinstall bininstall maninstall - -# Make the prefixdir if it does not already exist - logic handled by the install-sh script -prefixdir: - $(INSTALL_DIR) @prefix@ - $(INSTALL_DIR) @exec_prefix@ +install: altinstall bininstall maninstall # Install almost everything without disturbing previous versions altinstall: altbininstall libinstall inclinstall libainstall sharedinstall bash% diff -Naur install-sh.new install-sh.old: --- install-sh.new Mon Oct 16 21:44:30 2000 +++ install-sh.old Thu Aug 13 18:08:45 1998 @@ -43,10 +43,6 @@ shift continue;; - -d) instcmd="mkdir" - shift - continue;; - -m) chmodcmd="$chmodprog $2" shift shift @@ -83,31 +79,10 @@ exit 1 fi - -if [ "$instcmd" = "mkdir" ] -then - dirlist=`echo $src | sed 's/\// /g'` - cd /; - currentdir="/" - for dir in $dirlist - do - currentdir="$currentdir$dir" - if [ ! -d "$dir" ] - then - echo "Creating directory "$currentdir - mkdir $dir - $chmodcmd $dir - fi - cd $dir - currentdir="$currentdir/" - done - exit 0 -fi - - if [ x"$dst" = x ] then echo "install: no destination specified" + exit 1 fi @@ -143,4 +118,3 @@ exit 0 - Follow-Ups: Date: 2000-Dec-13 07:59 By: gvanrossum Comment: I get this a lot. In my opinion it is *not* a good idea to let "make install" create the prefix directory if it doesn't exist. This is a safeguard against typos: if you (as root) try to say "make prefix=/usr/local install" and you make a typo in the pathname, it would create a meaningless directory. Won't fix, status closed. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117028&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:00:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:00:17 -0800 Subject: [Python-bugs-list] [Bug #119709] POLLIN undefined Message-ID: Bug #119709, was updated on 2000-Oct-29 16:20 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 1 Submitted by: jsfrank Assigned to : akuchling Summary: POLLIN undefined Details: I use: Slackware96 Linux 2.0.0 gcc 2.7.2 glibc2 PC/i486 And I try to install Python-2.0.tar.gz package. I use default Modules/Setup. When run: #./configure #make gcc tells me in Modules/selectmodule.c, begins from the 345 line, POLLIN undeclared,... Every POLL* name follows are all undeclared. Which header file lost? poll.h? Or something wrong? Thanks. Follow-Ups: Date: 2000-Dec-12 13:19 By: gvanrossum Comment: Did the user ever reply? If not, let's close this one. There are too many potential configuration problems lingering around in the Bugs list that are probably not bugs in Python... ------------------------------------------------------- Date: 2000-Nov-03 12:09 By: akuchling Comment: Can you provide the exact output from make, please, and a copy of the config.h generated by Python's configure script? 
It's possible that both HAVE_POLL_H and HAVE_POLL are defined but the header files are wrong in some way that POLLIN isn't defined. You can provide the output and config.h via private e-mail to akuchlin@mems-exchange.org. ------------------------------------------------------- Date: 2000-Nov-02 20:16 By: fdrake Comment: I think this has been fixed post-2.0, but I'm not sure. Assigned to Andrew since he'll know and, if it's not fixed, will be the one to do so. ;-) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119709&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:01:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:01:21 -0800 Subject: [Python-bugs-list] [Bug #122690] invalid default module path Message-ID: Bug #122690, was updated on 2000-Nov-17 07:35 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: Works For Me Bug Group: Irreproducible Priority: 1 Submitted by: nobody Assigned to : gvanrossum Summary: invalid default module path Details: I have installed Python2.0 in a Linux 2.2 box under /usr/local/python/ I can define PYTHONHOME and PYTHONPATH pointing to the correct places and everything will work fine, but If I just do to include "/usr/local/python/bin/" in the PATH, python is unable to find any module from the standart module set. Is this the correct behaviour? Follow-Ups: Date: 2000-Dec-13 08:01 By: gvanrossum Comment: Haven't received a followup from the (anonymous) user. Probably a system configuration error. Closing this for lack of more info. ------------------------------------------------------- Date: 2000-Nov-27 13:21 By: gvanrossum Comment: This is not correct behavior, but the problem is probably in how your system is set up. Is PATH exported? Are there other Python installations? Are all the permissions set correctly? The best clue may be: what is sys.path when it doesn't work? (It should still be able to import "sys" even if everything else fails). Also try python -v. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=122690&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:02:59 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:02:59 -0800 Subject: [Python-bugs-list] [Bug #121791] Error for bad \x escape doesn't mention filename Message-ID: Bug #121791, was updated on 2000-Nov-06 08:34 Here is a current snapshot of the bug. Project: Python Category: Parser/Compiler Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : tim_one Summary: Error for bad \x escape doesn't mention filename Details: Using: GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 I get the following 'error' message: from interscript.languages.interscript_languages import add_translation File "interscript/languages/interscript_languages.py", line 2, in ? from interscript.encoding.utf8 import utf8 ValueError: invalid \x escape in known correct code (i.e. it works on Python 1.5.2). I have examined the function 'parsestr' in 'compile.c', and added debugging prints to find out what is going on. The function _correctly_ processes the string 'utf8' (quotes included), and returns, then the error is generated _without_ entering the routine! This almost certainly must be a bug in egcs-2.91.66. 
The code in 'parsestr' looks correct to me. It is possible the error can be replicated by downloading and running 'interscript' (without any arguments). Interscript is available at http://interscript.sourceforge.net [Reply to skaller@maxtal.com.au, sorry, I couldn't figure out how to 'log on'] Follow-Ups: Date: 2000-Dec-13 08:02 By: gvanrossum Comment: Tim, I remember you were looking into this. Any luck? ------------------------------------------------------- Date: 2000-Nov-13 14:51 By: tim_one Comment: Just noting that this is a bit of a mess to repair: no "2nd phase" compile-time errors report file names or line numbers unless they're SyntaxErrors. The bad \x escape here is one path thru that code; bad \x escapes in Unicode strings are another; likewise for OverflowError due to "too large" integer literal. A fix is in progress. ------------------------------------------------------- Date: 2000-Nov-06 09:04 By: gvanrossum Comment: The error message is legitimate: in Python 2.0, \x escapes must have exactly two hex characters following, and he uses \x0\x0 in his __init__.py module, which generates the error message. But his bug report is also legitimate: the ValueError doesn't mention the file where this is occurring! I'm changing the bug subject to reflect this -- it has nothing to do with egcs 2.91.66. I'm randomly assigning this to Tim. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121791&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:19:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:19:17 -0800 Subject: [Python-bugs-list] [Bug #121208] Advanced email module Message-ID: Bug #121208, was updated on 2000-Nov-03 05:04 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Closed Resolution: None Bug Group: Feature Request Priority: 5 Submitted by: holtwick Assigned to : bwarsaw Summary: Advanced email module Details: A nice new module to have would be a mime enabled email composer. I posted a sample to SourceForge that is still very basic: http://sourceforge.net/snippet/detail.php?type=snippet&id=100444 Follow-Ups: Date: 2000-Dec-13 08:19 By: gvanrossum Comment: I've added this feature request to PEP 42. ------------------------------------------------------- Date: 2000-Nov-03 12:49 By: gvanrossum Comment: Hey, I didn't know about the SD "Snippet" feature! Cool! But I'm not sure that this should be a standard library -- it's pretty application specific. Any other opinions? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121208&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:21:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:21:34 -0800 Subject: [Python-bugs-list] [Bug #120983] python2.0 dumps core in gc_list_remove Message-ID: Bug #120983, was updated on 2000-Nov-01 01:17 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: ephedra Assigned to : nascheme Summary: python2.0 dumps core in gc_list_remove Details: Source: downloaded from http://www.python.org OS: freebsd 4.1 Compilation options: default (does not occur when compiled with --without-cycle-gc) Observed while: running Zope2-cvs. 
I have not tested this on other operating systems, but it seems reproducible, if intermittent on freebsd. I will keep the binary and the corefile in case further information is needed. some information extracted with gdb: #0 0x80a8ffb in gc_list_remove (node=0x89eab40) at ./gcmodule.c:88 ---Type to continue, or q to quit--- 88 node->gc_next->gc_prev = node->gc_prev; (gdb) p node $1 = (struct _gc_head *) 0x89eab40 (gdb) p node->gc_next $2 = (struct _gc_head *) 0x0 #0 0x80a8ffb in gc_list_remove (node=0x89eab40) at ./gcmodule.c:88 #1 0x80a9ac3 in _PyGC_Remove (op=0x89eab40) at ./gcmodule.c:523 #2 0x807e01d in instance_dealloc (inst=0x89eab4c) at classobject.c:552 #3 0x808ea46 in insertdict (mp=0x89f004c, key=0x89e3ba8, hash=134733596, value=0x8064d13) at dictobject.c:343 #4 0x808ee01 in PyDict_SetItem (op=0x89f004c, key=0x89e3ba8, value=0x807df1c) at dictobject.c:477 #5 0x2836e33c in subclass_simple_setattro (self=0x89ea900, name=0x8835760, v=0x89ead6c) at ./../Components/ExtensionClass/ExtensionClass.c:2174 #6 0x283914cc in _setattro (self=0x89ea900, oname=0x8835760, v=0x89ead6c, setattrf=0x2836e2cc ) at ./cPersistence.c:661 #7 0x283915d0 in Per_setattro (self=0x89ea900, oname=0x8835760, v=0x89ead6c) at ./cPersistence.c:701 #8 0x80926c5 in PyObject_SetAttr (v=0x89eab40, name=0x89e3ba8, value=0x807df1c) at object.c:767 #9 0x283ae5df in Wrapper_setattro (self=0x8856f70, oname=0x8835760, v=0x89ead6c) at ./../Components/ExtensionClass/Acquisition.c:600 ... (gdb) up #1 0x80a9ac3 in _PyGC_Remove (op=0x89eab40) at ./gcmodule.c:523 523 gc_list_remove(g); (gdb) p *g $4 = {gc_next = 0xc, gc_prev = 0x80db600, gc_refs = 7} (gdb) up #2 0x807e01d in instance_dealloc (inst=0x89eab4c) at classobject.c:552 552 PyObject_GC_Fini(inst); (gdb) p *inst $6 = {ob_refcnt = 0, ob_type = 0x80d89e0, in_class = 0x88a790c, in_dict = 0x89eeccc} (gdb) p *inst->ob_type $7 = {ob_refcnt = 10, ob_type = 0x80db740, ob_size = 0, tp_name = 0x80cb646 "instance", tp_basicsize = 28, tp_itemsize = 0, tp_dealloc = 0x807df1c , tp_print = 0, tp_getattr = 0, tp_setattr = 0, tp_compare = 0x807e860 , tp_repr = 0x807e690 , tp_as_number = 0x80d8940, tp_as_sequence = 0x80d8900, tp_as_mapping = 0x80d88ec, tp_hash = 0x807e93c , tp_call = 0, tp_str = 0, tp_getattro = 0x807e278 , tp_setattro = 0x807e388 , tp_as_buffer = 0x0, tp_flags = 15, tp_doc = 0x0, tp_traverse = 0x807ead4 , tp_clear = 0, tp_xxx7 = 0, tp_xxx8 = 0} (gdb) p *inst->in_class $8 = {ob_refcnt = 4, ob_type = 0x80d8880, cl_bases = 0x80fbcac, cl_dict = 0x88a794c, cl_name = 0x88a54c0, cl_getattr = 0x0, cl_setattr = 0x0, cl_delattr = 0x0} Follow-Ups: Date: 2000-Dec-13 08:21 By: gvanrossum Comment: Neil, this is the only complaint about this. It may well be a user error. Try direct mail to the submitter; if he doesn't reply or doesn't provide new information, you can close the bug report. ------------------------------------------------------- Date: 2000-Nov-17 05:54 By: nascheme Comment: Tobias, is this core dump still occuring? If it is, can you provide some details on how to reproduce it? ------------------------------------------------------- Date: 2000-Nov-01 07:57 By: jhylton Comment: >From a cursory glance, I would guess this is a problem with the extension classes used by Zope, not with the garbage collector. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=120983&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:25:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:25:15 -0800 Subject: [Python-bugs-list] [Bug #125673] PyThreadState_Delete: invalid tstate Message-ID: Bug #125673, was updated on 2000-Dec-13 08:25 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: gvanrossum Assigned to : tim_one Summary: PyThreadState_Delete: invalid tstate Details: I am working on a simple couroutine/generator package using threads, to prototype the API. It seems to be working fine, except it is exposing a hard-to-find bug in the threadstate code. The following script[*] contains the API implementation and a simple example based on Tim's "fringe()" code. When I run the example, I *sometimes* get: Segmentation fault but *sometimes* I get: Fatal Python error: PyThreadState_Delete: invalid tstate Aborted and *sometimes* it succeeds. If I uncomment the raw_input("Exit?") line at the end I never get an error. The error behavior seems very fickle: making almost arbitrary changes to the code can trigger it or make it go away. When I run it under gdb, I cannot reproduce the problen, ever. (Haven't I heard this before?) The only clue is the fatal error message: it seems to be a race condition at thread termination. But how to debug this? _____ [*] I'm not including the script here. I can mail it to interested parties though. For my own reference: Subject: [Pycabal] Mysterious thread bug To: Date: Thu, 16 Nov 2000 16:21:12 -0500 For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125673&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:29:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:29:29 -0800 Subject: [Python-bugs-list] [Bug #117464] clash with BSD db when building Message-ID: Bug #117464, was updated on 2000-Oct-23 00:29 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: fleury Assigned to : montanaro Summary: clash with BSD db when building Details: On RH7.0, beside the LONG_BIT issue, I came across a db.h missmatch. The preprocessor directive values used in bsddbmodule.c correspond to the /usr/include/db1/db.h file, but on my system, /usr/include/db.h points to db3/db.h which does not define the same set of values, and does not compile. Compiling with the old file works, but obviously it does not link... configure (with no options) ran trouble free. The system is: RedHat 7.0 gcc version 2.96 20000731 (yes with the LONG_BIT problem) Python 2.0 final release Regards, Pascal Follow-Ups: Date: 2000-Nov-06 08:40 By: montanaro Comment: This is going to require a bit of effort. My current scheme for detecting whether bsddb can be built/linked or not relies on the presence or absence of db.h and/or db_185.h. If db_185.h is present, libdb v.2 is assumed. If only db.h is present, libdb v.1 is assumed. Now Sleepycat has libdb v.3, and on RH7 it appears you can have all three versions installed at once. I don't yet know if bsddbmodule.c can be built/linked with v.3 (seems likely, since db_185.h still existts), but even if it can, configure will have to grovel around in db.h looking for DB_VERSION_MAJOR. 
If it doesn't exist, we have v.1. If it does exist, its value will determine what version > 1 we have. I imagine for an autoconf whiz this will be a simple task, but it's more of a challenge than I have time for at the moment. Anyone want to take this on? ------------------------------------------------------- Date: 2000-Oct-23 18:13 By: fleury Comment: Well, I also tried at home, where I have a vanilla RH7.0 and it compiles perfectly. The reported bug was on a RH6.2->RH7.0 upgraded machine. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117464&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:51:07 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:51:07 -0800 Subject: [Python-bugs-list] [Bug #125673] PyThreadState_Delete: invalid tstate (Unix only?) Message-ID: Bug #125673, was updated on 2000-Dec-13 08:25 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gvanrossum Assigned to : gstein Summary: PyThreadState_Delete: invalid tstate (Unix only?) Details: I am working on a simple couroutine/generator package using threads, to prototype the API. It seems to be working fine, except it is exposing a hard-to-find bug in the threadstate code. The following script[*] contains the API implementation and a simple example based on Tim's "fringe()" code. When I run the example, I *sometimes* get: Segmentation fault but *sometimes* I get: Fatal Python error: PyThreadState_Delete: invalid tstate Aborted and *sometimes* it succeeds. If I uncomment the raw_input("Exit?") line at the end I never get an error. The error behavior seems very fickle: making almost arbitrary changes to the code can trigger it or make it go away. When I run it under gdb, I cannot reproduce the problen, ever. (Haven't I heard this before?) The only clue is the fatal error message: it seems to be a race condition at thread termination. But how to debug this? _____ [*] I'm not including the script here. I can mail it to interested parties though. For my own reference: Subject: [Pycabal] Mysterious thread bug To: Date: Thu, 16 Nov 2000 16:21:12 -0500 Follow-Ups: Date: 2000-Dec-13 08:51 By: tim_one Comment: I was never able to provoke a problem on Windows using Guido's script, so changed Group to Platform-specific and added "(Linux only?)" to Summary. 
Here's the script; assigned to Greg under the hope he can provoke a problem: import thread class EarlyExit(Exception): pass class main_coroutine: def __init__(self): self.id = 0 self.caller = None self.value = None self.lock = thread.allocate_lock() self.lock.acquire() self.done = 0 def __call__(self, value=None): cur = current() assert cur is not self self.caller = cur self.value = value self.lock.release() cur.lock.acquire() if self.done: raise EarlyExit return cur.value all_coroutines = {thread.get_ident(): main_coroutine()} def current(): return all_coroutines[thread.get_ident()] def suspend(value=None): cur = current() caller = cur.caller assert caller and caller is not cur caller.value = value caller.lock.release() cur.lock.acquire() return cur.value nextid = 1 class coroutine(main_coroutine): def __init__(self, func, *args): global nextid self.id = nextid nextid = nextid + 1 self.caller = current() boot = thread.allocate_lock() boot.acquire() thread.start_new_thread(self.run, (boot, func, args)) boot.acquire() def run(self, boot, func, args): me = thread.get_ident() all_coroutines[me] = self self.lock = thread.allocate_lock() self.lock.acquire() self.done = 0 boot.release() self.lock.acquire() if self.value: print "Warning: initial value %s ignored" % `value` try: apply(func, args) finally: del all_coroutines[me] self.done = 1 self.caller.lock.release() def fringe(list): tl = type(list) for item in list: if type(item) is tl: fringe(item) else: suspend(item) def printinorder(list): c = coroutine(fringe, list) try: while 1: print c(), except EarlyExit: pass print if __name__ == '__main__': printinorder([1,2,3]) l = [1,2,[3,4,[5],6]] printinorder(l) #raw_input("Exit?") ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125673&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:27:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:27:20 -0800 Subject: [Python-bugs-list] [Bug #125610] SuppReq: please elaborate on your email notif. requests Message-ID: Bug #125610, was updated on 2000-Dec-13 05:34 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: pfalcon Assigned to : gvanrossum Summary: SuppReq: please elaborate on your email notif. requests Details: We've got the task "Python requests" http://sourceforge.net/pm/task.php?func=detailtask&project_task_id=22577&group_id=1&group_project_id=2 . I believe bigdisk knows what that means but I think I could do that faster, so I'd like to have information from the original source. Please give specific examples how you want it to be. Thanks. Follow-Ups: Date: 2000-Dec-13 08:27 By: gvanrossum Comment: One more thing: it would be really handy if there was a box *somewhere* (maybe in the left margin?) where you could type a bug_id or patch_id and click OK to go directly to the details page of that item. We all need this regularly, and we all use the hack of editing the URL in "Location" field of the browser. There's *got* to be a better way. :-) ------------------------------------------------------- Date: 2000-Dec-13 06:20 By: gvanrossum Comment: OK, I'll clarify. Note that this applies both to the patch and the bugs products. 1. Word wrap: the comments entered in the database for bugs & patches are often entered with a single very long line per paragraph. 
When the notification email is sent out, most Unix mail readers don't wrap words correctly. The request is to break any line that is longer than 79 characters in shorter pieces, the way e.g. ESC-q does in Emacs, or the fmt(1) program. 2. clickable submitter name: in the patch or bug details page, the submitter ("Submitted By" field) should be a hyperlink to the developer profile for that user (except if it is Nobody, of course). 3. mention what changed in the email: it would be nice if at the top of the notification email it said what caused the mail to be sent, e.g. "status changed from XXX to YYY" or "assiged to ZZZ" or "new comment added by XXX" or "new patch uploaded" or "priority changed to QQQ". If more than one field changed they should all be summarized. Hope this helps! Thanks for doing this. We love our SourceForge! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125610&group_id=5470 From noreply@sourceforge.net Wed Dec 13 16:56:03 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 08:56:03 -0800 Subject: [Python-bugs-list] [Bug #121791] Error for bad \x escape doesn't mention filename Message-ID: Bug #121791, was updated on 2000-Nov-06 08:34 Here is a current snapshot of the bug. Project: Python Category: Parser/Compiler Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : tim_one Summary: Error for bad \x escape doesn't mention filename Details: Using: GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 I get the following 'error' message: from interscript.languages.interscript_languages import add_translation File "interscript/languages/interscript_languages.py", line 2, in ? from interscript.encoding.utf8 import utf8 ValueError: invalid \x escape in known correct code (i.e. it works on Python 1.5.2). I have examined the function 'parsestr' in 'compile.c', and added debugging prints to find out what is going on. The function _correctly_ processes the string 'utf8' (quotes included), and returns, then the error is generated _without_ entering the routine! This almost certainly must be a bug in egcs-2.91.66. The code in 'parsestr' looks correct to me. It is possible the error can be replicated by downloading and running 'interscript' (without any arguments). Interscript is available at http://interscript.sourceforge.net [Reply to skaller@maxtal.com.au, sorry, I couldn't figure out how to 'log on'] Follow-Ups: Date: 2000-Dec-13 08:56 By: tim_one Comment: I lost my changes when moving to my new machine. Wasn't happy with them anyway -- changing the exception from ValueError to SyntaxError was purely a hack, to worm its way thru the maze of hacks already there. As long as it's got to be a hack, better a pleasant hack . ------------------------------------------------------- Date: 2000-Dec-13 08:02 By: gvanrossum Comment: Tim, I remember you were looking into this. Any luck? ------------------------------------------------------- Date: 2000-Nov-13 14:51 By: tim_one Comment: Just noting that this is a bit of a mess to repair: no "2nd phase" compile-time errors report file names or line numbers unless they're SyntaxErrors. The bad \x escape here is one path thru that code; bad \x escapes in Unicode strings are another; likewise for OverflowError due to "too large" integer literal. A fix is in progress. 
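For reference, the class of failure under discussion can be reproduced without interscript; a minimal sketch (the file name 'spam.py' is made up):

    # In Python 2.0 an \x escape must be followed by exactly two hex digits,
    # so compiling this source fails.
    src = r'utf8 = "\x0\x0"'
    try:
        compile(src, 'spam.py', 'exec')
    except ValueError, msg:
        # Reported as "invalid \x escape" -- with no mention of 'spam.py'
        # or a line number, which is exactly the complaint in this report.
        print msg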
------------------------------------------------------- Date: 2000-Nov-06 09:04 By: gvanrossum Comment: The error message is legitimate: in Python 2.0, \x escapes must have exactly two hex characters following, and he uses \x0\x0 in his __init__.py module, which generates the error message. But his bug report is also legitimate: the ValueError doesn't mention the file where this is occurring! I'm changing the bug subject to reflect this -- it has nothing to do with egcs 2.91.66. I'm randomly assigning this to Tim. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121791&group_id=5470 From noreply@sourceforge.net Wed Dec 13 17:09:42 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 09:09:42 -0800 Subject: [Python-bugs-list] [Bug #117608] test_largefile crashes or IRIX 6 Message-ID: Bug #117608, was updated on 2000-Oct-24 08:51 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: Works For Me Bug Group: Platform-specific Priority: 3 Submitted by: bbaxter Assigned to : bwarsaw Summary: test_largefile crashes or IRIX 6 Details: During "make test", test_largefile caused an error. Here's the result in python: % python python2.0/test/test_largefile.py create large file via seek (may be sparse file) ... Traceback (most recent call last): File "python2.0/test/test_largefile.py", line 60, in ? f.flush() IOError: [Errno 22] Invalid argument Here's the version I'm running: Python 2.0 (#5, Oct 24 2000, 09:51:57) [C] on irix6 Follow-Ups: Date: 2000-Dec-13 09:09 By: sjoerd Comment: Assigned back to Barry so that he can deal with this further. I'm on vacation as of tomorrow. My guess is that the problem is a lack of disk space on the user's test machine. When you seek far away and write a byte on an SGI file system (EFS or XFS) the system actually allocates the blocks. There is no such thing as holes in files on the SGI file systems. I happen to have enough disk space available, so the test runs fine. I assume the submitter of the bug didn't have enough disk space available and so the flush couldn't complete. ------------------------------------------------------- Date: 2000-Dec-12 13:56 By: bwarsaw Comment: Reassigning because I have neither large file support nor an IRIX machine. Guido suggests that Sjoerd might have access to IRIX. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117608&group_id=5470 From noreply@sourceforge.net Wed Dec 13 17:32:48 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 09:32:48 -0800 Subject: [Python-bugs-list] [Bug #125489] fpectl module is undocumented Message-ID: Bug #125489, was updated on 2000-Dec-12 08:02 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: Remind Bug Group: Feature Request Priority: 2 Submitted by: fdrake Assigned to : fdrake Summary: fpectl module is undocumented Details: Lee Busby's fpectl module is undocumented. I'm going to send him an email (using what may be an old email address) to see if he'd like to document this module. Assigned to me to make sure something happens with this. Follow-Ups: Date: 2000-Dec-13 09:32 By: fdrake Comment: Received response from Lee -- he should be able to get to it in the next month. 
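In the meantime, the module's intended use looks roughly like the sketch below, built around its two entry points turnon_sigfpe() and turnoff_sigfpe() (this assumes an interpreter configured --with-fpectl; the exact trap behaviour is platform-dependent):

    # Minimal sketch of fpectl usage, pending real documentation.
    import fpectl, math

    fpectl.turnon_sigfpe()      # turn IEEE-754 traps into Python exceptions
    try:
        math.exp(1000)          # overflow; expected to raise FloatingPointError
    except FloatingPointError, msg:
        print "caught:", msg
    fpectl.turnoff_sigfpe()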
Once I have his text I'll add it to the documentation and build some docstrings as well. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125489&group_id=5470 From noreply@sourceforge.net Wed Dec 13 17:37:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 09:37:58 -0800 Subject: [Python-bugs-list] [Bug #124782] configure should attempt to find a default C++ compiler. Message-ID: Bug #124782, was updated on 2000-Dec-06 15:50 Here is a current snapshot of the bug. Project: Python Category: Build Status: Closed Resolution: Fixed Bug Group: None Priority: 3 Submitted by: gvanrossum Assigned to : loewis Summary: configure should attempt to find a default C++ compiler. Details: It's annoying that C++ isn't better supported by default. Currently, you must specify the C++ compiler with the --with-gxx=... flag. The configure script could easily set CXX to g++ if that exists and if we are using GCC, for example. (But why does using the --with-gxx flag automatically create a main program compiled with C++?) Follow-Ups: Date: 2000-Dec-13 09:37 By: loewis Comment: Fixed in configure.in 1.181. ------------------------------------------------------- Date: 2000-Dec-13 07:54 By: gvanrossum Comment: Closed again. Thanks! ------------------------------------------------------- Date: 2000-Dec-13 07:27 By: loewis Comment: I've uploaded patch 102817, which runs something like AC_PROG_CXX. We can't use that directly, as it fails if no C++ compiler is found. Also, if -with-cxx is given, no attempt to autodetermine a C++ compiler is made. ------------------------------------------------------- Date: 2000-Dec-13 06:27 By: gvanrossum Comment: Reopening, because of one remaining issue. I just checked in changes to Modules/makesetup and Misc/Makefile.pre.in to use $(CXX) instead of $(CCC) for the C++ compiler, since CCC doesn't seem to be defined. However this only works if --with-cxx is used; otherwise CXX is not defined either. There was a bug report about this, #124478. The problem is, CXX extensions using the Makefile.pre.in mechanism don't work out of the box unless --with-cxx is used. I don't care if the --with-cxx option is changed (probably better not), but even if it isn't, the CXX variable should be given a default value if a C++ compiler can be guessed (I bet trying g++ when we're using GCC would take care of 90% of the problem :-). ------------------------------------------------------- Date: 2000-Dec-13 06:17 By: loewis Comment: The --with-cxx flag is designed to support extension modules written in C++. In some compilation systems, compiling any object file with C++ requires that the main function is compiled and linked with the C++ compiler. For example, on an a.out system, with g++, g++ will generate a call to __main as the first thing in main(), to allow for construction of global objects. On an advanced compilation system (e.g. ELF, or Win32), this is not necessary - global objects will be constructed even if main was not compiled with a C++ compiler. I believe the sole purpose of --with-cxx flag is to support that case; I can't emagine any other reason to use it. Since such requirement of the C++ compiler is becoming rare, I don't think there is a need to change the behaviour of the Python configure.in. So the real bug is that --with-cxx was not documented; that is corrected in README 1.107. 
------------------------------------------------------- Date: 2000-Dec-11 12:47 By: gvanrossum Comment: Martin, do you happen to be a C++ user? Maybe you have an idea what to do with this? If not, assign it back to me or to Nobody. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124782&group_id=5470 From frank63@ms5.hinet.net Thu Dec 14 02:56:33 2000 From: frank63@ms5.hinet.net (cch) Date: Thu, 14 Dec 2000 02:56:33 -0000 Subject: [Python-bugs-list] Re: [Bug #119709] POLLIN undefined Message-ID: <200012131855.CAA24192@ms5.hinet.net> > Date: 2000-Dec-12 13:19 > By: gvanrossum > > Comment: > Did the user ever reply? If not, let's close this one. There are too many potential configuration problems lingering around in the Bugs list that are probably not bugs in Python... > ------------------------------------------------------- > > Date: 2000-Nov-03 12:09 > By: akuchling > > Comment: > Can you provide the exact output from make, please, > and a copy of the config.h generated by Python's configure script? It's possible that both HAVE_POLL_H and HAVE_POLL > are defined but the header files are wrong in some way that POLLIN isn't defined. > > You can provide the output and config.h via private e-mail > to akuchlin@mems-exchange.org. > > ------------------------------------------------------- > > Date: 2000-Nov-02 20:16 > By: fdrake > > Comment: > I think this has been fixed post-2.0, but I'm not sure. Assigned to Andrew since he'll know and, if it's not fixed, will be the one to do so. ;-) > ------------------------------------------------------- > > For detailed info, follow this link: > http://sourceforge.net/bugs/?func=detailbug&bug_id=119709&group_id=5470 Sorry for the delay. make error output: ../selectmodule.c: In function `update_ufd_array': ../selectmodule.c:319: sizeof applied to an incomplete type ../selectmodule.c:327: arithmetic on pointer to an incomplete type ../selectmodule.c:327: dereferencing pointer to incomplete type ../selectmodule.c:328: arithmetic on pointer to an incomplete type ../selectmodule.c:328: dereferencing pointer to incomplete type ../selectmodule.c: In function `poll_register': ../selectmodule.c:345: `POLLIN' undeclared (first use this function) ../selectmodule.c:345: (Each undeclared identifier is reported only once ../selectmodule.c:345: for each function it appears in.) 
../selectmodule.c:345: `POLLPRI' undeclared (first use this function) ../selectmodule.c:345: `POLLOUT' undeclared (first use this function) ../selectmodule.c: In function `poll_poll': ../selectmodule.c:436: warning: implicit declaration of function `poll' ../selectmodule.c:452: arithmetic on pointer to an incomplete type ../selectmodule.c:452: dereferencing pointer to incomplete type ../selectmodule.c:461: arithmetic on pointer to an incomplete type ../selectmodule.c:461: dereferencing pointer to incomplete type ../selectmodule.c:468: arithmetic on pointer to an incomplete type ../selectmodule.c:468: dereferencing pointer to incomplete type ../selectmodule.c: In function `initselect': ../selectmodule.c:637: `POLLIN' undeclared (first use this function) ../selectmodule.c:638: `POLLPRI' undeclared (first use this function) ../selectmodule.c:639: `POLLOUT' undeclared (first use this function) ../selectmodule.c:640: `POLLERR' undeclared (first use this function) ../selectmodule.c:641: `POLLHUP' undeclared (first use this function) ../selectmodule.c:642: `POLLNVAL' undeclared (first use this function) make[1]: *** [selectmodule.o] Error 1 make: *** [Modules] Error 2 Python config.h: /* config.h. Generated automatically by configure. */ /* config.h.in. Generated automatically from configure.in by autoheader. */ /* Define if on AIX 3. System headers sometimes define this. We just want to avoid a redefinition error message. */ #ifndef _ALL_SOURCE /* #undef _ALL_SOURCE */ #endif /* Define if type char is unsigned and you are not using gcc. */ #ifndef __CHAR_UNSIGNED__ /* #undef __CHAR_UNSIGNED__ */ #endif /* Define to empty if the keyword does not work. */ /* #undef const */ /* Define to `int' if doesn't define. */ /* #undef gid_t */ /* Define if your struct tm has tm_zone. */ /* #undef HAVE_TM_ZONE */ /* Define if you don't have tm_zone but do have the external array tzname. */ #define HAVE_TZNAME 1 /* Define as __inline if that's what the C compiler calls it. */ /* #undef inline */ /* Define if on MINIX. */ /* #undef _MINIX */ /* Define to `int' if doesn't define. */ /* #undef mode_t */ /* Define to `long' if doesn't define. */ /* #undef off_t */ /* Define to `int' if doesn't define. */ /* #undef pid_t */ /* Define if the system does not provide POSIX.1 features except with this defined. */ /* #undef _POSIX_1_SOURCE */ /* Define if you need to in order for stat and other things to work. */ /* #undef _POSIX_SOURCE */ /* Define as the return type of signal handlers (int or void). */ #define RETSIGTYPE void /* Define to `unsigned' if doesn't define. */ /* #undef size_t */ /* Define if you have the ANSI C header files. */ #define STDC_HEADERS 1 /* Define if you can safely include both and . */ #define TIME_WITH_SYS_TIME 1 /* Define if your declares struct tm. */ /* #undef TM_IN_SYS_TIME */ /* Define to `int' if doesn't define. */ /* #undef uid_t */ /* Define if your processor stores words with the most significant byte first (like Motorola and SPARC, unlike Intel and VAX). */ /* #undef WORDS_BIGENDIAN */ /* Define if your contains bad prototypes for exec*() (as it does on SGI IRIX 4.x) */ /* #undef BAD_EXEC_PROTOTYPES */ /* Define if your compiler botches static forward declarations (as it does on SCI ODT 3.0) */ /* #undef BAD_STATIC_FORWARD */ /* Define for AIX if your compiler is a genuine IBM xlC/xlC_r and you want support for AIX C++ shared extension modules. 
*/ /* #undef AIX_GENUINE_CPLUSPLUS */ /* Define this if you have BeOS threads */ /* #undef BEOS_THREADS */ /* Define if you have the Mach cthreads package */ /* #undef C_THREADS */ /* Define to `long' if doesn't define. */ /* #undef clock_t */ /* Define if getpgrp() must be called as getpgrp(0). */ /* #undef GETPGRP_HAVE_ARG */ /* Define if gettimeofday() does not have second (timezone) argument This is the case on Motorola V4 (R40V4.2) */ /* #undef GETTIMEOFDAY_NO_TZ */ /* Define this if your time.h defines altzone */ /* #undef HAVE_ALTZONE */ /* Define this if you have some version of gethostbyname_r() */ /* #undef HAVE_GETHOSTBYNAME_R */ /* Define this if you have the 3-arg version of gethostbyname_r() */ /* #undef HAVE_GETHOSTBYNAME_R_3_ARG */ /* Define this if you have the 5-arg version of gethostbyname_r() */ /* #undef HAVE_GETHOSTBYNAME_R_5_ARG */ /* Define this if you have the 6-arg version of gethostbyname_r() */ /* #undef HAVE_GETHOSTBYNAME_R_6_ARG */ /* Define this if you have the type long long */ #define HAVE_LONG_LONG 1 /* Define this if you have the type uintptr_t */ /* #undef HAVE_UINTPTR_T */ /* Define if your compiler supports function prototypes */ #define HAVE_PROTOTYPES 1 /* Define if you have GNU PTH threads */ /* #undef HAVE_PTH */ /* Define if your compiler supports variable length function prototypes (e.g. void fprintf(FILE *, char *, ...);) *and* */ #define HAVE_STDARG_PROTOTYPES 1 /* Define if malloc(0) returns a NULL pointer */ /* #undef MALLOC_ZERO_RETURNS_NULL */ /* Define if you have POSIX threads */ /* #undef _POSIX_THREADS */ /* Define to force use of thread-safe errno, h_errno, and other functions */ #define _REENTRANT 1 /* Define if setpgrp() must be called as setpgrp(0, 0). */ /* #undef SETPGRP_HAVE_ARG */ /* Define to empty if the keyword does not work. */ /* #undef signed */ /* Define to `int' if doesn't define. */ #define socklen_t int /* Define if you can safely include both and (which you can't on SCO ODT 3.0). */ /* #undef SYS_SELECT_WITH_SYS_TIME */ /* Define if a va_list is an array of some kind */ /* #undef VA_LIST_IS_ARRAY */ /* Define to empty if the keyword does not work. */ /* #undef volatile */ /* Define if you want SIGFPE handled (see Include/pyfpe.h). */ /* #undef WANT_SIGFPE_HANDLER */ /* Define if the compiler provides a wchar.h header file. */ #define HAVE_WCHAR_H 1 /* Define if you have a useable wchar_t type defined in wchar.h; useable means wchar_t must be 16-bit unsigned type. (see Include/unicodeobject.h). */ /* #undef HAVE_USABLE_WCHAR_T */ /* Define if you want wctype.h functions to be used instead of the one supplied by Python itself. (see Include/unicodectype.h). */ /* #undef WANT_WCTYPE_FUNCTIONS */ /* Define if you want to use SGI (IRIX 4) dynamic linking. This requires the "dl" library by Jack Jansen, ftp://ftp.cwi.nl/pub/dynload/dl-1.6.tar.Z. Don't bother on IRIX 5, it already has dynamic linking using SunOS style shared libraries */ /* #undef WITH_SGI_DL */ /* Define if you want to emulate SGI (IRIX 4) dynamic linking. This is rumoured to work on VAX (Ultrix), Sun3 (SunOS 3.4), Sequent Symmetry (Dynix), and Atari ST. This requires the "dl-dld" library, ftp://ftp.cwi.nl/pub/dynload/dl-dld-1.1.tar.Z, as well as the "GNU dld" library, ftp://ftp.cwi.nl/pub/dynload/dld-3.2.3.tar.Z. 
Don't bother on SunOS 4 or 5, they already have dynamic linking using shared libraries */ /* #undef WITH_DL_DLD */ /* Define if you want to use the new-style (Openstep, Rhapsody, MacOS) dynamic linker (dyld) instead of the old-style (NextStep) dynamic linker (rld). Dyld is necessary to support frameworks. */ /* #undef WITH_DYLD */ /* Define if you want to compile in rudimentary thread support */ /* #undef WITH_THREAD */ /* Define if you want to compile in cycle garbage collection */ #define WITH_CYCLE_GC 1 /* Define if you want to produce an OpenStep/Rhapsody framework (shared library plus accessory files). */ /* #undef WITH_NEXT_FRAMEWORK */ /* Define if you want to use BSD db. */ #define WITH_LIBDB 1 /* Define if you want to build an interpreter with many run-time checks */ /* #undef Py_DEBUG */ /* The number of bytes in an off_t. */ #define SIZEOF_OFF_T 4 /* The number of bytes in a time_t. */ #define SIZEOF_TIME_T 4 /* The number of bytes in a pthread_t. */ #define SIZEOF_PTHREAD_T 4 /* Defined to enable large file support when an off_t is bigger than a long and long long is available and at least as big as an off_t. You may need to add some flags for configuration and compilation to enable this mode. E.g, for Solaris 2.7: CFLAGS="-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64" OPT="-O2 $CFLAGS" \ configure */ /* #undef HAVE_LARGEFILE_SUPPORT */ /* Defined when any dynamic module loading is enabled */ #define HAVE_DYNAMIC_LOADING 1 /* Define if i>>j for signed int i does not extend the sign bit when i < 0 */ /* #undef SIGNED_RIGHT_SHIFT_ZERO_FILLS */ /* The number of bytes in a char. */ #define SIZEOF_CHAR 1 /* The number of bytes in a double. */ #define SIZEOF_DOUBLE 8 /* The number of bytes in a float. */ #define SIZEOF_FLOAT 4 /* The number of bytes in a fpos_t. */ #define SIZEOF_FPOS_T 4 /* The number of bytes in a int. */ #define SIZEOF_INT 4 /* The number of bytes in a long. */ #define SIZEOF_LONG 4 /* The number of bytes in a long long. */ #define SIZEOF_LONG_LONG 8 /* The number of bytes in a short. */ #define SIZEOF_SHORT 2 /* The number of bytes in a uintptr_t. */ /* #undef SIZEOF_UINTPTR_T */ /* The number of bytes in a void *. */ #define SIZEOF_VOID_P 4 /* Define if you have the _getpty function. */ /* #undef HAVE__GETPTY */ /* Define if you have the alarm function. */ #define HAVE_ALARM 1 /* Define if you have the chown function. */ #define HAVE_CHOWN 1 /* Define if you have the clock function. */ #define HAVE_CLOCK 1 /* Define if you have the confstr function. */ #define HAVE_CONFSTR 1 /* Define if you have the ctermid function. */ #define HAVE_CTERMID 1 /* Define if you have the ctermid_r function. */ /* #undef HAVE_CTERMID_R */ /* Define if you have the dlopen function. */ #define HAVE_DLOPEN 1 /* Define if you have the dup2 function. */ #define HAVE_DUP2 1 /* Define if you have the execv function. */ #define HAVE_EXECV 1 /* Define if you have the fdatasync function. */ #define HAVE_FDATASYNC 1 /* Define if you have the flock function. */ #define HAVE_FLOCK 1 /* Define if you have the fork function. */ #define HAVE_FORK 1 /* Define if you have the forkpty function. */ /* #undef HAVE_FORKPTY */ /* Define if you have the fpathconf function. */ #define HAVE_FPATHCONF 1 /* Define if you have the fseek64 function. */ /* #undef HAVE_FSEEK64 */ /* Define if you have the fseeko function. */ /* #undef HAVE_FSEEKO */ /* Define if you have the fstatvfs function. */ /* #undef HAVE_FSTATVFS */ /* Define if you have the fsync function. 
*/ #define HAVE_FSYNC 1 /* Define if you have the ftell64 function. */ /* #undef HAVE_FTELL64 */ /* Define if you have the ftello function. */ /* #undef HAVE_FTELLO */ /* Define if you have the ftime function. */ #define HAVE_FTIME 1 /* Define if you have the ftruncate function. */ #define HAVE_FTRUNCATE 1 /* Define if you have the getcwd function. */ #define HAVE_GETCWD 1 /* Define if you have the getgroups function. */ #define HAVE_GETGROUPS 1 /* Define if you have the gethostbyname function. */ #define HAVE_GETHOSTBYNAME 1 /* Define if you have the getlogin function. */ #define HAVE_GETLOGIN 1 /* Define if you have the getpeername function. */ #define HAVE_GETPEERNAME 1 /* Define if you have the getpgrp function. */ #define HAVE_GETPGRP 1 /* Define if you have the getpid function. */ #define HAVE_GETPID 1 /* Define if you have the getpwent function. */ #define HAVE_GETPWENT 1 /* Define if you have the gettimeofday function. */ #define HAVE_GETTIMEOFDAY 1 /* Define if you have the getwd function. */ #define HAVE_GETWD 1 /* Define if you have the hypot function. */ #define HAVE_HYPOT 1 /* Define if you have the kill function. */ #define HAVE_KILL 1 /* Define if you have the link function. */ #define HAVE_LINK 1 /* Define if you have the lstat function. */ #define HAVE_LSTAT 1 /* Define if you have the memmove function. */ #define HAVE_MEMMOVE 1 /* Define if you have the mkfifo function. */ #define HAVE_MKFIFO 1 /* Define if you have the mktime function. */ #define HAVE_MKTIME 1 /* Define if you have the mremap function. */ #define HAVE_MREMAP 1 /* Define if you have the nice function. */ #define HAVE_NICE 1 /* Define if you have the openpty function. */ /* #undef HAVE_OPENPTY */ /* Define if you have the pathconf function. */ #define HAVE_PATHCONF 1 /* Define if you have the pause function. */ #define HAVE_PAUSE 1 /* Define if you have the plock function. */ /* #undef HAVE_PLOCK */ /* Define if you have the poll function. */ #define HAVE_POLL 1 /* Define if you have the pthread_init function. */ /* #undef HAVE_PTHREAD_INIT */ /* Define if you have the putenv function. */ #define HAVE_PUTENV 1 /* Define if you have the readlink function. */ #define HAVE_READLINK 1 /* Define if you have the select function. */ #define HAVE_SELECT 1 /* Define if you have the setegid function. */ #define HAVE_SETEGID 1 /* Define if you have the seteuid function. */ #define HAVE_SETEUID 1 /* Define if you have the setgid function. */ #define HAVE_SETGID 1 /* Define if you have the setlocale function. */ #define HAVE_SETLOCALE 1 /* Define if you have the setpgid function. */ #define HAVE_SETPGID 1 /* Define if you have the setpgrp function. */ #define HAVE_SETPGRP 1 /* Define if you have the setregid function. */ #define HAVE_SETREGID 1 /* Define if you have the setreuid function. */ #define HAVE_SETREUID 1 /* Define if you have the setsid function. */ #define HAVE_SETSID 1 /* Define if you have the setuid function. */ #define HAVE_SETUID 1 /* Define if you have the setvbuf function. */ #define HAVE_SETVBUF 1 /* Define if you have the sigaction function. */ #define HAVE_SIGACTION 1 /* Define if you have the siginterrupt function. */ #define HAVE_SIGINTERRUPT 1 /* Define if you have the sigrelse function. */ /* #undef HAVE_SIGRELSE */ /* Define if you have the statvfs function. */ /* #undef HAVE_STATVFS */ /* Define if you have the strdup function. */ #define HAVE_STRDUP 1 /* Define if you have the strerror function. */ #define HAVE_STRERROR 1 /* Define if you have the strftime function. 
*/ #define HAVE_STRFTIME 1 /* Define if you have the strptime function. */ #define HAVE_STRPTIME 1 /* Define if you have the symlink function. */ #define HAVE_SYMLINK 1 /* Define if you have the sysconf function. */ #define HAVE_SYSCONF 1 /* Define if you have the tcgetpgrp function. */ #define HAVE_TCGETPGRP 1 /* Define if you have the tcsetpgrp function. */ #define HAVE_TCSETPGRP 1 /* Define if you have the tempnam function. */ #define HAVE_TEMPNAM 1 /* Define if you have the timegm function. */ #define HAVE_TIMEGM 1 /* Define if you have the times function. */ #define HAVE_TIMES 1 /* Define if you have the tmpfile function. */ #define HAVE_TMPFILE 1 /* Define if you have the tmpnam function. */ #define HAVE_TMPNAM 1 /* Define if you have the tmpnam_r function. */ /* #undef HAVE_TMPNAM_R */ /* Define if you have the truncate function. */ #define HAVE_TRUNCATE 1 /* Define if you have the uname function. */ #define HAVE_UNAME 1 /* Define if you have the waitpid function. */ #define HAVE_WAITPID 1 /* Define if you have the header file. */ #define HAVE_DB_H 1 /* Define if you have the header file. */ /* #undef HAVE_DB1_NDBM_H */ /* Define if you have the header file. */ /* #undef HAVE_DB_185_H */ /* Define if you have the header file. */ #define HAVE_DIRENT_H 1 /* Define if you have the header file. */ #define HAVE_DLFCN_H 1 /* Define if you have the header file. */ #define HAVE_FCNTL_H 1 /* Define if you have the header file. */ /* #undef HAVE_GDBM_NDBM_H */ /* Define if you have the header file. */ /* #undef HAVE_LIBUTIL_H */ /* Define if you have the header file. */ #define HAVE_LIMITS_H 1 /* Define if you have the header file. */ #define HAVE_LOCALE_H 1 /* Define if you have the header file. */ #define HAVE_NCURSES_H 1 /* Define if you have the header file. */ #define HAVE_NDBM_H 1 /* Define if you have the header file. */ /* #undef HAVE_NDIR_H */ /* Define if you have the header file. */ /* #undef HAVE_POLL_H */ /* Define if you have the header file. */ #define HAVE_PTHREAD_H 1 /* Define if you have the header file. */ /* #undef HAVE_PTY_H */ /* Define if you have the header file. */ #define HAVE_SIGNAL_H 1 /* Define if you have the header file. */ #define HAVE_STDARG_H 1 /* Define if you have the header file. */ #define HAVE_STDDEF_H 1 /* Define if you have the header file. */ #define HAVE_STDLIB_H 1 /* Define if you have the header file. */ /* #undef HAVE_SYS_AUDIOIO_H */ /* Define if you have the header file. */ /* #undef HAVE_SYS_DIR_H */ /* Define if you have the header file. */ #define HAVE_SYS_FILE_H 1 /* Define if you have the header file. */ /* #undef HAVE_SYS_LOCK_H */ /* Define if you have the header file. */ /* #undef HAVE_SYS_NDIR_H */ /* Define if you have the header file. */ #define HAVE_SYS_PARAM_H 1 /* Define if you have the header file. */ /* #undef HAVE_SYS_SELECT_H */ /* Define if you have the header file. */ #define HAVE_SYS_SOCKET_H 1 /* Define if you have the header file. */ #define HAVE_SYS_TIME_H 1 /* Define if you have the header file. */ #define HAVE_SYS_TIMES_H 1 /* Define if you have the header file. */ #define HAVE_SYS_UN_H 1 /* Define if you have the header file. */ #define HAVE_SYS_UTSNAME_H 1 /* Define if you have the header file. */ #define HAVE_SYS_WAIT_H 1 /* Define if you have the header file. */ /* #undef HAVE_THREAD_H */ /* Define if you have the header file. */ #define HAVE_UNISTD_H 1 /* Define if you have the header file. */ #define HAVE_UTIME_H 1 /* Define if you have the dl library (-ldl). 
*/ #define HAVE_LIBDL 1 /* Define if you have the dld library (-ldld). */ /* #undef HAVE_LIBDLD */ /* Define if you have the ieee library (-lieee). */ /* #undef HAVE_LIBIEEE */ From noreply@sourceforge.net Wed Dec 13 20:50:31 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 12:50:31 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Wed Dec 13 21:14:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 13:14:06 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. 
If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). Follow-Ups: Date: 2000-Dec-13 13:14 By: gvanrossum Comment: I believe you are misguided. Python already checks the return value from malloc(), and calls PyErr_NoMemory(). If you find an instance of malloc() or realloc() that is not properly checked, I'd like to hear about it -- *that* would be a bug worth reporting. So you shouldn't be calling PyErr_NoMemory() from inside your d_malloc() function. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Wed Dec 13 21:47:48 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 13:47:48 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). Follow-Ups: Date: 2000-Dec-13 13:47 By: tim_one Comment: Unclear why you defined PyCore_MALLOC_FUNC: Python internals always check for a null malloc return already. 
If you bumped into a case where Python didn't, it's a Python bug. ------------------------------------------------------- Date: 2000-Dec-13 13:14 By: gvanrossum Comment: I believe you are misguided. Python already checks the return value from malloc(), and calls PyErr_NoMemory(). If you find an instance of malloc() or realloc() that is not properly checked, I'd like to hear about it -- *that* would be a bug worth reporting. So you shouldn't be calling PyErr_NoMemory() from inside your d_malloc() function. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Wed Dec 13 23:54:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 15:54:15 -0800 Subject: [Python-bugs-list] [Bug #125598] Confusing KeyError-Message when key is tuple of size 1 Message-ID: Bug #125598, was updated on 2000-Dec-13 02:38 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: murple Assigned to : bwarsaw Summary: Confusing KeyError-Message when key is tuple of size 1 Details: Following caused some confusion for me: >>> dic = {1:1,2:"bla"} >>> dic[1] 1 >>> b = (1,) #1000 lines of code >>> dic[b] Traceback (innermost last): File "", line 1, in ? KeyError: 1 # This should be KeyError: (1,) # because 1 is a valid key for dic >>> dic[(1,2)] Traceback (innermost last): File "", line 1, in ? KeyError: (1, 2) >>> Follow-Ups: Date: 2000-Dec-13 15:54 By: bwarsaw Comment: I don't remember the exact details, but this is a byproduct of the backwards compatibility rules for Exception.__str__(). Specifically, if an exception is instantiated with a sequence of length 1, then str(exc) will return str(exc.args[0]). Note that exc.args contains the length-1 tuple it was instantiated with. This bites every built-in exception except EnvironmentError and SyntaxError, which define their own __str__(). Changing this may have unintended consequences, and I'm not sure if it's worth fixing. ------------------------------------------------------- Date: 2000-Dec-13 06:09 By: gvanrossum Comment: This seems a problem in exception reporting. I can reproduce it as follows: >>> raise KeyError, (1,) Traceback (most recent call last): File "", line 1, in ? KeyError: 1 >>> Assigned to Barry since he's the master of this code.] ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125598&group_id=5470 From noreply@sourceforge.net Thu Dec 14 04:22:13 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 20:22:13 -0800 Subject: [Python-bugs-list] [Bug #122780] msvcrt: locking constants aren't defined. Message-ID: Bug #122780, was updated on 2000-Nov-18 10:07 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: kirill_simonov Assigned to : fdrake Summary: msvcrt: locking constants aren't defined. Details: msvcrt.locking(fd, mode, nbytes): mode must be one of the following constants: LK_UNLOCK = 0 # Unlock LK_LOCK = 1 # Lock LK_NBLCK = 2 # Non-blocking lock LK_RLCK = 3 # Lock for read-only LK_NBRLCK = 4 # Non-blocking lock for read-only I think that constants should be defined in msvcrt and written in the docs. 
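With the constants actually exposed by the module (see the follow-ups below; the names follow the MS manifest constants without the leading underscore, so it is LK_UNLCK rather than LK_UNLOCK), typical use would look something like this sketch. Windows only; "spam.dat" is a made-up file that must already exist:

    import msvcrt

    f = open("spam.dat", "r+b")
    f.seek(0)                    # locking starts at the current file position
    msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1024)   # non-blocking lock, first 1K
    try:
        pass                     # work on the locked region here
    finally:
        f.seek(0)
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1024)
        f.close()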
Follow-Ups: Date: 2000-Dec-13 20:22 By: fdrake Comment: Added documentation to Doc/lib/libmsvcrt.tex revision 1.4. ------------------------------------------------------- Date: 2000-Dec-11 17:59 By: tim_one Comment: I added the constants to msvcrtmodule.c, rev 1.6. Reassigned to Fred for docs. Fred, I've never used this function and am not sure why Guido accepted it. Nevertheless, the bug report is correct that the locking() function is unusable without these constants or their docs. The MS docs follow. The Python constants have the same names but do *not* have the leading underscore (e.g., LK_LOCK in Python). """ The _locking function locks or unlocks nbytes bytes of the file specified by handle. Locking bytes in a file prevents access to those bytes by other processes. All locking or unlocking begins at the current position of the file pointer and proceeds for the next nbytes bytes. It is possible to lock bytes past end of file. mode must be one of the following manifest constants, which are defined in LOCKING.H:
_LK_LOCK Locks the specified bytes. If the bytes cannot be locked, the program immediately tries again after 1 second. If, after 10 attempts, the bytes cannot be locked, the constant returns an error.
_LK_NBLCK Locks the specified bytes. If the bytes cannot be locked, the constant returns an error.
_LK_NBRLCK Same as _LK_NBLCK.
_LK_RLCK Same as _LK_LOCK.
_LK_UNLCK Unlocks the specified bytes, which must have been previously locked.
Multiple regions of a file that do not overlap can be locked. A region being unlocked must have been previously locked. _locking does not merge adjacent regions; if two locked regions are adjacent, each region must be unlocked separately. Regions should be locked only briefly and should be unlocked before closing a file or exiting the program. """ ------------------------------------------------------- Date: 2000-Nov-21 10:48 By: tim_one Comment: Assigned to me. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=122780&group_id=5470 From noreply@sourceforge.net Thu Dec 14 04:45:36 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Dec 2000 20:45:36 -0800 Subject: [Python-bugs-list] [Bug #125744] httplib does not check if port is valid (easy to fix?) Message-ID: Bug #125744, was updated on 2000-Dec-13 20:45 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: dealfaro Assigned to : nobody Summary: httplib does not check if port is valid (easy to fix?) Details: In httplib.py, line 336, the following code appears:
def _set_hostport(self, host, port):
    if port is None:
        i = string.find(host, ':')
        if i >= 0:
            port = int(host[i+1:])
            host = host[:i]
        else:
            port = self.default_port
    self.host = host
    self.port = port
This code breaks if the host string ends with ":", so that int("") is called. In the old (1.5.2) version of this module, the corresponding int() conversion used to be enclosed in a try/except pair:
try:
    port = string.atoi(port)
except string.atoi_error:
    raise socket.error, "nonnumeric port"
and this fixed the problem. Note BTW that now the error reported by int is "ValueError: invalid literal for int():" rather than the above string.atoi_error. I found this problem while downloading web pages, but unfortunately I cannot pinpoint which page caused the problem.
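A self-contained sketch of the guard described in the report (split_hostport is a hypothetical helper, not the actual httplib method) that maps a malformed port back to the old socket.error:

import socket
import string

def split_hostport(netloc, default_port=80):
    # Hypothetical helper showing the 1.5.2-style guard around the int() call.
    host, port = netloc, default_port
    i = string.find(netloc, ':')
    if i >= 0:
        host = netloc[:i]
        try:
            port = int(netloc[i+1:])
        except ValueError:
            # e.g. "www.example.com:" makes int("") raise ValueError
            raise socket.error, "nonnumeric port"
    return host, port

With that in place the caller sees the same socket.error the 1.5.2 code raised, instead of a bare ValueError.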
Luca de Alfaro For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125744&group_id=5470 From noreply@sourceforge.net Thu Dec 14 08:13:00 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 00:13:00 -0800 Subject: [Python-bugs-list] [Bug #123924] Windows - using OpenSSL, problem with socket in httplib.py Message-ID: Bug #123924, was updated on 2000-Nov-30 06:11 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Closed Resolution: Fixed Bug Group: Platform-specific Priority: 5 Submitted by: ottobehrens Assigned to : gvanrossum Summary: Windows - using OpenSSL, problem with socket in httplib.py Details: We found that when compiling python with USE_SSL on Windows, an exception occurred on the line: ssl = socket.ssl(sock, self.key_file, self.cert_file) The socket.ssl function expected arg 1 to be a socket object and not an instance of a class. We changed it to the following, which resolved the problem. However, this is not a generic solution and breaks again under Linux. on class HTTPSConnection: def connect(self): "Connect to a host on a given (SSL) port." sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ssl = socket.ssl(sock._sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) Follow-Ups: Date: 2000-Dec-14 00:13 By: ottobehrens Comment: Thanks, the solution did work. Could the same problem not repeat where SSL is used in Windows, though? This is specifically httplib.py. I suppose not many people out there are doing other things with SSL besides using it to securely transfer HTTP? ------------------------------------------------------- Date: 2000-Dec-12 09:15 By: ottobehrens Comment: Thanks, the solution did work. Could the same problem not repeat where SSL is used in Windows, though? This is specifically httplib.py. I suppose not many people out there are doing other things with SSL besides using it to securely transfer HTTP? ------------------------------------------------------- Date: 2000-Dec-11 12:32 By: gvanrossum Comment: Checked in as revision 1.24. Now let's hope that this works -- the submitter never wrote back. ------------------------------------------------------- Date: 2000-Nov-30 06:15 By: gvanrossum Comment: Try this patch instead: Index: httplib.py =================================================================== RCS file: /cvsroot/python/python/dist/src/Lib/httplib.py,v retrieving revision 1.24 diff -c -r1.24 httplib.py *** httplib.py 2000/10/12 19:58:36 1.24 --- httplib.py 2000/11/30 14:14:43 *************** *** 613,619 **** sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! ssl = socket.ssl(sock, self.key_file, self.cert_file) self.sock = FakeSocket(sock, ssl) --- 613,622 ---- sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.connect((self.host, self.port)) ! realsock = sock ! if hasattr(sock, "_sock"): ! realsock = sock._sock ! 
ssl = socket.ssl(realsock, self.key_file, self.cert_file)
self.sock = FakeSocket(sock, ssl)
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123924&group_id=5470 From t.sargeant@inpharmatica.co.uk Thu Dec 14 10:24:05 2000 From: t.sargeant@inpharmatica.co.uk (Toby Sargeant) Date: Thu, 14 Dec 2000 10:24:05 +0000 Subject: [Python-bugs-list] Re: [Bug #120983] python2.0 dumps core in gc_list_remove In-Reply-To: ; from noreply@sourceforge.net on Wed, Dec 13, 2000 at 08:21:34AM -0800 References: Message-ID: <20001214102404.A21815@inpharmatica.co.uk> On Wed, Dec 13, 2000 at 08:21:34AM -0800, noreply@sourceforge.net wrote:
> By: gvanrossum
>
> Comment:
> Neil, this is the only complaint about this. It may well be a user error.
> Try direct mail to the submitter; if he doesn't reply or doesn't provide new information, you can close the bug report.
>
> -------------------------------------------------------
>
> By: nascheme
>
> Comment:
> Tobias, is this core dump still occuring? If it is, can you
> provide some details on how to reproduce it?
> -------------------------------------------------------
I'd love to provide some more details, but the problem is that I've stopped programming for Zope at work, so I don't have the time to really look at it closely any more. It happened for me on linux as well, though, so at least given the way I've been compiling python and zope, it was repeatable, and did go away when I turned off cyclic garbage collection. I agree, however, that given that no one else has reported it, the likelihood is that it's something that I was (or wasn't) doing, and as such the bug probably should be closed. Basically, the steps I went through were to:
download and compile python 2.0 into a local directory, enabling most of the shared extensions.
add the build target to my path.
compile and install PyXML.
compile and run zope.
edit a few dtml documents using the management interface.
After about 5 minutes of editing, the zope process would fall over. I guess it could be an interaction with a previously installed version of python. Toby.
--
[ Toby Sargeant : Inpharmatica : Developer : t.sargeant@inpharmatica.co.uk ]
[ http://www.inpharmatica.co.uk : 020 7631 4644 fax 020 7631 4844 ]
From noreply@sourceforge.net Thu Dec 14 12:17:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 04:17:34 -0800 Subject: [Python-bugs-list] [Bug #125775] Calculations are wrong Message-ID: Bug #125775, was updated on 2000-Dec-14 04:17 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Calculations are wrong Details: Here are two quite reproducible bugs with Python 2.0:
>>> 8+6+7+1+0.45+1.5+0.5+2+2.25+2+1+0.5+1.45
33.650000000000006
>>> 8+7+1+0.45+1.5+0.5+2+2.25+2+1+0.5+1.45
27.649999999999999
What's that 0.000000000006? And the 0.many-zeros-1, where did it go? For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125775&group_id=5470 From noreply@sourceforge.net Thu Dec 14 13:37:22 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 05:37:22 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug.
Project: Python Category: Threads Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). Follow-Ups: Date: 2000-Dec-14 05:37 By: adeutsch Comment: It is certainly possible that I am misguided. The reason I got into the whole issue of trapping on out-of-memory errors was that I found that on some occasions Python was not doing so and, as a result, was locking up our unit. I just redownloaded the version 2.0 source code and found that in PyRange_New and PySlice_New the pointer returned by PyObject_Init is dereferenced and used without first checking to ensure that it is not NULL. ------------------------------------------------------- Date: 2000-Dec-13 13:47 By: tim_one Comment: Unclear why you defined PyCore_MALLOC_FUNC: Python internals always check for a null malloc return already. If you bumped into a case where Python didn't, it's a Python bug. ------------------------------------------------------- Date: 2000-Dec-13 13:14 By: gvanrossum Comment: I believe you are misguided. Python already checks the return value from malloc(), and calls PyErr_NoMemory(). If you find an instance of malloc() or realloc() that is not properly checked, I'd like to hear about it -- *that* would be a bug worth reporting. So you shouldn't be calling PyErr_NoMemory() from inside your d_malloc() function. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Thu Dec 14 13:44:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 05:44:17 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug. 
Project: Python Category: Threads Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). Follow-Ups: Date: 2000-Dec-14 05:44 By: adeutsch Comment: I just realized that I missed a layer in my previous comment, PyRange_New and PySlice_New both call PyObject_NEW which is defined in terms of PyObject_Init(PyObject_MALLOC(...)). Here is the source code for both: PyObject * PyRange_New(long start, long len, long step, int reps) { rangeobject *obj = PyObject_NEW(rangeobject, &PyRange_Type); obj->start = start; obj->len = len; obj->step = step; obj->reps = reps; return (PyObject *) obj; } PyObject * PySlice_New(PyObject *start, PyObject *stop, PyObject *step) { PySliceObject *obj = PyObject_NEW(PySliceObject, &PySlice_Type); if (step == NULL) step = Py_None; Py_INCREF(step); if (start == NULL) start = Py_None; Py_INCREF(start); if (stop == NULL) stop = Py_None; Py_INCREF(stop); obj->step = step; obj->start = start; obj->stop = stop; return (PyObject *) obj; } ------------------------------------------------------- Date: 2000-Dec-14 05:37 By: adeutsch Comment: It is certainly possible that I am misguided. The reason I got into the whole issue of trapping on out-of-memory errors was that I found that on some occasions Python was not doing so and, as a result, was locking up our unit. I just redownloaded the version 2.0 source code and found that in PyRange_New and PySlice_New the pointer returned by PyObject_Init is dereferenced and used without first checking to ensure that it is not NULL. ------------------------------------------------------- Date: 2000-Dec-13 13:47 By: tim_one Comment: Unclear why you defined PyCore_MALLOC_FUNC: Python internals always check for a null malloc return already. If you bumped into a case where Python didn't, it's a Python bug. ------------------------------------------------------- Date: 2000-Dec-13 13:14 By: gvanrossum Comment: I believe you are misguided. 
Python already checks the return value from malloc(), and calls PyErr_NoMemory(). If you find an instance of malloc() or realloc() that is not properly checked, I'd like to hear about it -- *that* would be a bug worth reporting. So you shouldn't be calling PyErr_NoMemory() from inside your d_malloc() function. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Thu Dec 14 14:37:12 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 06:37:12 -0800 Subject: [Python-bugs-list] [Bug #125744] httplib does not check if port is valid (easy to fix?) Message-ID: Bug #125744, was updated on 2000-Dec-13 20:45 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: dealfaro Assigned to : nobody Summary: httplib does not check if port is valid (easy to fix?) Details: In httplib.py, line 336, the following code appears: def _set_hostport(self, host, port): if port is None: i = string.find(host, ':') if i >= 0: port = int(host[i+1:]) host = host[:i] else: port = self.default_port self.host = host self.port = port Ths code breaks if the host string ends with ":", so that int("") is called. In the old (1.5.2) version of this module, the corresponding int () conversion used to be enclosed in a try/except pair: try: port = string.atoi(port) except string.atoi_error: raise socket.error, "nonnumeric port" and this fixed the problem. Note BTW that now the error reported by int is "ValueError: invalid literal for int():" rather than the above string.atoi_error. I found this problem while downloading web pages, but unfortunately I cannot pinpoint which page caused the problem. Luca de Alfaro Follow-Ups: Date: 2000-Dec-14 06:37 By: gvanrossum Comment: The only effect is that it raises ValueError instead of socket.error. Where is this a problem? (Note that string.atoi_error is an alias for ValueError.) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125744&group_id=5470 From noreply@sourceforge.net Thu Dec 14 14:55:54 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 06:55:54 -0800 Subject: [Python-bugs-list] [Bug #125775] Calculations are wrong Message-ID: Bug #125775, was updated on 2000-Dec-14 04:17 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Calculations are wrong Details: Here are a two quite reprodusable bugs with Python 2.0: >>> 8+6+7+1+0.45+1.5+0.5+2+2.25+2+1+0.5+1.45 33.650000000000006 >>> 8+7+1+0.45+1.5+0.5+2+2.25+2+1+0.5+1.45 27.649999999999999 What's that 0.000000000006? And the 0.many-zeros-1, where did it go? Follow-Ups: Date: 2000-Dec-14 06:55 By: gvanrossum Comment: This is not a bug. Binary floating point cannot represent decimal fractions exactly, so some rounding always occurs (even in Python 1.5.2). What changed is that Python 2.0 shows more precision than before in certain circumstances (repr() and the interactive prompt). 
You can use str() or print to get the old, rounded output: >>> print 0.1+0.1 0.2 >>> Follow the link for a detailed example: http://www.python.org/cgi-bin/moinmoin/RepresentationError ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125775&group_id=5470 From noreply@sourceforge.net Thu Dec 14 15:09:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 07:09:01 -0800 Subject: [Python-bugs-list] [Bug #125719] malloc() is called when _PyThreadState_Current is NULL Message-ID: Bug #125719, was updated on 2000-Dec-13 12:50 Here is a current snapshot of the bug. Project: Python Category: Threads Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: adeutsch Assigned to : nobody Summary: malloc() is called when _PyThreadState_Current is NULL Details: We are implementing an embedded version of Python on an platform that does not have an unlimited amount of memory. In order to catch out-of-memory situations I defined the macro PyCore_MALLOC_FUNC to be equal to my own routine d_malloc(). d_malloc calls malloc() and checks the return value before returning it. If a NULL pointer is returned by malloc(), d_malloc() calls PyErr_NoMemory. During testing, I discovered that PyCore_MALLOC_FUNC is called in PyOS_StdioReadline, which in turn is called by PyOS_Readline right after a call to Py_BEGIN_ALLOW_THREADS. In other words, at a time when the variable _PyThreadState_Current is set to NULL. If this particular malloc() call fails, my routine will call PyErr_NoMemory(), which in turn calls PyErr_SetObject(), which calls PyErr_SetObject(), which calls PyErr_Restore(), which calls PyThreadState_GET(). Now if PyThreadState_GET() is called at a time when _PyThreadState_Current is equal to NULL, it will generate a fatal error about there being no current thread. The net effect is that an out-of-memory situation can result in a misleading fatal error message about no current thread. Perhaps, I should not be calling PyErr_NoMemory() in this situation, but after all that is what the routine is for. Another alternative would be for the Python source code not to call PyCore_MALLOC_FUNC from PyOS_Readline at all, but instead to call it from PyOS_StdioReadline just before the call to fgets(). Follow-Ups: Date: 2000-Dec-14 07:09 By: gvanrossum Comment: Thanks for the report. I'm checking in fixes for range and slice. Note that PyObject_Init() checks for a NULL argument, so it is safe already -- but PyRange_New() and PySlice_New() should not dereference the result if it i NULL! ------------------------------------------------------- Date: 2000-Dec-14 05:44 By: adeutsch Comment: I just realized that I missed a layer in my previous comment, PyRange_New and PySlice_New both call PyObject_NEW which is defined in terms of PyObject_Init(PyObject_MALLOC(...)). 
Here is the source code for both: PyObject * PyRange_New(long start, long len, long step, int reps) { rangeobject *obj = PyObject_NEW(rangeobject, &PyRange_Type); obj->start = start; obj->len = len; obj->step = step; obj->reps = reps; return (PyObject *) obj; } PyObject * PySlice_New(PyObject *start, PyObject *stop, PyObject *step) { PySliceObject *obj = PyObject_NEW(PySliceObject, &PySlice_Type); if (step == NULL) step = Py_None; Py_INCREF(step); if (start == NULL) start = Py_None; Py_INCREF(start); if (stop == NULL) stop = Py_None; Py_INCREF(stop); obj->step = step; obj->start = start; obj->stop = stop; return (PyObject *) obj; } ------------------------------------------------------- Date: 2000-Dec-14 05:37 By: adeutsch Comment: It is certainly possible that I am misguided. The reason I got into the whole issue of trapping on out-of-memory errors was that I found that on some occasions Python was not doing so and, as a result, was locking up our unit. I just redownloaded the version 2.0 source code and found that in PyRange_New and PySlice_New the pointer returned by PyObject_Init is dereferenced and used without first checking to ensure that it is not NULL. ------------------------------------------------------- Date: 2000-Dec-13 13:47 By: tim_one Comment: Unclear why you defined PyCore_MALLOC_FUNC: Python internals always check for a null malloc return already. If you bumped into a case where Python didn't, it's a Python bug. ------------------------------------------------------- Date: 2000-Dec-13 13:14 By: gvanrossum Comment: I believe you are misguided. Python already checks the return value from malloc(), and calls PyErr_NoMemory(). If you find an instance of malloc() or realloc() that is not properly checked, I'd like to hear about it -- *that* would be a bug worth reporting. So you shouldn't be calling PyErr_NoMemory() from inside your d_malloc() function. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125719&group_id=5470 From noreply@sourceforge.net Thu Dec 14 17:40:36 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 09:40:36 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: montanaro Assigned to : nobody Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. 
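As the follow-ups below spell out, the root of the trouble is that the replacement str() no longer satisfies str(s) == s for plain strings, a property library code quietly relies on. A sketch of a pretty-printing replacement that at least preserves that identity (illustration only, not a recommendation to replace builtins):

import pprint, types, __builtin__

_builtin_str = __builtin__.str

class Writer:
    def __init__(self):
        self.pp = pprint.PrettyPrinter()
    def str(self, obj):
        # Keep str(s) == s for plain strings so library code that calls str()
        # on strings keeps working; pretty-print everything else.
        if type(obj) is types.StringType:
            return _builtin_str(obj)
        return self.pp.pformat(obj)

# __builtin__.str = Writer().str    # still risky; shown only for illustration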
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From noreply@sourceforge.net Thu Dec 14 17:58:54 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 09:58:54 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: montanaro Assigned to : gvanrossum Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. Follow-Ups: Date: 2000-Dec-14 09:58 By: gvanrossum Comment: That'll teach you to mess with builtins! Your str() implementation does not preserve the important property that for any string s, str(s)==s. Your str() behaves more like repr(): >>> # Builtin str >>> str('abc') 'abc' >>> # Your str >>> str('abc') "'abc'" >>> ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From noreply@sourceforge.net Thu Dec 14 18:04:49 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 10:04:49 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: montanaro Assigned to : effbot Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. Follow-Ups: Date: 2000-Dec-14 10:04 By: tim_one Comment: Assigned to Fredrik, but I gotta say I've got little sympathy, Skip -- given how much of Python's libraries are written in Python, *of course* you'll break things if you replace __builtin__ functions. In particular, the function _class_escape in sre_parse.py ("handle escape code inside character class") uses the builtin str(). Perhaps /F can think of an easy way to use some other method there, but you're playing with fire regardless. ------------------------------------------------------- Date: 2000-Dec-14 09:58 By: gvanrossum Comment: That'll teach you to mess with builtins! 
Your str() implementation does not preserve the important property that for any string s, str(s)==s. Your str() behaves more like repr(): >>> # Builtin str >>> str('abc') 'abc' >>> # Your str >>> str('abc') "'abc'" >>> ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From noreply@sourceforge.net Thu Dec 14 18:07:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 10:07:30 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: montanaro Assigned to : nobody Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. Follow-Ups: Date: 2000-Dec-14 10:07 By: gvanrossum Comment: Closed again (Tim's update somehow reopened it). ------------------------------------------------------- Date: 2000-Dec-14 10:04 By: tim_one Comment: Assigned to Fredrik, but I gotta say I've got little sympathy, Skip -- given how much of Python's libraries are written in Python, *of course* you'll break things if you replace __builtin__ functions. In particular, the function _class_escape in sre_parse.py ("handle escape code inside character class") uses the builtin str(). Perhaps /F can think of an easy way to use some other method there, but you're playing with fire regardless. ------------------------------------------------------- Date: 2000-Dec-14 09:58 By: gvanrossum Comment: That'll teach you to mess with builtins! Your str() implementation does not preserve the important property that for any string s, str(s)==s. Your str() behaves more like repr(): >>> # Builtin str >>> str('abc') 'abc' >>> # Your str >>> str('abc') "'abc'" >>> ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From noreply@sourceforge.net Thu Dec 14 18:09:25 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 10:09:25 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: montanaro Assigned to : gvanrossum Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. 
Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. Follow-Ups: Date: 2000-Dec-14 10:09 By: tim_one Comment: Oops! Guido & I added comments at the same time, but he committed 6 seconds(!) before I did, so my screen undid his. Undoing mine, to restore his. Let's hope he's not doing the same thing at the same time <0.9 wink>. ------------------------------------------------------- Date: 2000-Dec-14 10:07 By: gvanrossum Comment: Closed again (Tim's update somehow reopened it). ------------------------------------------------------- Date: 2000-Dec-14 10:04 By: tim_one Comment: Assigned to Fredrik, but I gotta say I've got little sympathy, Skip -- given how much of Python's libraries are written in Python, *of course* you'll break things if you replace __builtin__ functions. In particular, the function _class_escape in sre_parse.py ("handle escape code inside character class") uses the builtin str(). Perhaps /F can think of an easy way to use some other method there, but you're playing with fire regardless. ------------------------------------------------------- Date: 2000-Dec-14 09:58 By: gvanrossum Comment: That'll teach you to mess with builtins! Your str() implementation does not preserve the important property that for any string s, str(s)==s. Your str() behaves more like repr(): >>> # Builtin str >>> str('abc') 'abc' >>> # Your str >>> str('abc') "'abc'" >>> ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From noreply@sourceforge.net Thu Dec 14 18:11:02 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 10:11:02 -0800 Subject: [Python-bugs-list] [Bug #125808] Fiddling builtin str flips out re.sub Message-ID: Bug #125808, was updated on 2000-Dec-14 09:40 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: montanaro Assigned to : gvanrossum Summary: Fiddling builtin str flips out re.sub Details: Something about replacing __builtin__.str seems to cause re.sub to fail when trying to replace control characters in a string. Given the following PYTHONSTARTUP file: import pprint, __builtin__ class Writer: def __init__(self): self.pp = pprint.PrettyPrinter() def str(self, obj): return self.pp.pformat(obj) __builtin__.str = Writer().str executing the following at an interactive Python prompt: import re ; re.sub(r"[\000-\037\177]", "", "\000") causes the substitution to fail. This is with a version of Python compiled from the latest CVS tree. It also fails with 2.0c1, but not 1.5.2. Follow-Ups: Date: 2000-Dec-14 10:11 By: tim_one Comment: Heh -- that time he beat me by 2(!) seconds. ------------------------------------------------------- Date: 2000-Dec-14 10:09 By: tim_one Comment: Oops! Guido & I added comments at the same time, but he committed 6 seconds(!) before I did, so my screen undid his. Undoing mine, to restore his. Let's hope he's not doing the same thing at the same time <0.9 wink>. 
------------------------------------------------------- Date: 2000-Dec-14 10:07 By: gvanrossum Comment: Closed again (Tim's update somehow reopened it). ------------------------------------------------------- Date: 2000-Dec-14 10:04 By: tim_one Comment: Assigned to Fredrik, but I gotta say I've got little sympathy, Skip -- given how much of Python's libraries are written in Python, *of course* you'll break things if you replace __builtin__ functions. In particular, the function _class_escape in sre_parse.py ("handle escape code inside character class") uses the builtin str(). Perhaps /F can think of an easy way to use some other method there, but you're playing with fire regardless. ------------------------------------------------------- Date: 2000-Dec-14 09:58 By: gvanrossum Comment: That'll teach you to mess with builtins! Your str() implementation does not preserve the important property that for any string s, str(s)==s. Your str() behaves more like repr(): >>> # Builtin str >>> str('abc') 'abc' >>> # Your str >>> str('abc') "'abc'" >>> ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125808&group_id=5470 From skip@mojam.com (Skip Montanaro) Thu Dec 14 20:52:19 2000 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Thu, 14 Dec 2000 14:52:19 -0600 (CST) Subject: [Python-bugs-list] Re: [Bug #125808] Fiddling builtin str flips out re.sub In-Reply-To: References: Message-ID: <14905.13059.900376.551043@beluga.mojam.com> Guido> Comment: That'll teach you to mess with builtins! Guido> Your str() implementation does not preserve the important Guido> property that for any string s, str(s)==s. Your str() behaves Guido> more like repr(): >>>> # Builtin str >>>> str('abc') Guido> 'abc' >>>> # Your str >>>> str('abc') Guido> "'abc'" >>>> Okay, so my str is broken. That still doesn't explain (to me, maybe it's obvious to everyone else) why re.sub isn't working. I can understand it affecting the output of the result (that was my desire in overriding str in an interactive setting), but why is it affecting the input to re.sub? Skip From skip@mojam.com (Skip Montanaro) Thu Dec 14 20:54:11 2000 From: skip@mojam.com (Skip Montanaro) (Skip Montanaro) Date: Thu, 14 Dec 2000 14:54:11 -0600 (CST) Subject: [Python-bugs-list] Re: [Bug #125808] Fiddling builtin str flips out re.sub In-Reply-To: References: Message-ID: <14905.13171.569377.231619@beluga.mojam.com> That still doesn't explain (to me, maybe it's obvious to everyone else) why re.sub isn't working. Oops. I missed Tim's comment. Skip From noreply@sourceforge.net Fri Dec 15 01:00:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 17:00:26 -0800 Subject: [Python-bugs-list] [Bug #119558] bsddb module doesn't check return value of malloc() Message-ID: Bug #119558, was updated on 2000-Oct-27 10:46 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 4 Submitted by: akuchling Assigned to : akuchling Summary: bsddb module doesn't check return value of malloc() Details: The bsddbmodule often uses code like this: if (krec.size > sizeof(buf)) data = malloc(krec.size); else data = buf; memcpy(data,krec.data,krec.size); If malloc() returns NULL, this will do a memcpy() using NULL as the dest point. 
Follow-Ups: Date: 2000-Dec-14 17:00 By: akuchling Comment: Fixed by patch #102827 ------------------------------------------------------- Date: 2000-Oct-31 11:10 By: akuchling Comment: I'll take care of it, though I'm not sure when I'll get around to it. It'll make a good evening's project. ------------------------------------------------------- Date: 2000-Oct-29 09:55 By: jhylton Comment: Do you want to fix it? Or should we wait for your new module? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119558&group_id=5470 From noreply@sourceforge.net Fri Dec 15 03:28:53 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 19:28:53 -0800 Subject: [Python-bugs-list] [Bug #125860] Kill the hard disk Message-ID: Bug #125860, was updated on 2000-Dec-14 19:28 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: 3rd Party Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Kill the hard disk Details: It can kill the computer slowly and step to step, first it broken the hard disk and then kill the program until the computer can't work For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125860&group_id=5470 From noreply@sourceforge.net Fri Dec 15 03:46:16 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Dec 2000 19:46:16 -0800 Subject: [Python-bugs-list] [Bug #125860] Kill the hard disk Message-ID: Bug #125860, was updated on 2000-Dec-14 19:28 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Kill the hard disk Details: It can kill the computer slowly and step to step, first it broken the hard disk and then kill the program until the computer can't work Follow-Ups: Date: 2000-Dec-14 19:46 By: tim_one Comment: An anonymous complaint that doesn't make sense isn't worth keeping around. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125860&group_id=5470 From noreply@sourceforge.net Fri Dec 15 10:07:35 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 02:07:35 -0800 Subject: [Python-bugs-list] [Bug #125880] TeX source found in PDF contents list Message-ID: Bug #125880, was updated on 2000-Dec-15 02:07 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: TeX source found in PDF contents list Details: Hello there The 'ext.pdf' document for 2.0 I downloaded from python.org Has some TeX source spilling out in the contents window. Section 1.9 says The Pyprotect unhbox voidb @x kern... instead of 'The Py_BuildValue() Function' (It's OK in the main window title) Regards Jon Nicoll (jkn@nicorp.f9.co.uk) For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125880&group_id=5470 From noreply@sourceforge.net Fri Dec 15 13:53:33 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 05:53:33 -0800 Subject: [Python-bugs-list] [Bug #125891] windows popen4 crashes python when not closed correctly Message-ID: Bug #125891, was updated on 2000-Dec-15 05:53 Here is a current snapshot of the bug. 
Project: Python Category: Windows Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: aisaksen Assigned to : nobody Summary: windows popen4 crashes python when not closed correctly Details: If you don't close both the istream file and ostream file return values after calling popen4, then it crashes somewhere in MSVCRT.DLL. Try the code included in this file. If you call Crash(), then python will crash after about 500 times through the loop. NoCrash() works ok, because you close both of the results. This bug happens on both the www.python.org release, as well as the ActivePython build. I'm running Windows 2000, with Visual Studio 6.0 installed. This seems to be a Windows bug. It dies in a call to setvbuf. Recompiling with HAS_SETVBUF undefined still causes the same crash. It would be nice if python prevented this from happening. Ideally, you should be able to close the pipes, because there is no longer a reference to them. -Aaron Isaksen
-- begin code --
import os

def Crash():
    n = 0
    while 1:
        p = os.popen4('dir')
        p[0].close()
        n += 1
        print n

def NoCrash():
    n = 0
    while 1:
        p = os.popen4('dir')
        p[0].close()
        p[1].close()
        n += 1
        print n
-- end code --
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125891&group_id=5470 From noreply@sourceforge.net Fri Dec 15 19:02:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 11:02:45 -0800 Subject: [Python-bugs-list] [Bug #125919] random.shuffle isn't documented Message-ID: Bug #125919, was updated on 2000-Dec-15 11:02 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 7 Submitted by: tim_one Assigned to : fdrake Summary: random.shuffle isn't documented Details: From the docstring: """ x, random=random.random -> shuffle list x in place; return None. Optional arg random is a 0-argument function returning a random float in [0.0, 1.0); by default, the standard random.random. Note that for even rather small len(x), the total number of permutations of x is larger than the period of most random number generators; this implies that "most" permutations of a long sequence can never be generated. """ I would have added this myself to the docs, but don't understand the structure of the docs; e.g., I always thought whrandom was an internal implementation detail for random that wasn't meant to be exposed on its own, and the "Random Number Generator Interface" appears to be a half-baked Grand Generalization that was abandoned the day after it got dreamt up. Suggest the docs in this area would be much clearer if they documented the random module on its own, and dropped the sections on whrandom and the RNGI. Python's randomization facilities are too meager to merit so much complexity. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125919&group_id=5470 From noreply@sourceforge.net Fri Dec 15 19:16:00 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 11:16:00 -0800 Subject: [Python-bugs-list] [Bug #125919] docs for random, whrandom too complex Message-ID: Bug #125919, was updated on 2000-Dec-15 11:02 Here is a current snapshot of the bug.
Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 7 Submitted by: tim_one Assigned to : fdrake Summary: docs for random, whrandom too complex Details: From the docstring: """ x, random=random.random -> shuffle list x in place; return None. Optional arg random is a 0-argument function returning a random float in [0.0, 1.0); by default, the standard random.random. Note that for even rather small len(x), the total number of permutations of x is larger than the period of most random number generators; this implies that "most" permutations of a long sequence can never be generated. """ I would have added this myself to the docs, but don't understand the structure of the docs; e.g., I always thought whrandom was an internal implementation detail for random that wasn't meant to be exposed on its own, and the "Random Number Generator Interface" appears to be a half-baked Grand Generalization that was abandoned the day after it got dreamt up. Suggest the docs in this area would be much clearer if they documented the random module on its own, and dropped the sections on whrandom and the RNGI. Python's randomization facilities are too meager to merit so much complexity. Follow-Ups: Date: 2000-Dec-15 11:16 By: fdrake Comment: random.shuffle() was documented before I got the bug notice for this one! Your other comments still need to be dealt with, so I'll leave this open, but re-title it. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125919&group_id=5470 From noreply@sourceforge.net Fri Dec 15 21:36:41 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 13:36:41 -0800 Subject: [Python-bugs-list] [Bug #124344] smtplib quoteaddr() has problems with RFC821 source routing Message-ID: Bug #124344, was updated on 2000-Dec-04 02:50 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: carey Assigned to : bwarsaw Summary: smtplib quoteaddr() has problems with RFC821 source routing Details: RFC821 defines source routed SMTP addresses of the form <@USC-ISIE.ARPA:JQP@MIT-AI.ARPA>. RFC1123 (STD3) deprecates these kinds of addresses, but does not forbid them. If an address like this is passed to smtplib.quoteaddr(), the result is '<@USC-ISIE.ARPA>', which is useless, and illegal according to RFC821. smtplib should probably leave the source routing there, assuming anyone using an address like this knows what they're doing, and since any SMTP server "MUST" still accept this syntax. Alternatively, smtplib could just refuse to deliver to an address like this, with some justification. (RFC1123 section 5.2.19.) In any case, this isn't very important at all. I'll probably write a patch when I have some time, using one of the two solutions outlined above. Follow-Ups: Date: 2000-Dec-15 13:36 By: nobody Comment: it's not just source routed addresses -- it's any address with more than one @ sign. here's another case that needs fixing. >>> quoteaddr('"/dd.NOTES=CN$=Claudio Alves$/OU$=RioJaneiro$/O$=ErnstYoung$/C$=BR@EYI-AMERICAS/"@ah01.uk.eyi.com') '' >>> quoteaddr(quoteaddr('"/dd.NOTES=CN$=Claudio Alves$/OU$=RioJaneiro$/O$=ErnstYoung$/C$=BR@EYI-AMERICAS/"@ah01.uk.eyi.com')) '' ------------------------------------------------------- Date: 2000-Dec-06 11:58 By: fdrake Comment: Assigned to the mail guy. 
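For reference, the interface the quoted docstring describes comes down to the following; the second call passes a deliberately crude, deterministic stand-in for the optional 0-argument random function (a toy, not a usable generator):

import random

deck = range(10)
random.shuffle(deck)               # shuffles the list in place, returns None
print deck

_state = [123456789L]
def toy_random():
    # Tiny linear congruential generator returning floats in [0.0, 1.0).
    _state[0] = (1103515245L * _state[0] + 12345L) % (2L ** 31)
    return _state[0] / float(2L ** 31)

deck = range(10)
random.shuffle(deck, toy_random)   # same list, deterministic order this time
print deck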
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124344&group_id=5470 From noreply@sourceforge.net Fri Dec 15 22:25:00 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Dec 2000 14:25:00 -0800 Subject: [Python-bugs-list] [Bug #125933] warnings framework documentation Message-ID: Bug #125933, was updated on 2000-Dec-15 14:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: Feature Request Priority: 7 Submitted by: fdrake Assigned to : gvanrossum Summary: warnings framework documentation Details: The PyWarn_*() APIs need to be documented: Doc/api/api.tex. The command line parameters need to be documented: Misc/python.man. The Python module needs to be documented: Doc/lib/libwarnings.tex (new file to create). For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125933&group_id=5470 From noreply@sourceforge.net Sat Dec 16 15:53:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 07:53:06 -0800 Subject: [Python-bugs-list] [Bug #125981] socket close is not thread safe Message-ID: Bug #125981, was updated on 2000-Dec-16 07:53 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: barry-scott Assigned to : nobody Summary: socket close is not thread safe Details: Patch 102875 contains a fix for this problem. I have been seeing random failures of my BaseHttpServer based web server to serve pages. I finally tracked this down to socket.close() being called twice on the same socket fd. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125981&group_id=5470 From noreply@sourceforge.net Sat Dec 16 17:22:32 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 09:22:32 -0800 Subject: [Python-bugs-list] [Bug #125989] cmp() broken on instances Message-ID: Bug #125989, was updated on 2000-Dec-16 09:22 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nascheme Assigned to : nobody Summary: cmp() broken on instances Details: I found this while working on the implementation for PEP 208. Python 1.5.2 and Python 2.0 both give the same behavior (tested on Linux and Solaris). class CoerceNumber: def __init__(self, arg): self.arg = arg def __coerce__(self, other): if isinstance(other, CoerceNumber): return self.arg, other.arg else: return self.arg, other class MethodNumber: def __init__(self,arg): self.arg = arg def __cmp__(self, other): return cmp(self.arg, other) # the order of instantiation matters! m = MethodNumber(1) c = CoerceNumber(2) print "cmp(,) =", cmp(m, c) print "cmp(,) =", cmp(c, m) Randomly assigned to Tim. I don't have time to figure out what's happening right now. Tim, feel free to ignore. It should be fixed by PEP 208 anyhow. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125989&group_id=5470 From noreply@sourceforge.net Sat Dec 16 17:24:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 09:24:44 -0800 Subject: [Python-bugs-list] [Bug #125989] cmp() broken on instances Message-ID: Bug #125989, was updated on 2000-Dec-16 09:22 Here is a current snapshot of the bug. 
Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: nascheme Assigned to : tim_one Summary: cmp() broken on instances Details: I found this while working on the implementation for PEP 208. Python 1.5.2 and Python 2.0 both give the same behavior (tested on Linux and Solaris). class CoerceNumber: def __init__(self, arg): self.arg = arg def __coerce__(self, other): if isinstance(other, CoerceNumber): return self.arg, other.arg else: return self.arg, other class MethodNumber: def __init__(self,arg): self.arg = arg def __cmp__(self, other): return cmp(self.arg, other) # the order of instantiation matters! m = MethodNumber(1) c = CoerceNumber(2) print "cmp(,) =", cmp(m, c) print "cmp(,) =", cmp(c, m) Randomly assigned to Tim. I don't have time to figure out what's happening right now. Tim, feel free to ignore. It should be fixed by PEP 208 anyhow. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125989&group_id=5470 From noreply@sourceforge.net Sat Dec 16 19:14:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 11:14:20 -0800 Subject: [Python-bugs-list] [Bug #125989] cmp() broken on instances Message-ID: Bug #125989, was updated on 2000-Dec-16 09:22 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 3 Submitted by: nascheme Assigned to : tim_one Summary: cmp() broken on instances Details: I found this while working on the implementation for PEP 208. Python 1.5.2 and Python 2.0 both give the same behavior (tested on Linux and Solaris). class CoerceNumber: def __init__(self, arg): self.arg = arg def __coerce__(self, other): if isinstance(other, CoerceNumber): return self.arg, other.arg else: return self.arg, other class MethodNumber: def __init__(self,arg): self.arg = arg def __cmp__(self, other): return cmp(self.arg, other) # the order of instantiation matters! m = MethodNumber(1) c = CoerceNumber(2) print "cmp(,) =", cmp(m, c) print "cmp(,) =", cmp(c, m) Randomly assigned to Tim. I don't have time to figure out what's happening right now. Tim, feel free to ignore. It should be fixed by PEP 208 anyhow. Follow-Ups: Date: 2000-Dec-16 11:14 By: tim_one Comment: My belief: the behavior of cmp(m, c) is well-defined, but the behavior of cmp(c, m) is not because the __coerce__ function in that case breaks the rules. Here's an excruciating explanation, referring to the Lang Ref's coercion rules at the bottom of: http://www.python.org/doc/current/ref/numeric-types.html cmp(m, c) m is an instance that doesn't define __coerce__ but does define __cmp__, so (rule 1c) m.__cmp__(c) is evaluated. That in turn evaluates cmp(m.arg, c) == cmp(1, c). By rule 2a, in cmp(1, c) the arguments are replaced by the (swapped!) result of c.__coerce__(1) == the swap of (c.arg, 1) == the swap of (2, 1) == (1, 2). cmp(1, 2) then returns -1, by rule 3c. cmp(c, m) By rule 1a, the args are replaced by c.__coerce__(m) == (c.arg, m) = (2, m). Note that this breaks the rules: __coerce__ is *supposed* to coerce to a common type, or return None. It did neither here. Pushing on anyway, 2 does not have a method __cmp__, so by rule 1c we ignore everything we've done and go on to step 2, but starting over with args (c, m) again. None of the rules in step 2 apply (m doesn't have a __coerce__ or an __rcmp__), so we fall into step 3. 
Now "we only get here if neither x nor y is a class instance" is false, because c.__coerce__ broke the rules. I suspect that what happens then is that the objects get compared by storage address, perhaps by the "/* Sigh -- special case for comparisons */" code in PyInstance_DoBinOp. And that's why the result of cmp(c, m) varies between -1 and +1, but is never 0. If that's all correct, then the code is functioning as documented, so there's no bug. You could argue there's a design error -- although there are so many steps to consider it's hard to say exactly where . ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125989&group_id=5470 From noreply@sourceforge.net Sun Dec 17 05:05:23 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 21:05:23 -0800 Subject: [Python-bugs-list] [Bug #126034] xml.sax.handler.ErrorHandler not documented Message-ID: Bug #126034, was updated on 2000-Dec-16 21:05 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: Feature Request Priority: 6 Submitted by: fdrake Assigned to : fdrake Summary: xml.sax.handler.ErrorHandler not documented Details: The SAX2 ErrorHandler interface needs to be described in Doc/lib/xmlsaxhandler.tex. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126034&group_id=5470 From noreply@sourceforge.net Sun Dec 17 06:17:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Dec 2000 22:17:57 -0800 Subject: [Python-bugs-list] [Bug #120983] python2.0 dumps core in gc_list_remove Message-ID: Bug #120983, was updated on 2000-Nov-01 01:17 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: None Bug Group: None Priority: 3 Submitted by: ephedra Assigned to : nascheme Summary: python2.0 dumps core in gc_list_remove Details: Source: downloaded from http://www.python.org OS: freebsd 4.1 Compilation options: default (does not occur when compiled with --without-cycle-gc) Observed while: running Zope2-cvs. I have not tested this on other operating systems, but it seems reproducible, if intermittent on freebsd. I will keep the binary and the corefile in case further information is needed. 
some information extracted with gdb: #0 0x80a8ffb in gc_list_remove (node=0x89eab40) at ./gcmodule.c:88 ---Type to continue, or q to quit--- 88 node->gc_next->gc_prev = node->gc_prev; (gdb) p node $1 = (struct _gc_head *) 0x89eab40 (gdb) p node->gc_next $2 = (struct _gc_head *) 0x0 #0 0x80a8ffb in gc_list_remove (node=0x89eab40) at ./gcmodule.c:88 #1 0x80a9ac3 in _PyGC_Remove (op=0x89eab40) at ./gcmodule.c:523 #2 0x807e01d in instance_dealloc (inst=0x89eab4c) at classobject.c:552 #3 0x808ea46 in insertdict (mp=0x89f004c, key=0x89e3ba8, hash=134733596, value=0x8064d13) at dictobject.c:343 #4 0x808ee01 in PyDict_SetItem (op=0x89f004c, key=0x89e3ba8, value=0x807df1c) at dictobject.c:477 #5 0x2836e33c in subclass_simple_setattro (self=0x89ea900, name=0x8835760, v=0x89ead6c) at ./../Components/ExtensionClass/ExtensionClass.c:2174 #6 0x283914cc in _setattro (self=0x89ea900, oname=0x8835760, v=0x89ead6c, setattrf=0x2836e2cc ) at ./cPersistence.c:661 #7 0x283915d0 in Per_setattro (self=0x89ea900, oname=0x8835760, v=0x89ead6c) at ./cPersistence.c:701 #8 0x80926c5 in PyObject_SetAttr (v=0x89eab40, name=0x89e3ba8, value=0x807df1c) at object.c:767 #9 0x283ae5df in Wrapper_setattro (self=0x8856f70, oname=0x8835760, v=0x89ead6c) at ./../Components/ExtensionClass/Acquisition.c:600 ... (gdb) up #1 0x80a9ac3 in _PyGC_Remove (op=0x89eab40) at ./gcmodule.c:523 523 gc_list_remove(g); (gdb) p *g $4 = {gc_next = 0xc, gc_prev = 0x80db600, gc_refs = 7} (gdb) up #2 0x807e01d in instance_dealloc (inst=0x89eab4c) at classobject.c:552 552 PyObject_GC_Fini(inst); (gdb) p *inst $6 = {ob_refcnt = 0, ob_type = 0x80d89e0, in_class = 0x88a790c, in_dict = 0x89eeccc} (gdb) p *inst->ob_type $7 = {ob_refcnt = 10, ob_type = 0x80db740, ob_size = 0, tp_name = 0x80cb646 "instance", tp_basicsize = 28, tp_itemsize = 0, tp_dealloc = 0x807df1c , tp_print = 0, tp_getattr = 0, tp_setattr = 0, tp_compare = 0x807e860 , tp_repr = 0x807e690 , tp_as_number = 0x80d8940, tp_as_sequence = 0x80d8900, tp_as_mapping = 0x80d88ec, tp_hash = 0x807e93c , tp_call = 0, tp_str = 0, tp_getattro = 0x807e278 , tp_setattro = 0x807e388 , tp_as_buffer = 0x0, tp_flags = 15, tp_doc = 0x0, tp_traverse = 0x807ead4 , tp_clear = 0, tp_xxx7 = 0, tp_xxx8 = 0} (gdb) p *inst->in_class $8 = {ob_refcnt = 4, ob_type = 0x80d8880, cl_bases = 0x80fbcac, cl_dict = 0x88a794c, cl_name = 0x88a54c0, cl_getattr = 0x0, cl_setattr = 0x0, cl_delattr = 0x0} Follow-Ups: Date: 2000-Dec-16 22:17 By: nascheme Comment: I'm closing this bug. The core dump is likely caused by one of Zope's extension modules. The stack trace doesn't tell me much. The list of container objects has obviously been corrupted but its unlikely that the functions on the stack are responsible. I'll wait for more people to complain before digging deeper. ------------------------------------------------------- Date: 2000-Dec-13 08:21 By: gvanrossum Comment: Neil, this is the only complaint about this. It may well be a user error. Try direct mail to the submitter; if he doesn't reply or doesn't provide new information, you can close the bug report. ------------------------------------------------------- Date: 2000-Nov-17 05:54 By: nascheme Comment: Tobias, is this core dump still occuring? If it is, can you provide some details on how to reproduce it? ------------------------------------------------------- Date: 2000-Nov-01 07:57 By: jhylton Comment: >From a cursory glance, I would guess this is a problem with the extension classes used by Zope, not with the garbage collector. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=120983&group_id=5470 From noreply@sourceforge.net Mon Dec 18 08:52:54 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 00:52:54 -0800 Subject: [Python-bugs-list] [Bug #126161] pickling the string u'\\u' is impossible in Python 2.0 Message-ID: Bug #126161, was updated on 2000-Dec-18 00:52 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: pickling the string u'\\u' is impossible in Python 2.0 Details: >>> import pickle >>> s = unicode('\u') >>> f = open("aaaaa.a", "w") >>> pickle.dump(s, f) >>> f.close() >>> f = open("aaaaa.a", "r") >>> s = pickle.load(f) Traceback (most recent call last): File "", line 1, in ? File "d:\python20\lib\pickle.py", line 901, in load return Unpickler(file).load() File "d:\python20\lib\pickle.py", line 516, in load dispatch[key](self) File "d:\python20\lib\pickle.py", line 630, in load_unicode self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) UnicodeError: Unicode-Escape decoding error: truncated \uXXXX >>> For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126161&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:04:11 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:04:11 -0800 Subject: [Python-bugs-list] [Bug #125891] windows popen4 crashes python when not closed correctly Message-ID: Bug #125891, was updated on 2000-Dec-15 05:53 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: aisaksen Assigned to : tim_one Summary: windows popen4 crashes python when not closed correctly Details: If you don't close both the istream file and ostream file return values after calling popen4, then it crashes in somewhere in MSVCRT.DLL Try the code included in this file. If you call Crash(), then python will crash after about 500 times through the loop. NoCrash() works ok, because you close both of the results. This bug happens on both the www.python.org release, as well as the ActivePython build. I'm running Windows 2000, with Visual Studio 6.0 installed. This seems to be a Windows bug. It dies in a call to setvbuf. Recompiling with the HAS_SETVBUF undefined still causes the same crash. It would be nice if python prevented this from happening. Ideally, you should be able to close the pipes, because there is no longer a reference to them. -Aaron Isaksen -- begin code -- import os def Crash(): n = 0 while 1: p = os.popen4('dir') p[0].close() n +=1 print n def NoCrash(): n = 0 while 1: p = os.popen4('dir') p[0].close() p[1].close() n +=1 print n -- end code -- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125891&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:04:23 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:04:23 -0800 Subject: [Python-bugs-list] [Bug #125880] TeX source found in PDF contents list Message-ID: Bug #125880, was updated on 2000-Dec-15 02:07 Here is a current snapshot of the bug. 
Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : fdrake Summary: TeX source found in PDF contents list Details: Hello there The 'ext.pdf' document for 2.0 I downloaded from python.org Has some TeX source spilling out in the contents window. Section 1.9 says The Pyprotect unhbox voidb @x kern... instead of 'The Py_BuildValue() Function' (It's OK in the main window title) Regards Jon Nicoll (jkn@nicorp.f9.co.uk) For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125880&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:05:10 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:05:10 -0800 Subject: [Python-bugs-list] [Bug #125981] socket close is not thread safe Message-ID: Bug #125981, was updated on 2000-Dec-16 07:53 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: barry-scott Assigned to : gvanrossum Summary: socket close is not thread safe Details: Patch 102875 contains a fix for this problem. I have been seeing random failures of my BaseHttpServer based web server to serve pages. I finally tracked this down to socket.close() being called twice on the same socket fd. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125981&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:05:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:05:20 -0800 Subject: [Python-bugs-list] [Bug #126161] pickling the string u'\\u' is impossible in Python 2.0 Message-ID: Bug #126161, was updated on 2000-Dec-18 00:52 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: pickling the string u'\\u' is impossible in Python 2.0 Details: >>> import pickle >>> s = unicode('\u') >>> f = open("aaaaa.a", "w") >>> pickle.dump(s, f) >>> f.close() >>> f = open("aaaaa.a", "r") >>> s = pickle.load(f) Traceback (most recent call last): File "", line 1, in ? File "d:\python20\lib\pickle.py", line 901, in load return Unpickler(file).load() File "d:\python20\lib\pickle.py", line 516, in load dispatch[key](self) File "d:\python20\lib\pickle.py", line 630, in load_unicode self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) UnicodeError: Unicode-Escape decoding error: truncated \uXXXX >>> For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126161&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:05:00 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:05:00 -0800 Subject: [Python-bugs-list] [Bug #125744] httplib does not check if port is valid (easy to fix?) Message-ID: Bug #125744, was updated on 2000-Dec-13 20:45 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 1 Submitted by: dealfaro Assigned to : gvanrossum Summary: httplib does not check if port is valid (easy to fix?) 
Details: In httplib.py, line 336, the following code appears:

    def _set_hostport(self, host, port):
        if port is None:
            i = string.find(host, ':')
            if i >= 0:
                port = int(host[i+1:])
                host = host[:i]
            else:
                port = self.default_port
        self.host = host
        self.port = port

This code breaks if the host string ends with ":", so that int("") is called. In the old (1.5.2) version of this module, the corresponding int() conversion used to be enclosed in a try/except pair:

    try:
        port = string.atoi(port)
    except string.atoi_error:
        raise socket.error, "nonnumeric port"

and this fixed the problem. Note BTW that now the error reported by int is "ValueError: invalid literal for int():" rather than the above string.atoi_error. I found this problem while downloading web pages, but unfortunately I cannot pinpoint which page caused the problem. Luca de Alfaro Follow-Ups: Date: 2000-Dec-14 06:37 By: gvanrossum Comment: The only effect is that it raises ValueError instead of socket.error. Where is this a problem? (Note that string.atoi_error is an alias for ValueError.) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125744&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:24:23 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:24:23 -0800 Subject: [Python-bugs-list] [Bug #125981] socket close is not thread safe Message-ID: Bug #125981, was updated on 2000-Dec-16 07:53 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: barry-scott Assigned to : gvanrossum Summary: socket close is not thread safe Details: Patch 102875 contains a fix for this problem. I have been seeing random failures of my BaseHttpServer based web server to serve pages. I finally tracked this down to socket.close() being called twice on the same socket fd. Follow-Ups: Date: 2000-Dec-18 14:24 By: gvanrossum Comment: Fixed in socketmodule.c rev. 1.130. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125981&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:25:07 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:25:07 -0800 Subject: [Python-bugs-list] [Bug #125744] httplib does not check if port is valid (easy to fix?) Message-ID: Bug #125744, was updated on 2000-Dec-13 20:45 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 1 Submitted by: dealfaro Assigned to : gvanrossum Summary: httplib does not check if port is valid (easy to fix?) Details: In httplib.py, line 336, the following code appears:

    def _set_hostport(self, host, port):
        if port is None:
            i = string.find(host, ':')
            if i >= 0:
                port = int(host[i+1:])
                host = host[:i]
            else:
                port = self.default_port
        self.host = host
        self.port = port

This code breaks if the host string ends with ":", so that int("") is called. In the old (1.5.2) version of this module, the corresponding int() conversion used to be enclosed in a try/except pair:

    try:
        port = string.atoi(port)
    except string.atoi_error:
        raise socket.error, "nonnumeric port"

and this fixed the problem. Note BTW that now the error reported by int is "ValueError: invalid literal for int():" rather than the above string.atoi_error.
I found this problem while downloading web pages, but unfortunately I cannot pinpoint which page caused the problem. Luca de Alfaro Follow-Ups: Date: 2000-Dec-18 14:25 By: dealfaro Comment: There are three (minor?) problems with raising ValueError. 1) Compatibility. I had some code for 1.5.2 that was trying to load web pages checking for various errors, and it was expecting this error to cause a socket error, not a value error. 2) Accuracy. ValueError can be caused by anything. The 'non-numeric port' error is much more informative. I don't want to catch ValueError, because it can be caused in too many situations. I also cannot check myself that the port is fine, because the port and the URL are often given by a redirect (errors 301 and 302, if I remember correctly). This in fact was the situation that caused the problem. Hence, my only real solution was to patch my version of httplib. 3) Style. I am somewhat new to Python, but I was under the impression that, stilistically, a ValueError was used to convey a situation that was the fault of the programmer, while other more specific errors were used for unexpected situations (communication, etc). Since the socket is the result of a URL redirection (errors 301 or 302), the programmer is not in a position to prevent this error by "better checking". Hence, I would consider a network-relted exception to be more appropriate here. But who am I to argue with the creator of Python? ;-) Luca ------------------------------------------------------- Date: 2000-Dec-14 06:37 By: gvanrossum Comment: The only effect is that it raises ValueError instead of socket.error. Where is this a problem? (Note that string.atoi_error is an alias for ValueError.) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125744&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:38:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:38:28 -0800 Subject: [Python-bugs-list] [Bug #125744] httplib does not check if port is valid (easy to fix?) Message-ID: Bug #125744, was updated on 2000-Dec-13 20:45 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: dealfaro Assigned to : jhylton Summary: httplib does not check if port is valid (easy to fix?) Details: In httplib.py, line 336, the following code appears: def _set_hostport(self, host, port): if port is None: i = string.find(host, ':') if i >= 0: port = int(host[i+1:]) host = host[:i] else: port = self.default_port self.host = host self.port = port Ths code breaks if the host string ends with ":", so that int("") is called. In the old (1.5.2) version of this module, the corresponding int () conversion used to be enclosed in a try/except pair: try: port = string.atoi(port) except string.atoi_error: raise socket.error, "nonnumeric port" and this fixed the problem. Note BTW that now the error reported by int is "ValueError: invalid literal for int():" rather than the above string.atoi_error. I found this problem while downloading web pages, but unfortunately I cannot pinpoint which page caused the problem. Luca de Alfaro Follow-Ups: Date: 2000-Dec-18 14:38 By: gvanrossum Comment: Thanks for explaining this more. I am surprised that a 301 redirect would give an invalid port -- but surely webmasters aren't perfect. 
:-) The argument that urllib.Urlopener.open() checks for socket.error but not for other errors is a good one. However I don't see the httplib.py code raising socket.error elsewhere. I'll ask Jeremy. The rest of the module seems to be using a totally different set of exceptions. On the other hand, it *can* raise socket.error, implicitly (when various socket calls are being made). ------------------------------------------------------- Date: 2000-Dec-18 14:25 By: dealfaro Comment: There are three (minor?) problems with raising ValueError. 1) Compatibility. I had some code for 1.5.2 that was trying to load web pages checking for various errors, and it was expecting this error to cause a socket error, not a value error. 2) Accuracy. ValueError can be caused by anything. The 'non-numeric port' error is much more informative. I don't want to catch ValueError, because it can be caused in too many situations. I also cannot check myself that the port is fine, because the port and the URL are often given by a redirect (errors 301 and 302, if I remember correctly). This in fact was the situation that caused the problem. Hence, my only real solution was to patch my version of httplib. 3) Style. I am somewhat new to Python, but I was under the impression that, stilistically, a ValueError was used to convey a situation that was the fault of the programmer, while other more specific errors were used for unexpected situations (communication, etc). Since the socket is the result of a URL redirection (errors 301 or 302), the programmer is not in a position to prevent this error by "better checking". Hence, I would consider a network-relted exception to be more appropriate here. But who am I to argue with the creator of Python? ;-) Luca ------------------------------------------------------- Date: 2000-Dec-14 06:37 By: gvanrossum Comment: The only effect is that it raises ValueError instead of socket.error. Where is this a problem? (Note that string.atoi_error is an alias for ValueError.) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125744&group_id=5470 From noreply@sourceforge.net Mon Dec 18 22:39:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 14:39:34 -0800 Subject: [Python-bugs-list] [Bug #125610] SuppReq: please elaborate on your email notif. requests Message-ID: Bug #125610, was updated on 2000-Dec-13 05:34 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: pfalcon Assigned to : gvanrossum Summary: SuppReq: please elaborate on your email notif. requests Details: We've got the task "Python requests" http://sourceforge.net/pm/task.php?func=detailtask&project_task_id=22577&group_id=1&group_project_id=2 . I believe bigdisk knows what that means but I think I could do that faster, so I'd like to have information from the original source. Please give specific examples how you want it to be. Thanks. Follow-Ups: Date: 2000-Dec-18 14:39 By: gvanrossum Comment: Closing this now -- send mail to guido@python.org if you need more help. ;-) ------------------------------------------------------- Date: 2000-Dec-13 08:27 By: gvanrossum Comment: One more thing: it would be really handy if there was a box *somewhere* (maybe in the left margin?) where you could type a bug_id or patch_id and click OK to go directly to the details page of that item. 
We all need this regularly, and we all use the hack of editing the URL in "Location" field of the browser. There's *got* to be a better way. :-) ------------------------------------------------------- Date: 2000-Dec-13 06:20 By: gvanrossum Comment: OK, I'll clarify. Note that this applies both to the patch and the bugs products. 1. Word wrap: the comments entered in the database for bugs & patches are often entered with a single very long line per paragraph. When the notification email is sent out, most Unix mail readers don't wrap words correctly. The request is to break any line that is longer than 79 characters in shorter pieces, the way e.g. ESC-q does in Emacs, or the fmt(1) program. 2. clickable submitter name: in the patch or bug details page, the submitter ("Submitted By" field) should be a hyperlink to the developer profile for that user (except if it is Nobody, of course). 3. mention what changed in the email: it would be nice if at the top of the notification email it said what caused the mail to be sent, e.g. "status changed from XXX to YYY" or "assiged to ZZZ" or "new comment added by XXX" or "new patch uploaded" or "priority changed to QQQ". If more than one field changed they should all be summarized. Hope this helps! Thanks for doing this. We love our SourceForge! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125610&group_id=5470 From noreply@sourceforge.net Tue Dec 19 01:29:31 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 17:29:31 -0800 Subject: [Python-bugs-list] [Bug #126161] pickling the string u'\\u' is impossible in Python 2.0 Message-ID: Bug #126161, was updated on 2000-Dec-18 00:52 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Open Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: pickling the string u'\\u' is impossible in Python 2.0 Details: >>> import pickle >>> s = unicode('\u') >>> f = open("aaaaa.a", "w") >>> pickle.dump(s, f) >>> f.close() >>> f = open("aaaaa.a", "r") >>> s = pickle.load(f) Traceback (most recent call last): File "", line 1, in ? File "d:\python20\lib\pickle.py", line 901, in load return Unpickler(file).load() File "d:\python20\lib\pickle.py", line 516, in load dispatch[key](self) File "d:\python20\lib\pickle.py", line 630, in load_unicode self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) UnicodeError: Unicode-Escape decoding error: truncated \uXXXX >>> Follow-Ups: Date: 2000-Dec-18 17:29 By: gvanrossum Comment: Fixed in pickle.py, CVS rev 1.41. Still need to to cPickle. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126161&group_id=5470 From noreply@sourceforge.net Tue Dec 19 01:50:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 17:50:28 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. 
Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Tue Dec 19 02:09:33 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 18:09:33 -0800 Subject: [Python-bugs-list] [Bug #126161] pickling the string u'\\u' is impossible in Python 2.0 Message-ID: Bug #126161, was updated on 2000-Dec-18 00:52 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: pickling the string u'\\u' is impossible in Python 2.0 Details: >>> import pickle >>> s = unicode('\u') >>> f = open("aaaaa.a", "w") >>> pickle.dump(s, f) >>> f.close() >>> f = open("aaaaa.a", "r") >>> s = pickle.load(f) Traceback (most recent call last): File "", line 1, in ? File "d:\python20\lib\pickle.py", line 901, in load return Unpickler(file).load() File "d:\python20\lib\pickle.py", line 516, in load dispatch[key](self) File "d:\python20\lib\pickle.py", line 630, in load_unicode self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) UnicodeError: Unicode-Escape decoding error: truncated \uXXXX >>> Follow-Ups: Date: 2000-Dec-18 18:09 By: gvanrossum Comment: Fixed in cPickle too. cPickle.c rev. 2.54. ------------------------------------------------------- Date: 2000-Dec-18 17:29 By: gvanrossum Comment: Fixed in pickle.py, CVS rev 1.41. Still need to to cPickle. 
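A minimal sketch of the codec asymmetry behind this report (not taken from the thread; it assumes the Python 2.0-era API, where text-mode pickle wrote Unicode values with the raw-unicode-escape codec and read them back with the same codec):

    # Sketch only: the raw-unicode-escape decoder treats a bare backslash-u
    # as the start of a \uXXXX escape, so the text-pickle round trip fails.
    s = u'\\' + u'u'                        # Unicode string: backslash, then "u"
    line = s.encode('raw-unicode-escape')   # what pickle wrote: the two bytes "\u"
    try:
        unicode(line, 'raw-unicode-escape') # what load_unicode() did when reading
    except UnicodeError, e:
        print 'round trip fails:', e        # truncated \uXXXX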
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126161&group_id=5470 From noreply@sourceforge.net Tue Dec 19 02:10:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 18:10:58 -0800 Subject: [Python-bugs-list] [Bug #123634] Pickle broken on Unicode strings Message-ID: Bug #123634, was updated on 2000-Nov-27 14:03 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: tlau Assigned to : gvanrossum Summary: Pickle broken on Unicode strings Details: Two one-liners that produce incorrect output: >>> cPickle.loads(cPickle.dumps(u'')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: pickle data was truncated >>> cPickle.loads(cPickle.dumps(u'\u03b1 alpha\n\u03b2 beta')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, '\'. The format of the Unicode string in the pickled representation is not escaped, as it is with regular strings. It should be. The latter bug occurs in both pickle and cPickle; the former is only a problem with cPickle. Follow-Ups: Date: 2000-Dec-18 18:10 By: gvanrossum Comment: Fixed in both pickle.py (rev. 1.41) and cPickle.py (rev. 2.54). I've also checked in tests for these and similar endcases. ------------------------------------------------------- Date: 2000-Nov-27 14:36 By: tlau Comment: One more comment: binary-format pickles are not affected, only text-format pickles. Thus the part of my patch that applies to the binary section of the save_unicode function should not be applied. ------------------------------------------------------- Date: 2000-Nov-27 14:35 By: lemburg Comment: Some background (no time to fix this myself): When I added the Unicode handlers, I wanted to avoid the problems that the string dump mechanism has with quoted strings. The encodings used either carry length information (in binary mode: UTF-8) or do not include the \n character (in ascii mode: raw-unicode-escape encoding). Unfortunately, the raw-unicode-escape codec does not escape the newline character which is used by pickle to break the input into tokens.... Proposed fix: change the encoding to "unicode-escape" which doesn't have this problem. 
This will break code, but only code that is already broken :-/ ------------------------------------------------------- Date: 2000-Nov-27 14:20 By: tlau Comment: Here's my proposed patch to Lib/pickle.py (cPickle should be changed similarly): --- /scratch/tlau/Python-2.0/Lib/pickle.py Mon Oct 16 14:49:51 2000 +++ pickle.py Mon Nov 27 14:07:01 2000 @@ -286,9 +286,9 @@ encoding = object.encode('utf-8') l = len(encoding) s = mdumps(l)[1:] - self.write(BINUNICODE + s + encoding) + self.write(BINUNICODE + `s` + encoding) else: - self.write(UNICODE + object.encode('raw-unicode-escape') + '\n') + self.write(UNICODE + `object.encode('raw-unicode-escape')` + '\n') memo_len = len(memo) self.write(self.put(memo_len)) @@ -627,7 +627,12 @@ dispatch[BINSTRING] = load_binstring def load_unicode(self): - self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) + rep = self.readline()[:-1] + if not self._is_string_secure(rep): + raise ValueError, "insecure string pickle" + rep = eval(rep, + {'__builtins__': {}}) # Let's be careful + self.append(unicode(rep, 'raw-unicode-escape')) dispatch[UNICODE] = load_unicode def load_binunicode(self): ------------------------------------------------------- Date: 2000-Nov-27 14:14 By: gvanrossum Comment: Jim, do you have time to look into this? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123634&group_id=5470 From noreply@sourceforge.net Tue Dec 19 02:24:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 18:24:06 -0800 Subject: [Python-bugs-list] [Bug #122162] split is broken for unicode strings Message-ID: Bug #122162, was updated on 2000-Nov-10 14:22 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: dwickberg Assigned to : gvanrossum Summary: split is broken for unicode strings Details: Calling the split method on a unicode string or with a unicode string is broken if the substring being split on is at the end of the source string. Example: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. IDLE 0.6 -- press F1 for help >>> a = 'border case test' >>> a.split('test') ['border case ', ''] >>> a.split(u'test') [u'border case test'] >>> u = u'border case test' >>> u.split('test') [u'border case test'] >>> u.split(u'test') [u'border case test'] Follow-Ups: Date: 2000-Dec-18 18:24 By: gvanrossum Comment: Good find! This was an off-by-one error in split_substring. Fixed in unicodeobject.c, rev. 2.69. ------------------------------------------------------- Date: 2000-Nov-10 15:03 By: gvanrossum Comment: Indeed. This only seems to be a problem if 1) the split arg is longer than 1 char 2) the split arg doesn't occur at all Probably a boundary case in the Unicode split. Assigned to Marc-Andre. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=122162&group_id=5470 From noreply@sourceforge.net Tue Dec 19 02:42:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 18:42:24 -0800 Subject: [Python-bugs-list] [Bug #124060] Python 2.0 -- Problems with Unicode Translate Message-ID: Bug #124060, was updated on 2000-Dec-01 09:03 Here is a current snapshot of the bug. 
Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 3 Submitted by: alburt Assigned to : gvanrossum Summary: Python 2.0 -- Problems with Unicode Translate Details: I don't know what this new-fangled Unicode stuff is all about. I do know that old code that has: string.translate(s, table) now bombs when "s" is Unicode. The definition of "string.translate" passes on the call with a "deletechars" argument that is not expected by the Unicode version. Using "str(s)" keeps Python 2.0 happy. -- Alastair P.S. Sorry if the bug is already reported but I do not know how to search past bug reports. Follow-Ups: Date: 2000-Dec-18 18:42 By: gvanrossum Comment: I've fixed this in string.py rev. 1.54: string.translate(s, table) now works for all combinations of 8-bit and Unicode strings. Note: string.translate(s, table, deletions) only works for 8-bit strings, because the Unicode object doesn't support the deletions argument in its translate() method. ------------------------------------------------------- Date: 2000-Dec-01 15:25 By: gvanrossum Comment: Marc already explained in imail the Unicode translate() method has a different signature. Maybe the string.translate function could special-case Unicode objects. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124060&group_id=5470 From noreply@sourceforge.net Tue Dec 19 03:16:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 19:16:05 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : nobody Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 03:19:31 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 19:19:31 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : gvanrossum Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. Follow-Ups: Date: 2000-Dec-18 19:19 By: akuchling Comment: Assigning to GvR, since I assume the LangRef is his responsibility. I can rewrite the text if the change's intent is approved. 
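As an illustration of the convention under discussion (a sketch, not part of the report): eval(repr(x)) does recreate the value for the basic built-in types, and a user-defined class can follow the same style by making its __repr__ read like a constructor call, whether or not anyone ever passes it to eval():

    class Point:
        # toy class, hypothetical and for illustration only
        def __init__(self, x, y):
            self.x, self.y = x, y
        def __repr__(self):
            # looks like an expression that would recreate the object
            return 'Point(%s, %s)' % (repr(self.x), repr(self.y))

    for obj in (42, 2.5, 'spam\n', [1, (2, 'x')], Point(1, 2)):
        print repr(obj)
    assert eval(repr([1, 'a', 2.5])) == [1, 'a', 2.5]  # holds for the built-ins
    assert eval(repr(Point(3, 4))).x == 3              # works only because Point is in scope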
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 03:29:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 19:29:17 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : gvanrossum Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. Follow-Ups: Date: 2000-Dec-18 19:29 By: tim_one Comment: Harrumph. For starters it's true for strings, ints, longs and (as of 1.6) floats, plus lists, tuples and dicts recursively composed of these. And it's a great idea. I believe Guido meant what he wrote here: "should" -- no bug. ------------------------------------------------------- Date: 2000-Dec-18 19:19 By: akuchling Comment: Assigning to GvR, since I assume the LangRef is his responsibility. I can rewrite the text if the change's intent is approved. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:08:25 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:08:25 -0800 Subject: [Python-bugs-list] [Bug #126034] xml.sax.handler.ErrorHandler not documented Message-ID: Bug #126034, was updated on 2000-Dec-16 21:05 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: Feature Request Priority: 6 Submitted by: fdrake Assigned to : fdrake Summary: xml.sax.handler.ErrorHandler not documented Details: The SAX2 ErrorHandler interface needs to be described in Doc/lib/xmlsaxhandler.tex. Follow-Ups: Date: 2000-Dec-18 20:08 By: fdrake Comment: Documented in Doc/lib/xmlsaxhandler.tex revision 1.4. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126034&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:08:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:08:45 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : gvanrossum Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. 
Follow-Ups: Date: 2000-Dec-18 20:08 By: gvanrossum Comment: I disagree with the "isn't a good idea" part. While it's indeed not a good idea to use eval(repr(x)), it *is* a good idea to make repr(x) look like a syntactically correct expression that would recreate an object with the same value as x, given the appropriate environment (e.g. imported the class or factory function). I hate non-standard object types whose repr() is indistinguishable from that of a similar standard object -- e.g. UserList makes this mistake, and xrange() used to pretend it was a tuple. Nevertheless I'll try to think of something to add to the docs. ------------------------------------------------------- Date: 2000-Dec-18 19:29 By: tim_one Comment: Harrumph. For starters it's true for strings, ints, longs and (as of 1.6) floats, plus lists, tuples and dicts recursively composed of these. And it's a great idea. I believe Guido meant what he wrote here: "should" -- no bug. ------------------------------------------------------- Date: 2000-Dec-18 19:19 By: akuchling Comment: Assigning to GvR, since I assume the LangRef is his responsibility. I can rewrite the text if the change's intent is approved. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:13:55 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:13:55 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-18 20:13 By: fdrake Comment: Andrew, please summarize what doc changes are needed, or make the changes (whichever is easier for you is fine). ------------------------------------------------------- Date: 2000-Dec-12 15:18 By: abo Comment: Further comments... 
After looking at the C code, a few things became clear; I need to read more about C/Python interfacing, and the "unused_data" attribute will only contain data if additional data is fed to a de-compressor at the end of a complete compressed stream. The purpose of the "unused_data" attribute is not clear in the documentation, so that should probably be clarified (mind you, I am looking at pre-2.0 docs so maybe it already has?). The failure to produce all data up to a sync-flush is something else... I'm still looking into it. I'm not sure if it is an inherent limitation of zlib, something that needs to be fixed in zlib, or something that needs to be fixed in the python interface. If it is an inherent limitation, I'd like to characterise it a bit better before documenting it. If it is something that needs to be fixed in either zlib or the python interface, I'd like to fix it. Unfortunately, this is a bit beyond me at the moment, mainly in time, but also a bit in skill (need to read the python/C interfacing documentation). Maybe over the christmas holidays I'll get a chance to fix it. ------------------------------------------------------- Date: 2000-Dec-12 13:32 By: gvanrossum Comment: OK, assigned to Fred. You may ask Andrew what to write. :-) ------------------------------------------------------- Date: 2000-Dec-08 14:50 By: abo Comment: I'm not that sure I'm happy with it just being marked closed. AFAIKT, the implementation definitely doesn't do what the documentation says, so to save people like me time when they hit it, I'prefer the bug at least be assigned to documentation so that the limitation is documented. >From my reading of the documentation as it stands, the fact that there is more pending data in the decompressor should be indicated by it's "unused_data" attribute. The tests seem to show that "decompress()" is only processing 16K of compressed data each call, which would suggest that "unused_data" should contain the rest. However, in all my tests that attribute has always been empty. Perhaps the bug is in there somewhere? Another slight strangeness, even if "unused_data" did contain something, the only way to get it out is by feeding in more compressed data, or issuing a flush(), thus ending the decompression... I guess that since I've been bitten by this, it's up to me to fix it. I've got the source to 2.0 and I'll have a look and see if I can submit a patch. and I was coding this app in python to avoid coding in C :-) ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. (The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". 
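For completeness, a compact version of the workaround described above, i.e. feeding the sync-flushed stream to the decompressor in small slices instead of one large call (a sketch only; the helper name and the 1 KB chunk size are arbitrary choices, and it assumes the Python 2.0 zlib API used in this thread):

    import zlib, random

    def decompress_chunked(d, data, chunk=1024):
        # feed the decompressor one slice at a time, per the workaround above
        out = []
        for i in range(0, len(data), chunk):
            out.append(d.decompress(data[i:i + chunk]))
        return ''.join(out)

    c = zlib.compressobj(9)
    d = zlib.decompressobj()
    # random input is incompressible, so the compressed stream itself exceeds 16K
    data = ''.join([chr(random.randint(0, 255)) for i in range(32 * 1024)])
    blob = c.compress(data) + c.flush(zlib.Z_SYNC_FLUSH)
    assert decompress_chunked(d, blob) == data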
------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:18:54 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:18:54 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : gvanrossum Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. Follow-Ups: Date: 2000-Dec-18 20:18 By: gvanrossum Comment: Checked something in as rev. 1.55. Let me know what you think. ------------------------------------------------------- Date: 2000-Dec-18 20:08 By: gvanrossum Comment: I disagree with the "isn't a good idea" part. While it's indeed not a good idea to use eval(repr(x)), it *is* a good idea to make repr(x) look like a syntactically correct expression that would recreate an object with the same value as x, given the appropriate environment (e.g. imported the class or factory function). I hate non-standard object types whose repr() is indistinguishable from that of a similar standard object -- e.g. UserList makes this mistake, and xrange() used to pretend it was a tuple. Nevertheless I'll try to think of something to add to the docs. ------------------------------------------------------- Date: 2000-Dec-18 19:29 By: tim_one Comment: Harrumph. For starters it's true for strings, ints, longs and (as of 1.6) floats, plus lists, tuples and dicts recursively composed of these. And it's a great idea. I believe Guido meant what he wrote here: "should" -- no bug. ------------------------------------------------------- Date: 2000-Dec-18 19:19 By: akuchling Comment: Assigning to GvR, since I assume the LangRef is his responsibility. I can rewrite the text if the change's intent is approved. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:21:12 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:21:12 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. 
Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. Follow-Ups: Date: 2000-Dec-18 20:21 By: gvanrossum Comment: This is not a bug. Saving the traceback as a local variable creates a circular reference that prevents garbage collection. If you don't understand this answer, please write help@python.org. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Tue Dec 19 04:52:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 20:52:28 -0800 Subject: [Python-bugs-list] [Bug #117158] String literal documentation is not up to date Message-ID: Bug #117158, was updated on 2000-Oct-18 03:41 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 7 Submitted by: edg Assigned to : fdrake Summary: String literal documentation is not up to date Details: Section 2.4.1 of the Reference Manual does not mention unicode strings and the unicode escape sequences \u and \U at all. Moreover, it still states that "\x" escapes consume an arbitrary number (>=2) of hex digits (while it is exactly 2 right now: PEP223). Follow-Ups: Date: 2000-Dec-18 20:52 By: fdrake Comment: Added additional comments on Unicode strings and the \u, \U, \N escape sequences to Doc/ref/ref2.tex revision 1.21. ------------------------------------------------------- Date: 2000-Dec-12 13:03 By: gvanrossum Comment: Can you fix this? Shouldn't be hard, right? 
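A few lines illustrating the 2.0 behaviour the manual needed to catch up with (illustrative only, not from the report): \x now consumes exactly two hex digits (PEP 223), and Unicode literals add the \u and \N escapes:

    assert '\x41' == 'A'              # \x takes exactly two hex digits (PEP 223)
    assert len('\x414') == 2          # the trailing '4' is an ordinary character
    assert u'\u0041' == u'A'          # four-digit escape, only in u'' literals
    assert u'\N{LATIN SMALL LETTER A}' == u'a'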
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=117158&group_id=5470 From noreply@sourceforge.net Tue Dec 19 06:37:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 22:37:45 -0800 Subject: [Python-bugs-list] [Bug #125933] warnings framework documentation Message-ID: Bug #125933, was updated on 2000-Dec-15 14:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: Feature Request Priority: 7 Submitted by: fdrake Assigned to : gvanrossum Summary: warnings framework documentation Details: The PyWarn_*() APIs need to be documented: Doc/api/api.tex. The command line parameters need to be documented: Misc/python.man. The Python module needs to be documented: Doc/lib/libwarnings.tex (new file to create). Follow-Ups: Date: 2000-Dec-18 22:37 By: gvanrossum Comment: All done. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125933&group_id=5470 From noreply@sourceforge.net Tue Dec 19 06:39:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Dec 2000 22:39:09 -0800 Subject: [Python-bugs-list] [Bug #121930] Parameter mismatch exception tracebacks could be more helpfu Message-ID: Bug #121930, was updated on 2000-Nov-07 18:14 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: prescod Assigned to : jhylton Summary: Parameter mismatch exception tracebacks could be more helpfu Details: Here's an example of the problem: def foo(a,b,c,d,e,f,g,h,i,j): pass a=foo #10,00 lines of code #10,00 lines of code ... j=a # 10,000 lines of code def bar(): j() bar() Traceback (most recent call last): File "", line 1, in ? File "", line 1, in bar TypeError: not enough arguments; expected 10, got 0 Notice that there is *no indication* of the real source-location of the thing that I attempted to call (foo). As soon as there is a layer or two of indirection between function pointers and the code that calls them, it can get really confusing to try and figure out what code is being called. When the callee is Python it would be nice if there were some indication in the error message or traceback of the thing's real name and real source location. Guido says: > This could be fixed with special purpose code for this exception > (probably by setting up a dummy frame and using that). Follow-Ups: Date: 2000-Dec-18 22:39 By: gvanrossum Comment: Jeremy, I was wondering if you could have a look at this -- you may be able to make some changes in the function calling code since you are working on that anyway... ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121930&group_id=5470 From noreply@sourceforge.net Tue Dec 19 14:11:35 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 06:11:35 -0800 Subject: [Python-bugs-list] [Bug #126264] ref/ref3.tex: Remove claim about eval(repr(obj)) Message-ID: Bug #126264, was updated on 2000-Dec-18 19:16 Here is a current snapshot of the bug. 
Project: Python Category: Documentation Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : gvanrossum Summary: ref/ref3.tex: Remove claim about eval(repr(obj)) Details: The description of __repr__ in section 3 of the Language Ref says " This should normally look like a valid Python expression that can be used to recreate an object with the same value." This isn't true, isn't a good idea, and often isn't possible anyway. Rewrite this to emphasize that repr() is usually for debugging. Follow-Ups: Date: 2000-Dec-19 06:11 By: akuchling Comment: That's a bit better, so I won't re-open the bug on you. (Noticed a small typo and fixed it.) ------------------------------------------------------- Date: 2000-Dec-18 20:18 By: gvanrossum Comment: Checked something in as rev. 1.55. Let me know what you think. ------------------------------------------------------- Date: 2000-Dec-18 20:08 By: gvanrossum Comment: I disagree with the "isn't a good idea" part. While it's indeed not a good idea to use eval(repr(x)), it *is* a good idea to make repr(x) look like a syntactically correct expression that would recreate an object with the same value as x, given the appropriate environment (e.g. imported the class or factory function). I hate non-standard object types whose repr() is indistinguishable from that of a similar standard object -- e.g. UserList makes this mistake, and xrange() used to pretend it was a tuple. Nevertheless I'll try to think of something to add to the docs. ------------------------------------------------------- Date: 2000-Dec-18 19:29 By: tim_one Comment: Harrumph. For starters it's true for strings, ints, longs and (as of 1.6) floats, plus lists, tuples and dicts recursively composed of these. And it's a great idea. I believe Guido meant what he wrote here: "should" -- no bug. ------------------------------------------------------- Date: 2000-Dec-18 19:19 By: akuchling Comment: Assigning to GvR, since I assume the LangRef is his responsibility. I can rewrite the text if the change's intent is approved. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126264&group_id=5470 From noreply@sourceforge.net Tue Dec 19 15:28:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 07:28:29 -0800 Subject: [Python-bugs-list] [Bug #126345] Modules are not garbage collected Message-ID: Bug #126345, was updated on 2000-Dec-19 07:28 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: loewis Assigned to : nobody Summary: Modules are not garbage collected Details: Module objects currently don't participate in garbage collection. That is a problem for applications using the new or imp modules to create modules on-the-fly, such as the rexec module. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126345&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:30:30 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:30:30 -0800 Subject: [Python-bugs-list] [Bug #110832] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bug #110832, was updated on 2000-Aug-01 14:13 Here is a current snapshot of the bug. 
Project: Python Category: None Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 6 Submitted by: nobody Assigned to : fdrake Summary: urljoin() bug with odd no of '..' (PR#194) Details: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' + del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-19 08:30 By: doerwalter Comment: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ------------------------------------------------------- Date: 2000-Aug-23 21:22 By: fdrake Comment: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ------------------------------------------------------- Date: 2000-Aug-13 01:36 By: moshez Comment: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: Patch being considered. ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' (PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. 
Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110832&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:36:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:36:04 -0800 Subject: [Python-bugs-list] [Bug #126351] urlparse.scheme_chars and string.letters Message-ID: Bug #126351, was updated on 2000-Dec-19 08:36 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: doerwalter Assigned to : nobody Summary: urlparse.scheme_chars and string.letters Details: urlparse.scheme_chars has the same bug as urllib.quote (see bug 111961), because it used string.letters which includes more than the upper and lowercase letter, which results in scheme_chars being 'abcdefghijklmnopqrstuvwxyz\337\340\341\342\343\344\345\346\347\350\351\352\353\354\355\356\357\360\361\362\363\364\365\366\370\371\372\373\374\375\376\377ABCDEFGHIJKLMNOPQRSTUVWXYZ\300\301\302\303\304\305\306\307\310\311\312\313\314\315\316\317\320\321\322\323\324\325\326\330\331\332\333\334\335\3360123456789+-.'. RFC 1738 Section 2.1 states the following: Scheme names consist of a sequence of characters. The lower case letters "a"--"z", digits, and the characters plus ("+"), period ("."), and hyphen ("-") are allowed. For resiliency, programs interpreting URLs should treat upper case letters as equivalent to lower case in scheme names (e.g., allow "HTTP" as well as "http"). For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126351&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:38:23 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:38:23 -0800 Subject: [Python-bugs-list] [Bug #110832] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bug #110832, was updated on 2000-Aug-01 14:13 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 1 Submitted by: nobody Assigned to : gvanrossum Summary: urljoin() bug with odd no of '..' (PR#194) Details: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. 
Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' + del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-19 08:38 By: gvanrossum Comment: OK, reopened. ------------------------------------------------------- Date: 2000-Dec-19 08:30 By: doerwalter Comment: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ------------------------------------------------------- Date: 2000-Aug-23 21:22 By: fdrake Comment: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ------------------------------------------------------- Date: 2000-Aug-13 01:36 By: moshez Comment: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: Patch being considered. ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' (PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110832&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:41:35 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:41:35 -0800 Subject: [Python-bugs-list] [Bug #110832] urljoin() bug with odd no of '..' (PR#194) Message-ID: Bug #110832, was updated on 2000-Aug-01 14:13 Here is a current snapshot of the bug. 
Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 6 Submitted by: nobody Assigned to : fdrake Summary: urljoin() bug with odd no of '..' (PR#194) Details: Jitterbug-Id: 194 Submitted-By: DrMalte@ddd.de Date: Sun, 30 Jan 2000 19:40:45 -0500 (EST) Version: 1.5.2 and 1.4 OS: Linux While playing with linbot I noticed some failed requests to 'http://xxx.xxx.xx/../img/xxx.gif' for a document in the root directory containing . The Reason is in urlparse.urljoin() urljoin() fails to remove an odd number of '../' from the path. Demonstration: from urlparse import urljoin print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) # gives 'http://127.0.0.1/../imgs/logo.gif' # should give 'http://127.0.0.1/imgs/logo.gif' print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) # gives 'http://127.0.0.1/imgs/logo.gif' # works # '../../imgs/logo.gif' gives 'http://127.0.0.1/../imgs/logo.gif' and so on The patch for 1.5.2 ( I'm not sure if it works generally, but tests with linbot looked good) *** /usr/local/lib/python1.5/urlparse.py Sat Jun 26 19:11:59 1999 --- urlparse.py Mon Jan 31 01:31:45 2000 *************** *** 170,175 **** --- 170,180 ---- segments[-1] = '' elif len(segments) >= 2 and segments[-1] == '..': segments[-2:] = [''] + + if segments[0] == '': + while segments[1] == '..': # remove all leading '..' + del segments[1] + return urlunparse((scheme, netloc, joinfields(segments, '/'), params, query, fragment)) ==================================================================== Audit trail: Mon Feb 07 12:35:35 2000 guido changed notes Mon Feb 07 12:35:35 2000 guido moved from incoming to request Follow-Ups: Date: 2000-Dec-19 08:41 By: fdrake Comment: Ok, confirmed. Reopening the bug until I get a chance to look at the proposed patch and can update the test suite. ------------------------------------------------------- Date: 2000-Dec-19 08:38 By: gvanrossum Comment: OK, reopened. ------------------------------------------------------- Date: 2000-Dec-19 08:30 By: doerwalter Comment: Section 5.2 of RFC 1808 states that in the context of the base URL <> = URLs that have more .. than the base has directory names, should be resolved in the following way: ../../../g = ../../../../g = i.e. they should be preserved, which urljoin does in the first example gives in the bug report: print urljoin( 'http://127.0.0.1/', '../imgs/logo.gif' ) http://127.0.0.1/../imgs/logo.gif but not in the second example: print urljoin( 'http://127.0.0.1/', '../../imgs/logo.gif' ) http://127.0.0.1/imgs/logo.gif where the result should have been http://127.0.0.1/../../imgs/logo.gif ------------------------------------------------------- Date: 2000-Aug-23 21:22 By: fdrake Comment: RFC 1808 gives examples of this form in section 5.2, "Abnormal Examples," and gives the current behavior as the desired treatment, stating that all parsers (urljoin() counts given the RFC's terminology) should treat the abnormal examples consistently. ------------------------------------------------------- Date: 2000-Aug-13 01:36 By: moshez Comment: OK, Jeremy -- this one is yours. Either notabug it, or check in the relevant patch (101064 -- assigned to you) ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: Patch being considered. ------------------------------------------------------- Date: 2000-Aug-01 14:13 By: nobody Comment: From: Guido van Rossum Subject: Re: [Python-bugs-list] urljoin() bug with odd no of '..' 
(PR#194) Date: Mon, 31 Jan 2000 12:28:55 -0500 Thanks for your bug report and fix. I agree with your diagnosis. Would you please be so kind as to resend your patch with the legal disclaimer from http://www.python.org/1.5/bugrelease.html --Guido van Rossum (home page: http://www.python.org/~guido/) ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=110832&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:49:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:49:44 -0800 Subject: [Python-bugs-list] [Bug #126351] urlparse.scheme_chars and string.letters Message-ID: Bug #126351, was updated on 2000-Dec-19 08:36 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: doerwalter Assigned to : gvanrossum Summary: urlparse.scheme_chars and string.letters Details: urlparse.scheme_chars has the same bug as urllib.quote (see bug 111961), because it used string.letters which includes more than the upper and lowercase letter, which results in scheme_chars being 'abcdefghijklmnopqrstuvwxyz\337\340\341\342\343\344\345\346\347\350\351\352\353\354\355\356\357\360\361\362\363\364\365\366\370\371\372\373\374\375\376\377ABCDEFGHIJKLMNOPQRSTUVWXYZ\300\301\302\303\304\305\306\307\310\311\312\313\314\315\316\317\320\321\322\323\324\325\326\330\331\332\333\334\335\3360123456789+-.'. RFC 1738 Section 2.1 states the following: Scheme names consist of a sequence of characters. The lower case letters "a"--"z", digits, and the characters plus ("+"), period ("."), and hyphen ("-") are allowed. For resiliency, programs interpreting URLs should treat upper case letters as equivalent to lower case in scheme names (e.g., allow "HTTP" as well as "http"). Follow-Ups: Date: 2000-Dec-19 08:49 By: gvanrossum Comment: Fixed with brute force. urlparse.py rev. 1.26. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126351&group_id=5470 From noreply@sourceforge.net Tue Dec 19 16:50:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 08:50:21 -0800 Subject: [Python-bugs-list] [Bug #126345] Modules are not garbage collected Message-ID: Bug #126345, was updated on 2000-Dec-19 07:28 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: loewis Assigned to : nascheme Summary: Modules are not garbage collected Details: Module objects currently don't participate in garbage collection. That is a problem for applications using the new or imp modules to create modules on-the-fly, such as the rexec module. Follow-Ups: Date: 2000-Dec-19 08:50 By: gvanrossum Comment: Neil promised to do this over the holidays. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126345&group_id=5470 From noreply@sourceforge.net Tue Dec 19 19:48:33 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 11:48:33 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. 
Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-19 11:48 By: akuchling Comment: .unused_data is really a red herring; the PyZlib_objdecompress() loops until zst->avail_in is zero, so .unused_data must always be zero by definition. (The attribute is there to support gzip-format files that may contain multiple compressed streams concatenated together.) I still have no idea what the documentation should say; "don't pass more than 16K of compressed data when you're expecting a sync-flush." I can't see a way to explain this coherently without a big long explanation that will confuse people who don't care about this problem. (Add a special note, or known bugs subsection, maybe?) A simple C test program should be written, in order to check if it's the zlib library itself that's doing this. ------------------------------------------------------- Date: 2000-Dec-18 20:13 By: fdrake Comment: Andrew, please summarize what doc changes are needed, or make the changes (whichever is easier for you is fine). ------------------------------------------------------- Date: 2000-Dec-12 15:18 By: abo Comment: Further comments... After looking at the C code, a few things became clear; I need to read more about C/Python interfacing, and the "unused_data" attribute will only contain data if additional data is fed to a de-compressor at the end of a complete compressed stream. The purpose of the "unused_data" attribute is not clear in the documentation, so that should probably be clarified (mind you, I am looking at pre-2.0 docs so maybe it already has?). The failure to produce all data up to a sync-flush is something else... I'm still looking into it. I'm not sure if it is an inherent limitation of zlib, something that needs to be fixed in zlib, or something that needs to be fixed in the python interface. If it is an inherent limitation, I'd like to characterise it a bit better before documenting it. If it is something that needs to be fixed in either zlib or the python interface, I'd like to fix it. 
Unfortunately, this is a bit beyond me at the moment, mainly in time, but also a bit in skill (need to read the python/C interfacing documentation). Maybe over the christmas holidays I'll get a chance to fix it. ------------------------------------------------------- Date: 2000-Dec-12 13:32 By: gvanrossum Comment: OK, assigned to Fred. You may ask Andrew what to write. :-) ------------------------------------------------------- Date: 2000-Dec-08 14:50 By: abo Comment: I'm not that sure I'm happy with it just being marked closed. AFAIKT, the implementation definitely doesn't do what the documentation says, so to save people like me time when they hit it, I'prefer the bug at least be assigned to documentation so that the limitation is documented. >From my reading of the documentation as it stands, the fact that there is more pending data in the decompressor should be indicated by it's "unused_data" attribute. The tests seem to show that "decompress()" is only processing 16K of compressed data each call, which would suggest that "unused_data" should contain the rest. However, in all my tests that attribute has always been empty. Perhaps the bug is in there somewhere? Another slight strangeness, even if "unused_data" did contain something, the only way to get it out is by feeding in more compressed data, or issuing a flush(), thus ending the decompression... I guess that since I've been bitten by this, it's up to me to fix it. I've got the source to 2.0 and I'll have a look and see if I can submit a patch. and I was coding this app in python to avoid coding in C :-) ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. (The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. 
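A minimal sketch of the workaround the report mentions (decompressing in smaller slices so no single decompress() call receives one large sync-flushed buffer); the helper name and the 1 KB chunk size are arbitrary choices, not part of the original report:

    import string
    from zlib import compressobj, decompressobj, Z_SYNC_FLUSH

    def decompress_chunked(d, data, chunksize=1024):
        # feed the decompressor small slices instead of one large buffer,
        # per the workaround described in the report
        out = []
        for i in range(0, len(data), chunksize):
            out.append(d.decompress(data[i:i + chunksize]))
        return string.join(out, '')

    # usage sketch:
    c = compressobj(9)
    compressed = c.compress('x' * 32768) + c.flush(Z_SYNC_FLUSH)
    print len(decompress_chunked(decompressobj(), compressed))   # 32768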
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Wed Dec 20 00:46:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 16:46:05 -0800 Subject: [Python-bugs-list] [Bug #126400] test_format broken Message-ID: Bug #126400, was updated on 2000-Dec-19 16:46 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: None Priority: 7 Submitted by: fdrake Assigned to : akuchling Summary: test_format broken Details: Objects/unicodeobject.c revision 2.70 changed the output of Lib/test/test_format.py; the corresponding output file needs to be regenerated and checked in. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126400&group_id=5470 From noreply@sourceforge.net Wed Dec 20 00:48:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 16:48:40 -0800 Subject: [Python-bugs-list] [Bug #120994] Traceback with DISTUTILS_DEBUG set Message-ID: Bug #120994, was updated on 2000-Nov-01 05:43 Here is a current snapshot of the bug. Project: Python Category: Distutils Status: Closed Resolution: None Bug Group: None Priority: 5 Submitted by: gward Assigned to : gward Summary: Traceback with DISTUTILS_DEBUG set Details: Something is wrong in the 'dump_dirs()' method of the "install" command: it bombs with an AttributeError: $ DISTUTILS_DEBUG=1 python setup.py install [...] running install Distribution.get_command_obj(): creating 'install' command object pre-finalize_{unix,other}: prefix: None exec_prefix: None home: None install_base: None install_platbase: None root: None install_purelib: None install_platlib: None install_lib: None install_headers: None install_scripts: None install_data: None compile: None Traceback (most recent call last): File "setup.py", line 28, in ? packages = ['distutils', 'distutils.command'], File "distutils/core.py", line 138, in setup dist.run_commands() File "distutils/dist.py", line 829, in run_commands self.run_command(cmd) File "distutils/dist.py", line 848, in run_command cmd_obj.ensure_finalized() File "distutils/cmd.py", line 112, in ensure_finalized self.finalize_options() File "distutils/command/install.py", line 240, in finalize_options self.dump_dirs("pre-finalize_{unix,other}") File "distutils/command/install.py", line 338, in dump_dirs val = getattr(self, opt_name) File "distutils/cmd.py", line 107, in __getattr__ raise AttributeError, attr AttributeError: no_compile Not sure what's going on here... Follow-Ups: Date: 2000-Dec-19 16:48 By: akuchling Comment: Fixed by a one-line patch (forgotten variable initialization) ------------------------------------------------------- Date: 2000-Nov-07 04:46 By: nobody Comment: Please put the patch in the Patch Manager instead! --Guido ------------------------------------------------------- Date: 2000-Nov-07 02:05 By: calvin Comment: I submitted a patch for this on the mailling list. 
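A hypothetical minimal reproduction of the failure pattern behind the traceback (class and option names invented for illustration; the real code lives in distutils/cmd.py and distutils/command/install.py): dump_dirs() reads every option with getattr(), so any option that initialize_options() forgot to set falls through to __getattr__ and raises AttributeError, and the fix is the missing one-line initialization.

    class FakeInstallCommand:
        # hypothetical stand-in for the install command; option names invented
        option_names = ['prefix', 'no_compile']

        def initialize_options(self):
            self.prefix = None
            # self.no_compile = None    # <-- the forgotten one-line initialization

        def __getattr__(self, attr):
            # mirrors distutils/cmd.py: unset options raise AttributeError
            raise AttributeError, attr

        def dump_dirs(self):
            for name in self.option_names:
                print '%s: %s' % (name, getattr(self, name))

    cmd = FakeInstallCommand()
    cmd.initialize_options()
    cmd.dump_dirs()          # raises AttributeError: no_compile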
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=120994&group_id=5470 From noreply@sourceforge.net Wed Dec 20 00:56:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 16:56:18 -0800 Subject: [Python-bugs-list] [Bug #126400] test_format broken Message-ID: Bug #126400, was updated on 2000-Dec-19 16:46 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: None Bug Group: None Priority: 7 Submitted by: fdrake Assigned to : akuchling Summary: test_format broken Details: Objects/unicodeobject.c revision 2.70 changed the output of Lib/test/test_format.py; the corresponding output file needs to be regenerated and checked in. Follow-Ups: Date: 2000-Dec-19 16:56 By: akuchling Comment: Doh! Fixed. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126400&group_id=5470 From noreply@sourceforge.net Wed Dec 20 01:00:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 17:00:04 -0800 Subject: [Python-bugs-list] [Bug #125452] shlex.shlex hangs when parsing an unclosed quoted string Message-ID: Bug #125452, was updated on 2000-Dec-11 23:12 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : esr Summary: shlex.shlex hangs when parsing an unclosed quoted string Details: import StringIO import shlex s = shlex.shlex(StringIO.StringIO("hello 'world")) you'll see that get_token doesn't test for EOF when it's in the ' state. Just adding that test should fix the problem. Follow-Ups: Date: 2000-Dec-19 17:00 By: akuchling Comment: Patch #102953 has been submitted to fix this. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125452&group_id=5470 From noreply@sourceforge.net Wed Dec 20 01:02:38 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 17:02:38 -0800 Subject: [Python-bugs-list] [Bug #121121] Dynamic loading on Solaris does not work Message-ID: Bug #121121, was updated on 2000-Nov-02 08:33 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: None Bug Group: Platform-specific Priority: 4 Submitted by: tww-account Assigned to : gvanrossum Summary: Dynamic loading on Solaris does not work Details: Dynamic loading of shared libraries (Python/dynload_shlib) does not work under Solaris. This is due to a bug in the autoconf script. The patch at ftp://ftp.thewrittenword.com/outgoing/pub/python-2.0-solaris-dynload.patch fixes it. The problem is that AC_CHECK_LIB(dl, dlopen) will never define HAVE_DLOPEN (AC_CHECK_FUNCS(dlopen) does that) which in turn will never define $ac_cv_func_dlopen. Anyway, using internal autoconf macros is icky. Redo the autoconf test because it will cache the results. -- albert chin (china@thewrittenword.com) Follow-Ups: Date: 2000-Dec-19 17:02 By: akuchling Comment: Closing this bug, as according to the previous comment it's now fixed. ------------------------------------------------------- Date: 2000-Dec-09 05:11 By: tww-account Comment: Tried 2.0.1 from CVS. Everything works now. You can close this bug. Thanks! 
------------------------------------------------------- Date: 2000-Nov-13 12:54 By: gvanrossum Comment: Albert, would you be so kind to try again with the CVS version? We didn't follow your suggestions (I can't find your patch on SF -- what's the patch id?) but we did change a few things. According to Greg Ward it now should work on Solaris. I can't test that beucase I have no acccess to a Solaris machine. ------------------------------------------------------- Date: 2000-Nov-02 10:14 By: tww-account Comment: Ok, patch uploaded to the SourceForge patch manager. ------------------------------------------------------- Date: 2000-Nov-02 10:08 By: gvanrossum Comment: Thanks for the patch; but would you be so kind to submit the patch to the SourceForge patch manager? See http://sourceforge.net/patch/?group_id=5470 ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121121&group_id=5470 From noreply@sourceforge.net Wed Dec 20 01:19:36 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 17:19:36 -0800 Subject: [Python-bugs-list] [Bug #119486] fcntl.lockf() is broken Message-ID: Bug #119486, was updated on 2000-Oct-26 14:18 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : gvanrossum Summary: fcntl.lockf() is broken Details: Another observation by James Troup : fcntl.lockf() seems to be severly `broken': it's acting like flock, not like lockf, and the code seems to be a copy/paste of flock. (registered in the Debian Bug Tracking System as bug #74777, http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=74777). James includes a first start at filling in a correct lockf function. Looks like it needs some more work, therefore I don't submit it as patch. The patch is against 1.5.2, though there seem to be no changes in 2.0. These are James' words: fcntl.lockf() doesn't work as expected. fcntl.lockf(fd, FCNTL.F_TLOCK); will block.. looking at the source is exteremly confusing. fcntl.lockf() appears to want flock() style arguments?! It almost looks like someone cut'n'wasted from the fcntl_flock() function just above... Anyway, here is a patch which is IMO the Right Thing, i.e. fcnt.lockf() acting like libc lockf() and like it's documented to do... --- python-1.5.2/Modules/fcntlmodule.c.orig Sat Oct 14 15:46:40 2000 +++ python-1.5.2/Modules/fcntlmodule.c Sat Oct 14 18:31:44 2000 @@ -233,30 +233,12 @@ { int fd, code, ret, whence = 0; PyObject *lenobj = NULL, *startobj = NULL; + struct flock l; if (!PyArg_ParseTuple(args, "ii|OOi", &fd, &code, &lenobj, &startobj, &whence)) return NULL; -#ifndef LOCK_SH -#define LOCK_SH 1 /* shared lock */ -#define LOCK_EX 2 /* exclusive lock */ -#define LOCK_NB 4 /* don't block when locking */ -#define LOCK_UN 8 /* unlock */ -#endif - { - struct flock l; - if (code == LOCK_UN) - l.l_type = F_UNLCK; - else if (code & LOCK_SH) - l.l_type = F_RDLCK; - else if (code & LOCK_EX) - l.l_type = F_WRLCK; - else { - PyErr_SetString(PyExc_ValueError, - "unrecognized flock argument"); - return NULL; - } l.l_start = l.l_len = 0; if (startobj != NULL) { #if !defined(HAVE_LARGEFILE_SUPPORT) @@ -281,10 +263,48 @@ return NULL; } l.l_whence = whence; + switch (code) + { + case F_TEST: + /* Test the lock: return 0 if FD is unlocked or locked by this process; + return -1, set errno to EACCES, if another process holds the lock. 
*/ + if (fcntl (fd, F_GETLK, &l) < 0) { + fprintf(stderr, "andrea: 1"); + PyErr_SetFromErrno(PyExc_IOError); + return NULL; + } + if (l.l_type == F_UNLCK || l.l_pid == getpid ()) { + fprintf(stderr, "andrea: 2"); + Py_INCREF(Py_None); + return Py_None; + } + fprintf(stderr, "andrea: 3"); + errno = EACCES; + PyErr_SetFromErrno(PyExc_IOError); + return NULL; + + case F_ULOCK: + l.l_type = F_UNLCK; + code = F_SETLK; + break; + case F_LOCK: + l.l_type = F_WRLCK; + code = F_SETLKW; + break; + case F_TLOCK: + l.l_type = F_WRLCK; + code = F_SETLK; + break; + + default: + PyErr_SetString(PyExc_ValueError, + "unrecognized flock argument"); + return NULL; + } Py_BEGIN_ALLOW_THREADS - ret = fcntl(fd, (code & LOCK_NB) ? F_SETLK : F_SETLKW, &l); + ret = fcntl(fd, code, &l); Py_END_ALLOW_THREADS - } + if (ret < 0) { PyErr_SetFromErrno(PyExc_IOError); return NULL; Follow-Ups: Date: 2000-Dec-19 17:19 By: akuchling Comment: Note that the docs say that lockf "is a wrapper around the \constant{FCNTL.F_SETLK} and \constant{FCNTL.F_SETLKW} \function{fcntl()} calls." Stevens's "Advanced Programming in the Unix Env." concurs, on page 367: "The System V lockf() is just an interface to fcntl()." However, none of us are serious Unix weenies. Unfortunately, the documented Python lockf() provides features above the libc lockf(), so using the system lockf() seems impossible unless we break backwards compatibility. Unfortunately none of us are serious Unix weenies, so writing a lockf() emulation that's completely correct everywhere might be very difficult. Troup's patch attempts to fix the emulation; I haven't looked at it closely. I'd suggest breaking backwards compatibility; if that's out, we should take a close look at the patch. ------------------------------------------------------- Date: 2000-Nov-24 01:32 By: sjoerd Comment: Yeah, I looked at it. In 1996. I don't think I looked at flock since then. Anyway, it compiles on my system (IRIX 6.5.2), and I don't think I will have time in the near future to look any further into this. ------------------------------------------------------- Date: 2000-Oct-26 14:29 By: gvanrossum Comment: Sjoerd, you once looked at this code. Can you comment on this? If you don't have time, please assign back to me. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=119486&group_id=5470 From noreply@sourceforge.net Wed Dec 20 02:48:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Dec 2000 18:48:26 -0800 Subject: [Python-bugs-list] [Bug #121479] Compiler warnings on Solaris Message-ID: Bug #121479, was updated on 2000-Nov-03 14:53 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gward Assigned to : gward Summary: Compiler warnings on Solaris Details: GCC 2.95.2 on Solaris 2.6 reports a bunch of warnings building the latest CVS source. 
Here's the complete list: intrcheck.c:151: warning: function declaration isn't a prototype intrcheck.c: In function `PyOS_InitInterrupts': intrcheck.c:156: warning: function declaration isn't a prototype intrcheck.c:156: warning: function declaration isn't a prototype floatobject.c:35: warning: function declaration isn't a prototype intobject.c: In function `PyInt_FromString': intobject.c:185: warning: subscript has type `char' bltinmodule.c: In function `builtin_ord': bltinmodule.c:1507: warning: `ord' might be used uninitialized in this function errors.c: In function `PyErr_Format': errors.c:405: warning: subscript has type `char' errors.c:460: warning: subscript has type `char' errors.c:465: warning: subscript has type `char' errors.c:468: warning: subscript has type `char' pythonrun.c: In function `initsigs': pythonrun.c:1134: warning: function declaration isn't a prototype ./posixmodule.c: In function `posix_confstr': ./posixmodule.c:4471: warning: implicit declaration of function `confstr' ./signalmodule.c:88: warning: function declaration isn't a prototype ./signalmodule.c: In function `signal_signal': ./signalmodule.c:212: warning: function declaration isn't a prototype ./signalmodule.c:214: warning: function declaration isn't a prototype ./signalmodule.c:225: warning: function declaration isn't a prototype ./signalmodule.c: In function `initsignal': ./signalmodule.c:332: warning: function declaration isn't a prototype ./signalmodule.c:336: warning: function declaration isn't a prototype ./signalmodule.c:355: warning: function declaration isn't a prototype ./signalmodule.c:357: warning: function declaration isn't a prototype ./signalmodule.c: In function `finisignal': ./signalmodule.c:556: warning: function declaration isn't a prototype ./signalmodule.c:564: warning: function declaration isn't a prototype make[1]: [add2lib] Error 2 (ignored) ./stropmodule.c: In function `strop_atoi': ./stropmodule.c:752: warning: subscript has type `char' ./timemodule.c: In function `time_strptime': ./timemodule.c:385: warning: subscript has type `char' ./socketmodule.c: In function `PySocket_socket': ./socketmodule.c:1768: warning: function declaration isn't a prototype ./socketmodule.c: In function `PySocket_fromfd': ./socketmodule.c:1806: warning: function declaration isn't a prototype I'll look into these one at a time and see how many I can fix. Follow-Ups: Date: 2000-Dec-19 18:48 By: nobody Comment: Patch submitted for the bltinmodule.c warning. The errors.c warnings are because isdigit() & friends expect an int, and the code is using *f, which is a char. isdigit() is a macro on Solaris. Presumably the fix is to use (int)*f on those lines. Same cause for the ones in stropmodule.c and intobject.c, I think. The warnings in socketmodule.c, and presumably the ones in signalmodule.c, intrcheck.c, and pythonrun.c too, seem to be because of Solaris's SIG_IGN. I suspect GCC is getting confused by it. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121479&group_id=5470 From noreply@sourceforge.net Wed Dec 20 14:48:03 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 06:48:03 -0800 Subject: [Python-bugs-list] [Bug #116677] minidom:Node.appendChild() has wrong semantics Message-ID: Bug #116677, was updated on 2000-Oct-11 19:24 Here is a current snapshot of the bug. 
Project: Python Category: XML Status: Closed Resolution: None Bug Group: None Priority: 7 Submitted by: akuchling Assigned to : fdrake Summary: minidom:Node.appendChild() has wrong semantics Details: Consider this test program: from xml.dom import minidom doc = minidom.Document() root = doc.createElement('root') ; doc.appendChild( root ) elem = doc.createElement('leaf') root.appendChild( elem ) root.appendChild( elem ) print doc.toxml() print root.childNodes It prints: [, ] 'elem' is now linked into the DOM tree in two places, which is wrong; according to the DOM Level 1 spec, "If the newChild is already in the tree, it is first removed." Follow-Ups: Date: 2000-Dec-20 06:48 By: akuchling Comment: Fixed by the checkin of patch #102492. ------------------------------------------------------- Date: 2000-Dec-12 13:04 By: gvanrossum Comment: Fred, can you check status on this? Possibly it's alrady been fixed. ------------------------------------------------------- Date: 2000-Nov-23 18:29 By: akuchling Comment: Patch #102492 has been submitted to fix this. ------------------------------------------------------- Date: 2000-Nov-21 14:23 By: fdrake Comment: Re-categorized this bug to "XML". This is *not* fixed by Lib/xml/dom/minidom.py revision 1.14. Unfortunately, this bug will be a little harder to fix. I looked to see if I could determine presence in the tree by checking for parentNode != None, but that isn't sufficient. xml.dom.pulldom maintains state by filling in the parentNode attribute, so it has a chain of ancestors; it needs this to find the node to add children to in DOMEventStream.expandNode(). Testing that a node is already in the tree is harder, but not much harder. A reasonable fix for this bug should not be difficult. ------------------------------------------------------- Date: 2000-Oct-16 06:47 By: akuchling Comment: I don't see why this particular deviation is a border case. All the methods for modifying a DOM tree -- appendChild(), insertBefore(), replaceChild() -- all behave the same way, first removing the added node if it's already in the tree somewhere. This will make it more difficult to translate DOM-using code from, say, Java, to Python + minidom, since you'll have to remember to add extra .removeChild() calls. Worse still, the problems caused by this will be hard to track down; portions of your DOM tree are aliased, but .toxml() won't make this clear. ------------------------------------------------------- Date: 2000-Oct-16 00:43 By: loewis Comment: This is indeed a bug in minidom, but I don't think it should be corrected for 2.0; I suggest to reduce the priority of it, or close it as "later". While this is a deviation from the DOM spec, it seems as a border case. As such, it should be documented; users can always explicitly remove the node before appending it elsewhere. ------------------------------------------------------- Date: 2000-Oct-12 07:37 By: nobody Comment: The test_minidom failure turned out to be caused by something else. However, I rechecked my test case and it's still broken with tonight's CVS. ------------------------------------------------------- Date: 2000-Oct-11 20:11 By: akuchling Comment: CVS as of this evening. Did it work before? (Hmm... tonight test_minidom is failing for me for some reason. Wonder if it's related?) ------------------------------------------------------- Date: 2000-Oct-11 19:55 By: fdrake Comment: Andrew: Are you using 2.0c1 or CVS? 
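A short sketch of the behaviour the DOM Level 1 wording quoted above calls for, i.e. what the fix is expected to produce (the element names are arbitrary): re-appending a node that is already in the tree moves it rather than duplicating it.

    from xml.dom import minidom

    doc = minidom.Document()
    root = doc.createElement('root')
    doc.appendChild(root)
    a = doc.createElement('a')
    b = doc.createElement('b')
    root.appendChild(a)
    root.appendChild(b)
    root.appendChild(a)    # 'a' is first removed, then re-appended after 'b'
    print doc.toxml()      # expected to show <root><b/><a/></root>, 'a' only once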
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116677&group_id=5470 From noreply@sourceforge.net Wed Dec 20 19:18:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 11:18:06 -0800 Subject: [Python-bugs-list] [Bug #123634] Pickle broken on Unicode strings Message-ID: Bug #123634, was updated on 2000-Nov-27 14:03 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: tlau Assigned to : gvanrossum Summary: Pickle broken on Unicode strings Details: Two one-liners that produce incorrect output: >>> cPickle.loads(cPickle.dumps(u'')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: pickle data was truncated >>> cPickle.loads(cPickle.dumps(u'\u03b1 alpha\n\u03b2 beta')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, '\'. The format of the Unicode string in the pickled representation is not escaped, as it is with regular strings. It should be. The latter bug occurs in both pickle and cPickle; the former is only a problem with cPickle. Follow-Ups: Date: 2000-Dec-20 11:18 By: nobody Comment: About your fix: this is not the solution I had in mind. I wanted to avoid the problems and performance hit by not using an encoding which requires eval() to build the Unicode object. Wouldn't the solution I proposed be both easier to implement and safe us from adding eval() to pickle et al. ?! -- Marc-Andre ------------------------------------------------------- Date: 2000-Dec-18 18:10 By: gvanrossum Comment: Fixed in both pickle.py (rev. 1.41) and cPickle.py (rev. 2.54). I've also checked in tests for these and similar endcases. ------------------------------------------------------- Date: 2000-Nov-27 14:36 By: tlau Comment: One more comment: binary-format pickles are not affected, only text-format pickles. Thus the part of my patch that applies to the binary section of the save_unicode function should not be applied. ------------------------------------------------------- Date: 2000-Nov-27 14:35 By: lemburg Comment: Some background (no time to fix this myself): When I added the Unicode handlers, I wanted to avoid the problems that the string dump mechanism has with quoted strings. The encodings used either carry length information (in binary mode: UTF-8) or do not include the \n character (in ascii mode: raw-unicode-escape encoding). Unfortunately, the raw-unicode-escape codec does not escape the newline character which is used by pickle to break the input into tokens.... Proposed fix: change the encoding to "unicode-escape" which doesn't have this problem. 
This will break code, but only code that is already broken :-/ ------------------------------------------------------- Date: 2000-Nov-27 14:20 By: tlau Comment: Here's my proposed patch to Lib/pickle.py (cPickle should be changed similarly): --- /scratch/tlau/Python-2.0/Lib/pickle.py Mon Oct 16 14:49:51 2000 +++ pickle.py Mon Nov 27 14:07:01 2000 @@ -286,9 +286,9 @@ encoding = object.encode('utf-8') l = len(encoding) s = mdumps(l)[1:] - self.write(BINUNICODE + s + encoding) + self.write(BINUNICODE + `s` + encoding) else: - self.write(UNICODE + object.encode('raw-unicode-escape') + '\n') + self.write(UNICODE + `object.encode('raw-unicode-escape')` + '\n') memo_len = len(memo) self.write(self.put(memo_len)) @@ -627,7 +627,12 @@ dispatch[BINSTRING] = load_binstring def load_unicode(self): - self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) + rep = self.readline()[:-1] + if not self._is_string_secure(rep): + raise ValueError, "insecure string pickle" + rep = eval(rep, + {'__builtins__': {}}) # Let's be careful + self.append(unicode(rep, 'raw-unicode-escape')) dispatch[UNICODE] = load_unicode def load_binunicode(self): ------------------------------------------------------- Date: 2000-Nov-27 14:14 By: gvanrossum Comment: Jim, do you have time to look into this? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123634&group_id=5470 From noreply@sourceforge.net Wed Dec 20 19:32:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 11:32:21 -0800 Subject: [Python-bugs-list] [Bug #126510] Python 2.0: raw string,backslash in not handled correct Message-ID: Bug #126510, was updated on 2000-Dec-20 11:32 Here is a current snapshot of the bug. Project: Python Category: Parser/Compiler Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: cpchaos Assigned to : nobody Summary: Python 2.0: raw string,backslash in not handled correct Details: When in raw mode, escape-sequences aren't applied, but they still seem to get interpreted! >>> # this works >>> print "\n" >>> print r"\n" \n >>> #but, this does not work! >>> print r"test\" File "", line 1 print r"test\" ^ SyntaxError: invalid token >>> print r"\" File "", line 1 print r"\" ^ SyntaxError: invalid token I think,the bug is "Paser/tokenizer.c" in function PyTokenizer_Get line 818-826. --snip-- else if (c == '\\') { tripcount = 0; c = tok_nextc(tok); if (c == EOF) { tok->done = E_TOKEN; tok->cur = tok->inp; return ERRORTOKEN; } } --snip-- The call of the tok_nextc(tok) funktion returns the quote-character, but it doesn't realizes the end of the string. So it continues to parse for string-termination ... and finally runs into "syntax error"! For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126510&group_id=5470 From noreply@sourceforge.net Wed Dec 20 19:57:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 11:57:58 -0800 Subject: [Python-bugs-list] [Bug #126510] Python 2.0: raw string,backslash in not handled correct Message-ID: Bug #126510, was updated on 2000-Dec-20 11:32 Here is a current snapshot of the bug. Project: Python Category: Parser/Compiler Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: cpchaos Assigned to : nobody Summary: Python 2.0: raw string,backslash in not handled correct Details: When in raw mode, escape-sequences aren't applied, but they still seem to get interpreted! 
>>> # this works >>> print "\n" >>> print r"\n" \n >>> #but, this does not work! >>> print r"test\" File "", line 1 print r"test\" ^ SyntaxError: invalid token >>> print r"\" File "", line 1 print r"\" ^ SyntaxError: invalid token I think,the bug is "Paser/tokenizer.c" in function PyTokenizer_Get line 818-826. --snip-- else if (c == '\\') { tripcount = 0; c = tok_nextc(tok); if (c == EOF) { tok->done = E_TOKEN; tok->cur = tok->inp; return ERRORTOKEN; } } --snip-- The call of the tok_nextc(tok) funktion returns the quote-character, but it doesn't realizes the end of the string. So it continues to parse for string-termination ... and finally runs into "syntax error"! Follow-Ups: Date: 2000-Dec-20 11:57 By: tim_one Comment: Not a bug. Raw strings cannot end with an odd number of backslashes. See this FAQ entry for more detail: http://www.python.org/cgi-bin/faqw.py?req=show&file=faq06.029.htp ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126510&group_id=5470 From noreply@sourceforge.net Wed Dec 20 23:17:37 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 15:17:37 -0800 Subject: [Python-bugs-list] [Bug #124981] zlib decompress of sync-flushed data fails Message-ID: Bug #124981, was updated on 2000-Dec-07 23:25 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: abo Assigned to : akuchling Summary: zlib decompress of sync-flushed data fails Details: I'm not sure if this is just an undocumented limitation or a genuine bug. I'm using python 1.5.2 on winNT. A single decompress of a large amount (16K+) of compressed data that has been sync-flushed fails to produce all the data up to the sync-flush. The data remains inside the decompressor untill further compressed data or a final flush is issued. Note that the 'unused_data' attribute does not show that there is further data in the decompressor to process (it shows ''). A workaround is to decompress the data in smaller chunks. Note that compressing data in smaller chunks is not required, as the problem is in the decompressor, not the compressor. The following code demonstrates the problem, and raises an exception when the compressed data reaches 17K; from zlib import * from random import * # create compressor and decompressor c=compressobj(9) d=decompressobj() # try data sizes of 1-63K for l in range(1,64): # generate random data stream a='' for i in range(l*1024): a=a+chr(randint(0,255)) # compress, sync-flush, and decompress t=d.decompress(c.compress(a)+c.flush(Z_SYNC_FLUSH)) # if decompressed data is different to input data, barf, if len(t) != len(a): print len(a),len(t),len(d.unused_data) raise error Follow-Ups: Date: 2000-Dec-20 15:17 By: abo Comment: I have had a look at this in more detail (Python C interfacing is actually pretty easy :-). I dunno whether to go into details here, but I have noticed that inflate is being called with Z_NO_FLUSH when the zlib docs suggest Z_SYNC_FLUSH or Z_FINISH. However, the zlib implementation only treats Z_FINISH differently, so this should not make a difference (but it might in future versions of zlib). As you have said, .unused_data will only contain data if something is appended to a complete compressed object in the stream fed to a decompressor. Perhaps the docs need something to clarify this, perhaps in the form of an example? 
I am going to write some test progs to see if it's in zlib. At this stage I suspect that it is, and I'm hoping over christmas to get a patch done. ------------------------------------------------------- Date: 2000-Dec-19 11:48 By: akuchling Comment: .unused_data is really a red herring; the PyZlib_objdecompress() loops until zst->avail_in is zero, so .unused_data must always be zero by definition. (The attribute is there to support gzip-format files that may contain multiple compressed streams concatenated together.) I still have no idea what the documentation should say; "don't pass more than 16K of compressed data when you're expecting a sync-flush." I can't see a way to explain this coherently without a big long explanation that will confuse people who don't care about this problem. (Add a special note, or known bugs subsection, maybe?) A simple C test program should be written, in order to check if it's the zlib library itself that's doing this. ------------------------------------------------------- Date: 2000-Dec-18 20:13 By: fdrake Comment: Andrew, please summarize what doc changes are needed, or make the changes (whichever is easier for you is fine). ------------------------------------------------------- Date: 2000-Dec-12 15:18 By: abo Comment: Further comments... After looking at the C code, a few things became clear; I need to read more about C/Python interfacing, and the "unused_data" attribute will only contain data if additional data is fed to a de-compressor at the end of a complete compressed stream. The purpose of the "unused_data" attribute is not clear in the documentation, so that should probably be clarified (mind you, I am looking at pre-2.0 docs so maybe it already has?). The failure to produce all data up to a sync-flush is something else... I'm still looking into it. I'm not sure if it is an inherent limitation of zlib, something that needs to be fixed in zlib, or something that needs to be fixed in the python interface. If it is an inherent limitation, I'd like to characterise it a bit better before documenting it. If it is something that needs to be fixed in either zlib or the python interface, I'd like to fix it. Unfortunately, this is a bit beyond me at the moment, mainly in time, but also a bit in skill (need to read the python/C interfacing documentation). Maybe over the christmas holidays I'll get a chance to fix it. ------------------------------------------------------- Date: 2000-Dec-12 13:32 By: gvanrossum Comment: OK, assigned to Fred. You may ask Andrew what to write. :-) ------------------------------------------------------- Date: 2000-Dec-08 14:50 By: abo Comment: I'm not that sure I'm happy with it just being marked closed. AFAIKT, the implementation definitely doesn't do what the documentation says, so to save people like me time when they hit it, I'prefer the bug at least be assigned to documentation so that the limitation is documented. >From my reading of the documentation as it stands, the fact that there is more pending data in the decompressor should be indicated by it's "unused_data" attribute. The tests seem to show that "decompress()" is only processing 16K of compressed data each call, which would suggest that "unused_data" should contain the rest. However, in all my tests that attribute has always been empty. Perhaps the bug is in there somewhere? 
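Since the comments above ask how the unused_data attribute should be documented, here is one possible illustration of its intended meaning (an assumed usage sketch, not taken from the report): it is only populated when input continues past the end of a complete compressed stream, for example two streams concatenated together.

    import zlib

    first = zlib.compress('spam' * 100)
    second = zlib.compress('eggs' * 100)
    d = zlib.decompressobj()
    data = d.decompress(first + second)   # decodes the first stream only
    leftover = d.unused_data              # raw bytes of the second stream, untouched
    rest = zlib.decompressobj().decompress(leftover)
    print data == 'spam' * 100, rest == 'eggs' * 100   # both comparisons should be true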
Another slight strangeness, even if "unused_data" did contain something, the only way to get it out is by feeding in more compressed data, or issuing a flush(), thus ending the decompression... I guess that since I've been bitten by this, it's up to me to fix it. I've got the source to 2.0 and I'll have a look and see if I can submit a patch. and I was coding this app in python to avoid coding in C :-) ------------------------------------------------------- Date: 2000-Dec-08 09:26 By: akuchling Comment: Python 2.0 demonstrates the problem, too. I'm not sure what this is: a zlibmodule bug/oversight or simply problems with zlib's API. Looking at zlib.h, it implies that you'd have to call inflate() with the flush parameter set to Z_SYNC_FLUSH to get the remaining data. Unfortunately this doesn't seem to help -- .flush() method doesn't support an argument, but when I patch zlibmodule.c to allow one, .flush(Z_SYNC_FLUSH) always fails with a -5: buffer error, perhaps because it expects there to be some new data. (The DEFAULTALLOC constant in zlibmodule.c is 16K, but this seems to be unrelated to the problem showing up with more than 16K of data, since changing DEFAULTALLOC to 32K or 1K makes no difference to the size of data at which the bug shows up.) In short, I have no idea what's at fault, or if it can or should be fixed. Unless you or someone else submits a patch, I'll just leave it alone, and mark this bug as closed and "Won't fix". ------------------------------------------------------- Date: 2000-Dec-08 07:44 By: gvanrossum Comment: I *think* this may have been fixed in Python 2.0. I'm assigning this to Andrew who can confirm that and close the bug report (if it is fixed). ------------------------------------------------------- Date: 2000-Dec-07 23:28 By: abo Comment: Argh... SF killed all my indents... sorry about that. You should be able to figure it out, but if not email me and I can send a copy. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=124981&group_id=5470 From noreply@sourceforge.net Thu Dec 21 02:49:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 18:49:09 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. 
However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. Follow-Ups: Date: 2000-Dec-20 18:49 By: nobody Comment: Could I know why this was deemed to be `Invalid' ? Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-18 20:21 By: gvanrossum Comment: This is not a bug. Saving the traceback as a local variable creates a circular reference that prevents garbage collection. If you don't understand this answer, please write help@python.org. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Thu Dec 21 02:50:14 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 18:50:14 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. Follow-Ups: Date: 2000-Dec-20 18:50 By: nobody Comment: oops, didn't see you comment. Forget about my question... 
Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-20 18:49 By: nobody Comment: Could I know why this was deemed to be `Invalid' ? Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-18 20:21 By: gvanrossum Comment: This is not a bug. Saving the traceback as a local variable creates a circular reference that prevents garbage collection. If you don't understand this answer, please write help@python.org. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Thu Dec 21 02:59:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 18:59:29 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. Follow-Ups: Date: 2000-Dec-20 18:59 By: tim_one Comment: "Invalid" is a just a word -- it comes with SF bug system and isn't defined anywhere. By convention, we pair Not-A-Bug with Invalid, for lack of something better to do. Not-A-Bug means it's not a bug : you may not like the answer, but Guido is saying it's functioning as designed and he has no plans to change that. ------------------------------------------------------- Date: 2000-Dec-20 18:50 By: nobody Comment: oops, didn't see you comment. Forget about my question... Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-20 18:49 By: nobody Comment: Could I know why this was deemed to be `Invalid' ? Phil - phil@commerceflow.com. 
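For readers hitting the same thing: the `del tb' workaround from the report generalizes to a small pattern (a sketch only; the exception and names are arbitrary), which is to drop the local reference to the traceback before the frame exits, for example in a finally clause.

    import sys

    def handle():
        try:
            raise RuntimeError('boom')
        except:
            tb = sys.exc_info()[2]
            try:
                pass            # ... inspect or format tb here ...
            finally:
                del tb          # break the frame <-> traceback cycle

    handle()    # locals of handle(), and objects they refer to, can now be freed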
------------------------------------------------------- Date: 2000-Dec-18 20:21 By: gvanrossum Comment: This is not a bug. Saving the traceback as a local variable creates a circular reference that prevents garbage collection. If you don't understand this answer, please write help@python.org. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Thu Dec 21 06:25:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Dec 2000 22:25:24 -0800 Subject: [Python-bugs-list] [Bug #126564] Default of static linking 'bsddb' breaks 3rd party modules Message-ID: Bug #126564, was updated on 2000-Dec-20 22:25 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Default of static linking 'bsddb' breaks 3rd party modules Details: Python 2.0 builds the 'bsddb' module into the python interpreter *static* by default. When built this way on systems such as debian potato linux and some versions of redhat linux (to name a few) it links statically with an early BerkeleyDB 2.1.x. This causes problems to the current and under-development bsddb 3.x third party modules. They import but the functions they call are from the wrong library so they often coredump or return unexpected error codes. See the py-bsddb project on sourceforge. Also see http://electricrain.com/greg/python/ for the current stable py-bsddb3 module. Short term solution: Make the default build method for this module *shared* instead of static. Long term solution: the py-bsddb project should be able to replace the old bsddb module in the distribution. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126564&group_id=5470 From noreply@sourceforge.net Thu Dec 21 12:03:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Dec 2000 04:03:17 -0800 Subject: [Python-bugs-list] [Bug #126586] Floating point is broken in Python 2.0 Message-ID: Bug #126586, was updated on 2000-Dec-21 04:03 Here is a current snapshot of the bug. Project: Python Category: IDLE Status: Open Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Floating point is broken in Python 2.0 Details: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. IDLE 0.6 -- press F1 for help >>> p=0.6 >>> p 0.59999999999999998 >>> For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126586&group_id=5470 From noreply@sourceforge.net Thu Dec 21 12:21:56 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Dec 2000 04:21:56 -0800 Subject: [Python-bugs-list] [Bug #126587] sre matchobject,groupdict() seems to memory leak Message-ID: Bug #126587, was updated on 2000-Dec-21 04:21 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: sre matchobject,groupdict() seems to memory leak Details: """ This script grinds Windows NT to a halt because of excessive memory usage. The problem disappears if pre is used instead of sre. 
""" import re r=re.compile(r"(?P...)") while 1: mo=r.match("blabla") mo.groupdict() For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126587&group_id=5470 From noreply@sourceforge.net Thu Dec 21 16:15:07 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Dec 2000 08:15:07 -0800 Subject: [Python-bugs-list] [Bug #126586] Floating point is broken in Python 2.0 Message-ID: Bug #126586, was updated on 2000-Dec-21 04:03 Here is a current snapshot of the bug. Project: Python Category: IDLE Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : nobody Summary: Floating point is broken in Python 2.0 Details: Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. IDLE 0.6 -- press F1 for help >>> p=0.6 >>> p 0.59999999999999998 >>> Follow-Ups: Date: 2000-Dec-21 08:15 By: tim_one Comment: This is not a bug. Binary floating point cannot represent decimal fractions exactly, so some rounding always occurs (even in Python 1.5.2). What changed is that Python 2.0 shows more precision than before in certain circumstances (repr() and the interactive prompt). You can use str() or print to get the old, rounded output: >>> print 0.1+0.1 0.2 >>> Follow the link for a detailed example: http://www.python.org/cgi-bin/moinmoin/RepresentationError ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126586&group_id=5470 From noreply@sourceforge.net Fri Dec 22 00:23:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Dec 2000 16:23:43 -0800 Subject: [Python-bugs-list] [Bug #126587] sre matchobject,groupdict() seems to memory leak Message-ID: Bug #126587, was updated on 2000-Dec-21 04:21 Here is a current snapshot of the bug. Project: Python Category: Regular Expressions Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : effbot Summary: sre matchobject,groupdict() seems to memory leak Details: """ This script grinds Windows NT to a halt because of excessive memory usage. The problem disappears if pre is used instead of sre. """ import re r=re.compile(r"(?P...)") while 1: mo=r.match("blabla") mo.groupdict() For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126587&group_id=5470 From noreply@sourceforge.net Fri Dec 22 09:40:59 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 01:40:59 -0800 Subject: [Python-bugs-list] [Bug #126665] makepy crashes parsing "Hauppauge WinTV OCX (b.11) Message-ID: Bug #126665, was updated on 2000-Dec-22 01:40 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: jcable Assigned to : nobody Summary: makepy crashes parsing "Hauppauge WinTV OCX (b.11) Details: Probably a problem with the OCX, but clearly a failure in makepy too: PythonWin 2.0 (#8, Oct 19 2000, 11:30:05) [MSC 32 bit (Intel)] on win32. Portions Copyright 1994-2000 Mark Hammond (MarkH@ActiveState.com) - see 'Help/About PythonWin' for further copyright information. 
>>> Generating to c:\python20\win32com\gen_py\2B143B63-055B-11D2-A96D-00A0C92A2D0Fx0x11x17.py Traceback (most recent call last): File "c:\python20\pythonwin\pywin\framework\scriptutils.py", line 301, in RunScript exec codeObject in __main__.__dict__ File "C:\Python20\win32com\client\makepy.py", line 357, in ? rc = main() File "C:\Python20\win32com\client\makepy.py", line 350, in main GenerateFromTypeLibSpec(arg, f, verboseLevel = verboseLevel, bForDemand = bForDemand, bBuildHidden = hiddenSpec) File "C:\Python20\win32com\client\makepy.py", line 254, in GenerateFromTypeLibSpec gen.generate(fileUse, bForDemand) File "c:\python20\win32com\client\genpy.py", line 665, in generate self.do_generate() File "c:\python20\win32com\client\genpy.py", line 719, in do_generate oleItems, enumItems, recordItems = self.BuildOleItemsFromType() File "c:\python20\win32com\client\genpy.py", line 620, in BuildOleItemsFromType refType = info.GetRefTypeInfo(info.GetRefTypeOfImplType(j)) com_error: (-2147312566, 'Error loading type library/DLL.', None, None) >>> The offending ocx can be found at: http://homepage.ntlworld.com/julian.cable/hcwWinTV.ocx For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126665&group_id=5470 From noreply@sourceforge.net Fri Dec 22 14:06:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 06:06:15 -0800 Subject: [Python-bugs-list] [Bug #116285] Unicode encoders don't report errors properly Message-ID: Bug #116285, was updated on 2000-Oct-06 17:32 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Open Resolution: Remind Bug Group: None Priority: 3 Submitted by: loewis Assigned to : lemburg Summary: Unicode encoders don't report errors properly Details: In current CVS, u"\366".encode("koi8-r") gives '\366'. This is incorrect - koi8-r does not support LATIN SMALL LETTER O WITH DIAERESIS, so it should raise a UnicodeError instead. Follow-Ups: Date: 2000-Dec-22 06:06 By: loewis Comment: A fix for that bug is in http://sourceforge.net/patch/?func=detailpatch&patch_id=103002&group_id=5470 Set group back to None since we are in the 2.1 cycle now. ------------------------------------------------------- Date: 2000-Oct-12 13:26 By: lemburg Comment: Reopened so that the bug doesn't get forgotten in 2.1. Instead of closing the bug, I will set the priority to 3 which should signal "not vital for the Python 2.0 release". ------------------------------------------------------- Date: 2000-Oct-12 11:24 By: lemburg Comment: Closed for 2.0. This request should be reopened for the 2.1 cycle. As Martin pointed out in private mail, the situation with correct error handling is not all that bad: the encoders default to latin-1 mappings (ie. 1-1) when converting Unicode to the encoding in case no mapping is given for the character. The fix would be to add explicit encoding mappings for all supplied standard codecs which map all Latin-1 characters which do not have a corresponding character in the encoding to None. This will then cause the codec to raise an error saying that the mapping is undefined. ------------------------------------------------------- Date: 2000-Oct-09 01:14 By: lemburg Comment: Note that this is due to the way the character mapping codec works: if the dictionary doesn't include a mapping for a certain character it simply copies that character without raising an error. 
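The copy-through behaviour just described can be shown with a toy sketch (this is not the real charmap codec machinery, only an illustration; the mapping and strings are made up): a missing entry is copied through unchanged, while an explicit None entry would signal "undefined" and raise.

    def toy_charmap_encode(u, table):
        out = ''
        for ch in u:
            b = table.get(ord(ch), ord(ch))   # no entry: copied straight through
            if b is None:                     # explicit None entry: undefined
                raise UnicodeError('character maps to <undefined>')
            out = out + chr(b)
        return out

    print repr(toy_charmap_encode(u'A\366', {ord('A'): ord('A')}))   # 'A\xf6'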
All standard codecs in Python 2.0 which use the generic character codec only contain explicit mappings from the encoding to Unicode (for the decoding part). When encoding from Unicode to the encoding, the decoding map is simply reversed. To produce correct error output in all possible cases, the reverse mapping would have to include all Unicode characters which cannot be mapped to a encoding character (and map these to None). This is not feasable, so the "bug" is hard to fix... certainly not for Python 2.0. I'm setting the bug report to "Feature Request" meaning that it should be reopened for the 2.1 cycle. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116285&group_id=5470 From noreply@sourceforge.net Fri Dec 22 14:39:46 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 06:39:46 -0800 Subject: [Python-bugs-list] [Bug #126587] sre matchobject,groupdict() seems to memory leak Message-ID: Bug #126587, was updated on 2000-Dec-21 04:21 Here is a current snapshot of the bug. Project: Python Category: Regular Expressions Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: nobody Assigned to : effbot Summary: sre matchobject,groupdict() seems to memory leak Details: """ This script grinds Windows NT to a halt because of excessive memory usage. The problem disappears if pre is used instead of sre. """ import re r=re.compile(r"(?P...)") while 1: mo=r.match("blabla") mo.groupdict() Follow-Ups: Date: 2000-Dec-22 06:39 By: akuchling Comment: Fixed in CVS revision 2.47 of _sre.c. Thanks for reporting this bug! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126587&group_id=5470 From noreply@sourceforge.net Fri Dec 22 14:43:31 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 06:43:31 -0800 Subject: [Python-bugs-list] [Bug #123225] asyncore.py should use select.poll(), not "import poll" Message-ID: Bug #123225, was updated on 2000-Nov-22 20:41 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: Later Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : jhylton Summary: asyncore.py should use select.poll(), not "import poll" Details: The Python2.0 asyncore module is able to use the poll system call (claimed to be much more efficient than select for large numbers of requests), however, it tries to "import poll", which fails (there is no pollmodule supplied with Python2.0), whereas it could use the poll available with the select module (select.poll()). Follow-Ups: Date: 2000-Dec-22 06:43 By: akuchling Comment: A patch to use Python 2.0's select.poll() has been sent off to the medusa@egroups.com mailing list. Assuming the patch is accepted, this bug will be fixed when we pick up the latest version of asyncore.py before Python 2.1 is finalized. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123225&group_id=5470 From noreply@sourceforge.net Sat Dec 23 03:16:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 19:16:43 -0800 Subject: [Python-bugs-list] [Bug #121479] Compiler warnings on Solaris Message-ID: Bug #121479, was updated on 2000-Nov-03 14:53 Here is a current snapshot of the bug. 
Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: gward Assigned to : gward Summary: Compiler warnings on Solaris Details: GCC 2.95.2 on Solaris 2.6 reports a bunch of warnings building the latest CVS source. Here's the complete list: intrcheck.c:151: warning: function declaration isn't a prototype intrcheck.c: In function `PyOS_InitInterrupts': intrcheck.c:156: warning: function declaration isn't a prototype intrcheck.c:156: warning: function declaration isn't a prototype floatobject.c:35: warning: function declaration isn't a prototype intobject.c: In function `PyInt_FromString': intobject.c:185: warning: subscript has type `char' bltinmodule.c: In function `builtin_ord': bltinmodule.c:1507: warning: `ord' might be used uninitialized in this function errors.c: In function `PyErr_Format': errors.c:405: warning: subscript has type `char' errors.c:460: warning: subscript has type `char' errors.c:465: warning: subscript has type `char' errors.c:468: warning: subscript has type `char' pythonrun.c: In function `initsigs': pythonrun.c:1134: warning: function declaration isn't a prototype ./posixmodule.c: In function `posix_confstr': ./posixmodule.c:4471: warning: implicit declaration of function `confstr' ./signalmodule.c:88: warning: function declaration isn't a prototype ./signalmodule.c: In function `signal_signal': ./signalmodule.c:212: warning: function declaration isn't a prototype ./signalmodule.c:214: warning: function declaration isn't a prototype ./signalmodule.c:225: warning: function declaration isn't a prototype ./signalmodule.c: In function `initsignal': ./signalmodule.c:332: warning: function declaration isn't a prototype ./signalmodule.c:336: warning: function declaration isn't a prototype ./signalmodule.c:355: warning: function declaration isn't a prototype ./signalmodule.c:357: warning: function declaration isn't a prototype ./signalmodule.c: In function `finisignal': ./signalmodule.c:556: warning: function declaration isn't a prototype ./signalmodule.c:564: warning: function declaration isn't a prototype make[1]: [add2lib] Error 2 (ignored) ./stropmodule.c: In function `strop_atoi': ./stropmodule.c:752: warning: subscript has type `char' ./timemodule.c: In function `time_strptime': ./timemodule.c:385: warning: subscript has type `char' ./socketmodule.c: In function `PySocket_socket': ./socketmodule.c:1768: warning: function declaration isn't a prototype ./socketmodule.c: In function `PySocket_fromfd': ./socketmodule.c:1806: warning: function declaration isn't a prototype I'll look into these one at a time and see how many I can fix. Follow-Ups: Date: 2000-Dec-22 19:16 By: nobody Comment: ha ------------------------------------------------------- Date: 2000-Dec-19 18:48 By: nobody Comment: Patch submitted for the bltinmodule.c warning. The errors.c warnings are because isdigit() & friends expect an int, and the code is using *f, which is a char. isdigit() is a macro on Solaris. Presumably the fix is to use (int)*f on those lines. Same cause for the ones in stropmodule.c and intobject.c, I think. The warnings in socketmodule.c, and presumably the ones in signalmodule.c, intrcheck.c, and pythonrun.c too, seem to be because of Solaris's SIG_IGN. I suspect GCC is getting confused by it. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=121479&group_id=5470 From noreply@sourceforge.net Sat Dec 23 05:52:49 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Dec 2000 21:52:49 -0800 Subject: [Python-bugs-list] [Bug #126700] Demo/curses/tclock.py raises error. Message-ID: Bug #126700, was updated on 2000-Dec-22 21:52 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: fdrake Assigned to : akuchling Summary: Demo/curses/tclock.py raises error. Details: Demo/curses/tclock.py raises the following exception: Traceback (most recent call last): File "tclock.py", line 149, in ? curses.wrapper(main) File "/home/fdrake/projects/python/Lib/curses/wrapper.py", line 44, in wrapper res = apply(func, (stdscr,) + rest) File "tclock.py", line 96, in main stdscr.addstr(cy - sdy, cx + sdx, "%d" % (i + 1)) _curses.error: addstr() returned ERR This is on Linux-Mandrake 7.1, ncurses packages ncurses-5.0-13mdk and ncurses-devel-5.0-13mdk. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126700&group_id=5470 From noreply@sourceforge.net Sat Dec 23 14:19:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Dec 2000 06:19:18 -0800 Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z] Message-ID: Bug #126706, was updated on 2000-Dec-23 06:19 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: many std modules assume string.letters is [a-zA-Z] Details: there are many modules in the standard library that use string.letters to mean A-Za-z, but that assumption is incorrect when locales are in use. also the readline library seems to cause the locale to be set according to the current environment variables, even if i don't call locale.*: % python2.0 -c 'import string; print string.letters' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ % python2.0 Python 2.0 (#3, Oct 19 2000, 01:42:41) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> print string.letters abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ >>> here's what grep says on the standard library. most of these uses seem incorrect to me: % grep string.letters **/*.py Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"cmd.py:IDENTCHARS = string.letters + string.digits + '_' dospath.py: varchars = string.letters + string.digits + '_-' lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters ntpath.py: varchars = string.letters + string.digits + '_-' nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters: pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted pre.py: alphanum=string.letters+'_'+string.digits tokenize.py: namechars, numchars = string.letters + '_', string.digits urlparse.py:scheme_chars = string.letters + string.digits + '+-.' 
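If A-Za-z is really what a module means, one locale-independent spelling, in the spirit of the ascii_* constants suggested in later follow-ups in this thread, is simply to write the alphabet out once (the names here are illustrative, not an existing API):

    import string

    ASCII_LOWERCASE = 'abcdefghijklmnopqrstuvwxyz'
    ASCII_UPPERCASE = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    ASCII_LETTERS   = ASCII_LOWERCASE + ASCII_UPPERCASE

    # a locale-independent version of the cmd.py idiom grep'd above:
    IDENTCHARS = ASCII_LETTERS + string.digits + '_'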
For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126706&group_id=5470 From noreply@sourceforge.net Sat Dec 23 14:51:05 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Dec 2000 06:51:05 -0800 Subject: [Python-bugs-list] [Bug #126700] Demo/curses/tclock.py raises error. Message-ID: Bug #126700, was updated on 2000-Dec-22 21:52 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Open Resolution: None Bug Group: None Priority: 3 Submitted by: fdrake Assigned to : akuchling Summary: Demo/curses/tclock.py raises error. Details: Demo/curses/tclock.py raises the following exception: Traceback (most recent call last): File "tclock.py", line 149, in ? curses.wrapper(main) File "/home/fdrake/projects/python/Lib/curses/wrapper.py", line 44, in wrapper res = apply(func, (stdscr,) + rest) File "tclock.py", line 96, in main stdscr.addstr(cy - sdy, cx + sdx, "%d" % (i + 1)) _curses.error: addstr() returned ERR This is on Linux-Mandrake 7.1, ncurses packages ncurses-5.0-13mdk and ncurses-devel-5.0-13mdk. Follow-Ups: Date: 2000-Dec-23 06:51 By: akuchling Comment: Ah, you must have run it in a tall skinny window. Fixed in revision 1.2, I think. Please close this bug report if it now works for you. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126700&group_id=5470 From noreply@sourceforge.net Sat Dec 23 16:58:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Dec 2000 08:58:57 -0800 Subject: [Python-bugs-list] [Bug #126700] Demo/curses/tclock.py raises error. Message-ID: Bug #126700, was updated on 2000-Dec-22 21:52 Here is a current snapshot of the bug. Project: Python Category: demos and tools Status: Closed Resolution: Fixed Bug Group: None Priority: 3 Submitted by: fdrake Assigned to : akuchling Summary: Demo/curses/tclock.py raises error. Details: Demo/curses/tclock.py raises the following exception: Traceback (most recent call last): File "tclock.py", line 149, in ? curses.wrapper(main) File "/home/fdrake/projects/python/Lib/curses/wrapper.py", line 44, in wrapper res = apply(func, (stdscr,) + rest) File "tclock.py", line 96, in main stdscr.addstr(cy - sdy, cx + sdx, "%d" % (i + 1)) _curses.error: addstr() returned ERR This is on Linux-Mandrake 7.1, ncurses packages ncurses-5.0-13mdk and ncurses-devel-5.0-13mdk. Follow-Ups: Date: 2000-Dec-23 08:58 By: fdrake Comment: Andrew fixed this in Demo/curses/tclock.py revision 1.2; closing the bug. Your diagnosis was right on the mark. ------------------------------------------------------- Date: 2000-Dec-23 06:51 By: akuchling Comment: Ah, you must have run it in a tall skinny window. Fixed in revision 1.2, I think. Please close this bug report if it now works for you. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126700&group_id=5470 From noreply@sourceforge.net Sun Dec 24 16:08:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Dec 2000 08:08:04 -0800 Subject: [Python-bugs-list] [Bug #126766] popen('python -c"...."') tends to hang Message-ID: Bug #126766, was updated on 2000-Dec-24 08:08 Here is a current snapshot of the bug. 
Project: Python Category: Windows Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sabren Assigned to : nobody Summary: popen('python -c"...."') tends to hang Details: eg: import os os.popen('python -c"x=1;print x"').readlines() .. On my machine, using popen to call a second instance of python almost always causes python to freeze. No window pops up, but if I press alt-tab, there's an icon for w9xpopen.exe oddly: >>> os.popen('python -c"print"').readlines() and >> os.popen('python -c""').readlines() both work fine. ... This bug is different from #114780 in that it is repeatable and consistent. It happens on open, so is different from #125891. Eg: >>> proc = os.popen('python -c"x=1; print x"') will cause the crash. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126766&group_id=5470 From noreply@sourceforge.net Sun Dec 24 16:11:45 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Dec 2000 08:11:45 -0800 Subject: [Python-bugs-list] [Bug #126766] popen('python -c"...."') tends to hang Message-ID: Bug #126766, was updated on 2000-Dec-24 08:08 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sabren Assigned to : nobody Summary: popen('python -c"...."') tends to hang Details: eg: import os os.popen('python -c"x=1;print x"').readlines() .. On my machine, using popen to call a second instance of python almost always causes python to freeze. No window pops up, but if I press alt-tab, there's an icon for w9xpopen.exe oddly: >>> os.popen('python -c"print"').readlines() and >> os.popen('python -c""').readlines() both work fine. ... This bug is different from #114780 in that it is repeatable and consistent. It happens on open, so is different from #125891. Eg: >>> proc = os.popen('python -c"x=1; print x"') will cause the crash. Follow-Ups: Date: 2000-Dec-24 08:11 By: sabren Comment: .. er.. whoops.. It hangs/freezes, not crashes. And in fact, it occasionally returns control to python after several minutes. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126766&group_id=5470 From noreply@sourceforge.net Sun Dec 24 16:40:09 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Dec 2000 08:40:09 -0800 Subject: [Python-bugs-list] [Bug #116008] Subsection Hypertext Links are broken in HTML Docs Message-ID: Bug #116008, was updated on 2000-Oct-04 07:33 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: pefu Assigned to : fdrake Summary: Subsection Hypertext Links are broken in HTML Docs Details: For example load ftp://python.beopen.com/pub/docco/devel/tut/node3.html into your favorite HTML browser and click on the link labeled "1.1 Where >From Here". It doesn't work as it used to work before in the 1.5.2 docs. Unfortunately I can't tell which change to the latex2html engine broke this. Follow-Ups: Date: 2000-Dec-24 08:40 By: anthon Comment: This is stil broken, and not caused by latex2html. If you render the HTML without the python specific initialisation it works fine. The problem seems to be in Doc/perl/python.perl. As anchor_invisible_mark is set to emtpy string the gets optimized away. Setting it to   (the change comes on a newline before a
, you don't see it) seems to take away the problem. A solution would be to not optimize empty structs away if they are
. Since I now little PERL, I had no clue where to start for that. ------------------------------------------------------- Date: 2000-Dec-12 13:12 By: gvanrossum Comment: Is this still broken, Fred? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=116008&group_id=5470 From noreply@sourceforge.net Sun Dec 24 17:40:28 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Dec 2000 09:40:28 -0800 Subject: [Python-bugs-list] [Bug #126766] popen('python -c"...."') tends to hang Message-ID: Bug #126766, was updated on 2000-Dec-24 08:08 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sabren Assigned to : mhammond Summary: popen('python -c"...."') tends to hang Details: eg: import os os.popen('python -c"x=1;print x"').readlines() .. On my machine, using popen to call a second instance of python almost always causes python to freeze. No window pops up, but if I press alt-tab, there's an icon for w9xpopen.exe oddly: >>> os.popen('python -c"print"').readlines() and >> os.popen('python -c""').readlines() both work fine. ... This bug is different from #114780 in that it is repeatable and consistent. It happens on open, so is different from #125891. Eg: >>> proc = os.popen('python -c"x=1; print x"') will cause the crash. Follow-Ups: Date: 2000-Dec-24 09:40 By: tim_one Comment: Mark, any idea? The first example also appears to hang for me consistently (W98SE). In a debug build under the debugger, breaking during the hang yields a gibberish disassembly window (i.e., it's not showing code!), so I didn't get anywhere after 5 minutes of thrashing. ------------------------------------------------------- Date: 2000-Dec-24 08:11 By: sabren Comment: .. er.. whoops.. It hangs/freezes, not crashes. And in fact, it occasionally returns control to python after several minutes. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126766&group_id=5470 From noreply@sourceforge.net Mon Dec 25 09:53:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Dec 2000 01:53:21 -0800 Subject: [Python-bugs-list] [Bug #126790] SIGSEGV in chunk_malloc/Python 1.5.2 Linux i386 Message-ID: Bug #126790, was updated on 2000-Dec-25 01:53 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: ajung Assigned to : nobody Summary: SIGSEGV in chunk_malloc/Python 1.5.2 Linux i386 Details: Inside a Zope application python dies while calling the constructor of a class with a SIGSEGV. GDB traceback is available from http://www.andreas-jung.com/tmp/tb.gz (160KB) For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126790&group_id=5470 From noreply@sourceforge.net Tue Dec 26 16:01:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 08:01:01 -0800 Subject: [Python-bugs-list] [Bug #126836] curses.ascii.isspace(' ') == 0 (!) Message-ID: Bug #126836, was updated on 2000-Dec-26 08:01 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: mwh Assigned to : nobody Summary: curses.ascii.isspace(' ') == 0 (!) Details: I was surprised, at least. 
My internet access is restricted to my parents' iMac at the moment, so I can't generate a patch - but this should be too hard... For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126836&group_id=5470 From noreply@sourceforge.net Tue Dec 26 16:03:34 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 08:03:34 -0800 Subject: [Python-bugs-list] [Bug #126790] SIGSEGV in chunk_malloc/Python 1.5.2 Linux i386 Message-ID: Bug #126790, was updated on 2000-Dec-25 01:53 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: ajung Assigned to : nobody Summary: SIGSEGV in chunk_malloc/Python 1.5.2 Linux i386 Details: Inside a Zope application python dies while calling the constructor of a class with a SIGSEGV. GDB traceback is available from http://www.andreas-jung.com/tmp/tb.gz (160KB) Follow-Ups: Date: 2000-Dec-26 08:03 By: akuchling Comment: Not a bug. This looks like a runaway recursion in Python code, because the stack is 32583 levels deep. You could try using sys.setrecursionlimit() to decrease how deep Python code can recurse, which should help you get a Python traceback instead of a core dump. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126790&group_id=5470 From noreply@sourceforge.net Tue Dec 26 16:10:10 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 08:10:10 -0800 Subject: [Python-bugs-list] [Bug #126836] curses.ascii.isspace(' ') == 0 (!) Message-ID: Bug #126836, was updated on 2000-Dec-26 08:01 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: mwh Assigned to : akuchling Summary: curses.ascii.isspace(' ') == 0 (!) Details: I was surprised, at least. My internet access is restricted to my parents' iMac at the moment, so I can't generate a patch - but this should be too hard... Follow-Ups: Date: 2000-Dec-26 08:10 By: akuchling Comment: Eek, you're right! Fixed in revision 1.4 of curses/ascii.py. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126836&group_id=5470 From noreply@sourceforge.net Tue Dec 26 16:15:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 08:15:15 -0800 Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z] Message-ID: Bug #126706, was updated on 2000-Dec-23 06:19 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: many std modules assume string.letters is [a-zA-Z] Details: there are many modules in the standard library that use string.letters to mean A-Za-z, but that assumption is incorrect when locales are in use. also the readline library seems to cause the locale to be set according to the current environment variables, even if i don't call locale.*: % python2.0 -c 'import string; print string.letters' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ % python2.0 Python 2.0 (#3, Oct 19 2000, 01:42:41) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. 
>>> print string.letters abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ >>> here's what grep says on the standard library. most of these uses seem incorrect to me: % grep string.letters **/*.py Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"cmd.py:IDENTCHARS = string.letters + string.digits + '_' dospath.py: varchars = string.letters + string.digits + '_-' lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters ntpath.py: varchars = string.letters + string.digits + '_-' nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters: pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted pre.py: alphanum=string.letters+'_'+string.digits tokenize.py: namechars, numchars = string.letters + '_', string.digits urlparse.py:scheme_chars = string.letters + string.digits + '+-.' Follow-Ups: Date: 2000-Dec-26 08:15 By: akuchling Comment: The docs for the string module say that, for example, string.lowercase is " A string containing all the characters that are considered lowercase letters." This implies that the strings are locale-aware; code that uses string.lowercase to mean only a-z is therefore in error. (.digits is not locale-aware.) Solution: I'd suggest adding new, not locale-aware, constants. string.alphabet, string.lower_alphabet, string.upper_alphabet, maybe? Code should then be changed to use these new constants. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126706&group_id=5470 From noreply@sourceforge.net Tue Dec 26 20:06:14 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 12:06:14 -0800 Subject: [Python-bugs-list] [Bug #126850] file.seek() docs should mention append mode Message-ID: Bug #126850, was updated on 2000-Dec-26 12:06 Here is a current snapshot of the bug. Project: Python Category: None Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: file.seek() docs should mention append mode Details: f.seek() does nothing for files opened in 'a' mode (at least on linux). it is mentioned in passing in open() docs, but it should be mentioned in the seek() docs also. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126850&group_id=5470 From noreply@sourceforge.net Tue Dec 26 20:07:02 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 12:07:02 -0800 Subject: [Python-bugs-list] [Bug #126851] ftplib.py should default to passive mode Message-ID: Bug #126851, was updated on 2000-Dec-26 12:07 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: flight Assigned to : nobody Summary: ftplib.py should default to passive mode Details: For the Debian package, there has been the request that the ftplib module should by default use passive FTP. Any comments [Forwarded from the Debian bug tracking system, bug#71823] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71823&repeatmerged=yes Sender: Mike Fisk Package: python-base Version: 1.5.2-10 This is an upstream bug that has existed for quite a while (probably forever). With many systems living behind firewalls (including their own ipchains filters), passive FTP should be the default for FTP clients. 
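Whatever the default ends up being, a client can already opt in per connection via ftplib's set_pasv() method; a minimal sketch (the host name is illustrative):

    from ftplib import FTP

    ftp = FTP('ftp.example.org')   # illustrative host
    ftp.login()                    # anonymous login
    ftp.set_pasv(1)                # use PASV data connections from here on
    ftp.retrlines('LIST')
    ftp.quit()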
It always has been for Netscape and there hasn't been much uproar about that being bad. Python's ftplib.py supports passive mode, but defaults to non-passive mode. ftplib.py is used by other Debian packages such as the downloader in xanim-modules. The result, when living behind many firewalls, is that you can't download anything using ftplib.py or urllib.py. The patch to fix this is trivial: --- /usr/lib/python1.5/ftplib.py Sat Sep 16 14:31:35 2000 +++ /tmp/ftplib.py Sat Sep 16 14:31:24 2000 @@ -112,7 +112,7 @@ - port: port to connect to (integer, default previous port)''' if host: self.host = host if port: self.port = port - self.passiveserver = 0 + self.passiveserver = 1 self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.connect(self.host, self.port) self.file = self.sock.makefile('rb') -- Mike Fisk, RADIANT Team, Network Engineering Group, Los Alamos National Lab See http://home.lanl.gov/mfisk/ for contact information For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126851&group_id=5470 From noreply@sourceforge.net Tue Dec 26 20:18:57 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 12:18:57 -0800 Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z] Message-ID: Bug #126706, was updated on 2000-Dec-23 06:19 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: many std modules assume string.letters is [a-zA-Z] Details: there are many modules in the standard library that use string.letters to mean A-Za-z, but that assumption is incorrect when locales are in use. also the readline library seems to cause the locale to be set according to the current environment variables, even if i don't call locale.*: % python2.0 -c 'import string; print string.letters' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ % python2.0 Python 2.0 (#3, Oct 19 2000, 01:42:41) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> print string.letters abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ >>> here's what grep says on the standard library. most of these uses seem incorrect to me: % grep string.letters **/*.py Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"cmd.py:IDENTCHARS = string.letters + string.digits + '_' dospath.py: varchars = string.letters + string.digits + '_-' lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters ntpath.py: varchars = string.letters + string.digits + '_-' nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters: pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted pre.py: alphanum=string.letters+'_'+string.digits tokenize.py: namechars, numchars = string.letters + '_', string.digits urlparse.py:scheme_chars = string.letters + string.digits + '+-.' Follow-Ups: Date: 2000-Dec-26 12:18 By: nobody Comment: string.ascii_letters etc is more precise than alphabet, imho. -- erno@iki.fi ------------------------------------------------------- Date: 2000-Dec-26 08:15 By: akuchling Comment: The docs for the string module say that, for example, string.lowercase is " A string containing all the characters that are considered lowercase letters." 
This implies that the strings are locale-aware; code that uses string.lowercase to mean only a-z is therefore in error. (.digits is not locale-aware.) Solution: I'd suggest adding new, not locale-aware, constants. string.alphabet, string.lower_alphabet, string.upper_alphabet, maybe? Code should then be changed to use these new constants. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126706&group_id=5470 From noreply@sourceforge.net Tue Dec 26 23:00:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 15:00:26 -0800 Subject: [Python-bugs-list] [Bug #126863] getopt long option handling broken Message-ID: Bug #126863, was updated on 2000-Dec-26 15:00 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: getopt long option handling broken Details: This problem is still present in the CVS version. [Forwarded from the Debian bug tracking system, Bug#80243] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=80243&repeatmerged=yes Sender: Matt Kraai Package: python-base Version: 1.5.2-10 If a long option which takes an argument is a prefix of a longer option, and if the first new character of the longer option is less than '=' in ascii, getopt returns an incorrect message that the prefix is not unique. For example, Script started on Thu Dec 21 14:19:44 2000 kraai@opensource:~$ python Python 1.5.2 (#0, Apr 3 2000, 14:46:48) [GCC 2.95.2 20000313 (Debian GNU/Linux)] on linux2 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import getopt >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foobar"]) ([('--foo', 'bar')], []) >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foo-bar"]) Traceback (innermost last): File "", line 1, in ? File "/usr/lib/python1.5/getopt.py", line 58, in getopt opts, args = do_longs(opts, args[0][2:], longopts, args[1:]) File "/usr/lib/python1.5/getopt.py", line 71, in do_longs has_arg, opt = long_has_args(opt, longopts) File "/usr/lib/python1.5/getopt.py", line 93, in long_has_args raise error, 'option --%s not a unique prefix' % opt getopt.error: option --foo not a unique prefix >>> kraai@opensource:~$ Script done on Thu Dec 21 14:20:02 2000 The problem is that the trailing '=' causes the foo-bar option to precede the foo one, whereas the code assumes that the shortest option is first. The appended patch fixes this by sorting based on the option itself, not including the extra '='. I assume there is a better way to do this. 
--- getopt.py.orig Mon Apr 3 06:49:15 2000 +++ getopt.py Thu Dec 21 13:31:21 2000 @@ -49,7 +49,7 @@ longopts = [longopts] else: longopts = list(longopts) - longopts.sort() + longopts.sort(longopt_compare) while args and args[0][:1] == '-' and args[0] != '-': if args[0] == '--': args = args[1:] @@ -115,6 +115,18 @@ if opt == shortopts[i] != ':': return shortopts[i+1:i+2] == ':' raise error, 'option -%s not recognized' % opt + +def longopt_compare(opt1, opt2): + if opt1[-1] == '=': + opt1 = opt1[:-1] + if opt2[-1] == '=': + opt2 = opt2[:-1] + if opt1 < opt2: + return -1 + elif opt1 == opt2: + return 0 + else: + return 1 if __name__ == '__main__': import sys Matt For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470 From noreply@sourceforge.net Tue Dec 26 23:36:10 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 15:36:10 -0800 Subject: [Python-bugs-list] [Bug #126866] (xml.dom.minidom.Document()).toxml() breakable Message-ID: Bug #126866, was updated on 2000-Dec-26 15:36 Here is a current snapshot of the bug. Project: Python Category: XML Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: iainlamb Assigned to : nobody Summary: (xml.dom.minidom.Document()).toxml() breakable Details: Consider this code: from xml.dom.minidom import Document import sys e = Exception() try: raise e except: type = sys.exc_info()[0] d = Document() node = d.createTextNode(type) d.appendChild(node) print d.toxml() It's derived from a case where I inadvertently passed a non-string object (I was trying to represent the exception type) into createTextNode(). Run it and you'll get: Traceback (most recent call last): File "", line 11, in ? File "c:\python\lib\xml\dom\minidom.py", line 83, in toxml self.writexml(writer) File "c:\python\lib\xml\dom\minidom.py", line 461, in writexml node.writexml(writer) File "c:\python\lib\xml\dom\minidom.py", line 400, in writexml _write_data(writer, self.data) File "c:\python\lib\xml\dom\minidom.py", line 153, in _write_data data = string.replace(data, "&", "&") File "c:\python\lib\string.py", line 363, in replace return s.replace(old, new, maxsplit) AttributeError: replace I suggest you convert the text node's contents to a string before making the call to string.replace() in minidom.py Thanks for a cool dom implementation! - Iain Lamb For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126866&group_id=5470 From noreply@sourceforge.net Wed Dec 27 03:25:36 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Dec 2000 19:25:36 -0800 Subject: [Python-bugs-list] [Bug #126850] file.seek() docs should mention append mode Message-ID: Bug #126850, was updated on 2000-Dec-26 12:06 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : fdrake Summary: file.seek() docs should mention append mode Details: f.seek() does nothing for files opened in 'a' mode (at least on linux). it is mentioned in passing in open() docs, but it should be mentioned in the seek() docs also. Follow-Ups: Date: 2000-Dec-26 19:25 By: tim_one Comment: Assigned to Fred. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126850&group_id=5470 From noreply@sourceforge.net Wed Dec 27 08:08:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 00:08:21 -0800 Subject: [Python-bugs-list] [Bug #126863] getopt long option handling broken Message-ID: Bug #126863, was updated on 2000-Dec-26 15:00 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : fdrake Summary: getopt long option handling broken Details: This problem is still present in the CVS version. [Forwarded from the Debian bug tracking system, Bug#80243] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=80243&repeatmerged=yes Sender: Matt Kraai Package: python-base Version: 1.5.2-10 If a long option which takes an argument is a prefix of a longer option, and if the first new character of the longer option is less than '=' in ascii, getopt returns an incorrect message that the prefix is not unique. For example, Script started on Thu Dec 21 14:19:44 2000 kraai@opensource:~$ python Python 1.5.2 (#0, Apr 3 2000, 14:46:48) [GCC 2.95.2 20000313 (Debian GNU/Linux)] on linux2 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import getopt >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foobar"]) ([('--foo', 'bar')], []) >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foo-bar"]) Traceback (innermost last): File "", line 1, in ? File "/usr/lib/python1.5/getopt.py", line 58, in getopt opts, args = do_longs(opts, args[0][2:], longopts, args[1:]) File "/usr/lib/python1.5/getopt.py", line 71, in do_longs has_arg, opt = long_has_args(opt, longopts) File "/usr/lib/python1.5/getopt.py", line 93, in long_has_args raise error, 'option --%s not a unique prefix' % opt getopt.error: option --foo not a unique prefix >>> kraai@opensource:~$ Script done on Thu Dec 21 14:20:02 2000 The problem is that the trailing '=' causes the foo-bar option to precede the foo one, whereas the code assumes that the shortest option is first. The appended patch fixes this by sorting based on the option itself, not including the extra '='. I assume there is a better way to do this. --- getopt.py.orig Mon Apr 3 06:49:15 2000 +++ getopt.py Thu Dec 21 13:31:21 2000 @@ -49,7 +49,7 @@ longopts = [longopts] else: longopts = list(longopts) - longopts.sort() + longopts.sort(longopt_compare) while args and args[0][:1] == '-' and args[0] != '-': if args[0] == '--': args = args[1:] @@ -115,6 +115,18 @@ if opt == shortopts[i] != ':': return shortopts[i+1:i+2] == ':' raise error, 'option -%s not recognized' % opt + +def longopt_compare(opt1, opt2): + if opt1[-1] == '=': + opt1 = opt1[:-1] + if opt2[-1] == '=': + opt2 = opt2[:-1] + if opt1 < opt2: + return -1 + elif opt1 == opt2: + return 0 + else: + return 1 if __name__ == '__main__': import sys Matt Follow-Ups: Date: 2000-Dec-27 00:08 By: tim_one Comment: Assigned to Fred and changed category to Doc: Fred, the getopt docs don't say anything now about accepting a unique prefix for long option names. The logic error here is fixed in CVS now, getopt.py rev 1.12 and test_getopt.py rev 1.3. Function long_has_args was excruciating, so rewrote it. 
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470 From noreply@sourceforge.net Wed Dec 27 11:47:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 03:47:21 -0800 Subject: [Python-bugs-list] [Bug #126863] getopt long option handling broken Message-ID: Bug #126863, was updated on 2000-Dec-26 15:00 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : fdrake Summary: getopt long option handling broken Details: This problem is still present in the CVS version. [Forwarded from the Debian bug tracking system, Bug#80243] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=80243&repeatmerged=yes Sender: Matt Kraai Package: python-base Version: 1.5.2-10 If a long option which takes an argument is a prefix of a longer option, and if the first new character of the longer option is less than '=' in ascii, getopt returns an incorrect message that the prefix is not unique. For example, Script started on Thu Dec 21 14:19:44 2000 kraai@opensource:~$ python Python 1.5.2 (#0, Apr 3 2000, 14:46:48) [GCC 2.95.2 20000313 (Debian GNU/Linux)] on linux2 Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam >>> import getopt >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foobar"]) ([('--foo', 'bar')], []) >>> getopt.getopt(["--foo", "bar"], "", ["foo=", "foo-bar"]) Traceback (innermost last): File "", line 1, in ? File "/usr/lib/python1.5/getopt.py", line 58, in getopt opts, args = do_longs(opts, args[0][2:], longopts, args[1:]) File "/usr/lib/python1.5/getopt.py", line 71, in do_longs has_arg, opt = long_has_args(opt, longopts) File "/usr/lib/python1.5/getopt.py", line 93, in long_has_args raise error, 'option --%s not a unique prefix' % opt getopt.error: option --foo not a unique prefix >>> kraai@opensource:~$ Script done on Thu Dec 21 14:20:02 2000 The problem is that the trailing '=' causes the foo-bar option to precede the foo one, whereas the code assumes that the shortest option is first. The appended patch fixes this by sorting based on the option itself, not including the extra '='. I assume there is a better way to do this. --- getopt.py.orig Mon Apr 3 06:49:15 2000 +++ getopt.py Thu Dec 21 13:31:21 2000 @@ -49,7 +49,7 @@ longopts = [longopts] else: longopts = list(longopts) - longopts.sort() + longopts.sort(longopt_compare) while args and args[0][:1] == '-' and args[0] != '-': if args[0] == '--': args = args[1:] @@ -115,6 +115,18 @@ if opt == shortopts[i] != ':': return shortopts[i+1:i+2] == ':' raise error, 'option -%s not recognized' % opt + +def longopt_compare(opt1, opt2): + if opt1[-1] == '=': + opt1 = opt1[:-1] + if opt2[-1] == '=': + opt2 = opt2[:-1] + if opt1 < opt2: + return -1 + elif opt1 == opt2: + return 0 + else: + return 1 if __name__ == '__main__': import sys Matt Follow-Ups: Date: 2000-Dec-27 03:47 By: flight Comment: Ooops, forgot to login before sending this bug. In case of questions, please ask me ------------------------------------------------------- Date: 2000-Dec-27 00:08 By: tim_one Comment: Assigned to Fred and changed category to Doc: Fred, the getopt docs don't say anything now about accepting a unique prefix for long option names. The logic error here is fixed in CVS now, getopt.py rev 1.12 and test_getopt.py rev 1.3. Function long_has_args was excruciating, so rewrote it. 
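For readers following the thread, here is a rough sketch of prefix matching that does not depend on the sort order of longopts at all; it only illustrates the idea and is not the code checked in as getopt.py rev 1.12:

class GetoptError(Exception):
    pass

def long_has_args(opt, longopts):
    # Return (takes_argument, canonical_name) for a possibly abbreviated
    # long option, without assuming longopts is sorted in any order.
    possibilities = [o for o in longopts if o.startswith(opt)]
    if not possibilities:
        raise GetoptError('option --%s not recognized' % opt)
    # An exact match always wins, with or without a required argument.
    if opt in possibilities:
        return 0, opt
    if opt + '=' in possibilities:
        return 1, opt
    # Otherwise the abbreviation must expand to exactly one candidate.
    if len(possibilities) > 1:
        raise GetoptError('option --%s not a unique prefix' % opt)
    unique_match = possibilities[0]
    has_arg = unique_match.endswith('=')
    if has_arg:
        unique_match = unique_match[:-1]
    return has_arg, unique_match

# long_has_args('foo',  ['foo=', 'foo-bar'])  ->  (1, 'foo')
# long_has_args('foo-', ['foo=', 'foo-bar'])  ->  (0, 'foo-bar')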
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470 From noreply@sourceforge.net Wed Dec 27 22:09:29 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 14:09:29 -0800 Subject: [Python-bugs-list] [Bug #126665] makepy crashes parsing "Hauppauge WinTV OCX (b.11) Message-ID: Bug #126665, was updated on 2000-Dec-22 01:40 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Closed Resolution: Wont Fix Bug Group: Platform-specific Priority: 5 Submitted by: jcable Assigned to : mhammond Summary: makepy crashes parsing "Hauppauge WinTV OCX (b.11) Details: Probably a problem with the OCX, but clearly a failure in makepy too: PythonWin 2.0 (#8, Oct 19 2000, 11:30:05) [MSC 32 bit (Intel)] on win32. Portions Copyright 1994-2000 Mark Hammond (MarkH@ActiveState.com) - see 'Help/About PythonWin' for further copyright information. >>> Generating to c:\python20\win32com\gen_py\2B143B63-055B-11D2-A96D-00A0C92A2D0Fx0x11x17.py Traceback (most recent call last): File "c:\python20\pythonwin\pywin\framework\scriptutils.py", line 301, in RunScript exec codeObject in __main__.__dict__ File "C:\Python20\win32com\client\makepy.py", line 357, in ? rc = main() File "C:\Python20\win32com\client\makepy.py", line 350, in main GenerateFromTypeLibSpec(arg, f, verboseLevel = verboseLevel, bForDemand = bForDemand, bBuildHidden = hiddenSpec) File "C:\Python20\win32com\client\makepy.py", line 254, in GenerateFromTypeLibSpec gen.generate(fileUse, bForDemand) File "c:\python20\win32com\client\genpy.py", line 665, in generate self.do_generate() File "c:\python20\win32com\client\genpy.py", line 719, in do_generate oleItems, enumItems, recordItems = self.BuildOleItemsFromType() File "c:\python20\win32com\client\genpy.py", line 620, in BuildOleItemsFromType refType = info.GetRefTypeInfo(info.GetRefTypeOfImplType(j)) com_error: (-2147312566, 'Error loading type library/DLL.', None, None) >>> The offending ocx can be found at: http://homepage.ntlworld.com/julian.cable/hcwWinTV.ocx Follow-Ups: Date: 2000-Dec-27 14:09 By: gvanrossum Comment: This is a Pythonwin problem, this is not the proper place to report this bug. Go to the activestate.com website to find out more. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126665&group_id=5470 From noreply@sourceforge.net Wed Dec 27 22:13:39 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 14:13:39 -0800 Subject: [Python-bugs-list] [Bug #126851] ftplib.py should default to passive mode Message-ID: Bug #126851, was updated on 2000-Dec-26 12:07 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: Feature Request Priority: 4 Submitted by: flight Assigned to : nobody Summary: ftplib.py should default to passive mode Details: For the Debian package, there has been the request that the ftplib module should by default use passive FTP. Any comments [Forwarded from the Debian bug tracking system, bug#71823] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71823&repeatmerged=yes Sender: Mike Fisk Package: python-base Version: 1.5.2-10 This is an upstream bug that has existed for quite a while (probably forever). 
With many systems living behind firewalls (including their own ipchains filters), passive FTP should be the default for FTP clients. It always has been for Netscape and there hasn't been much uproar about that being bad. Python's ftplib.py supports passive mode, but defaults to non-passive mode. ftplib.py is used by other Debian packages such as the downloader in xanim-modules. The result, when living behind many firewalls, is that you can't download anything using ftplib.py or urllib.py. The patch to fix this is trivial: --- /usr/lib/python1.5/ftplib.py Sat Sep 16 14:31:35 2000 +++ /tmp/ftplib.py Sat Sep 16 14:31:24 2000 @@ -112,7 +112,7 @@ - port: port to connect to (integer, default previous port)''' if host: self.host = host if port: self.port = port - self.passiveserver = 0 + self.passiveserver = 1 self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.connect(self.host, self.port) self.file = self.sock.makefile('rb') -- Mike Fisk, RADIANT Team, Network Engineering Group, Los Alamos National Lab See http://home.lanl.gov/mfisk/ for contact information Follow-Ups: Date: 2000-Dec-27 14:13 By: twouters Comment: For what it's worth, I mildly agree that passive mode should be the default. However, it does have potential for breaking stuff: using passive-ftp *into* a firewall, instead of out of one, doesn't work. And I'm pretty sure that Python's ftplib is used much more often in that manner than is Netscape or whatever other ftp client defaults to passive. It's probably not much, but I think it's enough to think twice about changing the default ;P ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126851&group_id=5470 From noreply@sourceforge.net Wed Dec 27 22:21:24 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 14:21:24 -0800 Subject: [Python-bugs-list] [Bug #126254] Traceback objects not properly garbage-collected Message-ID: Bug #126254, was updated on 2000-Dec-18 17:50 Here is a current snapshot of the bug. Project: Python Category: Python Interpreter Core Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: nobody Assigned to : gvanrossum Summary: Traceback objects not properly garbage-collected Details: System info: ============ Python 2.0 (#1, Dec 18 2000, 16:47:02) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Linux phil 2.2.18 #1 Mon Dec 18 14:49:56 PST 2000 i686 unknown Sample code: ============ import sys class fooclass: def __init__(self): print 'CONSTRUCTED' def withtb(self, doit=0): try: raise "foo" except: if doit: tb = sys.exc_info()[2] def __del__(self): print 'DESTROYED' if __name__ == '__main__': foo = fooclass() if len(sys.argv) > 1: foo.withtb(1) else: foo.withtb(0) del foo How to reproduce: ================= Run the above python script: 1. Without any argument: the withtb() method exception handler does not retrieve any traceback object. The program prints `CONSTRUCTED' and `DESTROYED'. 2. With some arguments: the withtb() method exception handler retrieves a traceback object and stores it in the `tb' local variable. However `DESTROYED' never gets printed out. I think that the `foo' object will never be garbage collected anymore. Workaround: =========== Deleting the `tb' object seems to restore things: if doit: tb = sys.exc_info()[2] del tb Other: ====== I've found this problem also in python 1.5.2 and python 1.6. 
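The reporter's analysis of the cause follows below; as a quick aside, the cycle and both ways of breaking it can be shown compactly (this sketch assumes the cyclic garbage collector added in 2.0; on the 1.5.2/1.6 interpreters mentioned above only the explicit del helps):

import sys, gc

def grab_traceback():
    try:
        raise RuntimeError("demo")
    except:
        tb = sys.exc_info()[2]
        # The cycle: tb -> this function's frame -> local variable 'tb'.
        # Reference counting alone can never free the frame, nor anything
        # else the frame still refers to (such as 'self' in the report's
        # withtb() method).
        del tb           # workaround: break the cycle by hand

grab_traceback()

# Alternative: leave the cycle in place and let the collector find it.
gc.collect()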
Possible cause: =============== I would tend to think that we're creating a circular loop which cannot be garbage collected: - `tb' holds a reference to the traceback object - the traceback object holds a reference to the local scope - the local scope holds a reference to the `tb' variable The only way out is to break the circular reference by hand, although it's annoying. Phil - phil@commerceflow.com. Follow-Ups: Date: 2000-Dec-27 14:21 By: gvanrossum Comment: Note that the bug report was about 1.5.2 and 1.6. This should be fixed by the cycle collection in 2.0, shouldn't it? ------------------------------------------------------- Date: 2000-Dec-20 18:59 By: tim_one Comment: "Invalid" is a just a word -- it comes with SF bug system and isn't defined anywhere. By convention, we pair Not-A-Bug with Invalid, for lack of something better to do. Not-A-Bug means it's not a bug : you may not like the answer, but Guido is saying it's functioning as designed and he has no plans to change that. ------------------------------------------------------- Date: 2000-Dec-20 18:50 By: nobody Comment: oops, didn't see you comment. Forget about my question... Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-20 18:49 By: nobody Comment: Could I know why this was deemed to be `Invalid' ? Phil - phil@commerceflow.com. ------------------------------------------------------- Date: 2000-Dec-18 20:21 By: gvanrossum Comment: This is not a bug. Saving the traceback as a local variable creates a circular reference that prevents garbage collection. If you don't understand this answer, please write help@python.org. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126254&group_id=5470 From noreply@sourceforge.net Wed Dec 27 22:21:04 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 14:21:04 -0800 Subject: [Python-bugs-list] [Bug #126619] Dict methods have no __doc__ strings Message-ID: Bug #126619, was updated on 2000-Dec-21 09:59 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Closed Resolution: Fixed Bug Group: Feature Request Priority: 5 Submitted by: nobody Assigned to : tim_one Summary: Dict methods have no __doc__ strings Details: Should be easy to fix, but all dict methods do not have __doc__ strings. They should for consistency. I personally rarely ever use documentation outside of Python now, and good __doc__s has been the reason why. Follow-Ups: Date: 2000-Dec-27 14:21 By: twouters Comment: I wholeheartedly agree, but fortunately dict-method docstrings were added in dictobject.c revision 2.71, by Tim. :) Marking the bugreport/feature-request closed. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126619&group_id=5470 From noreply@sourceforge.net Wed Dec 27 22:40:20 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 14:40:20 -0800 Subject: [Python-bugs-list] [Bug #123634] Pickle broken on Unicode strings Message-ID: Bug #123634, was updated on 2000-Nov-27 14:03 Here is a current snapshot of the bug. 
Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: tlau Assigned to : gvanrossum Summary: Pickle broken on Unicode strings Details: Two one-liners that produce incorrect output: >>> cPickle.loads(cPickle.dumps(u'')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: pickle data was truncated >>> cPickle.loads(cPickle.dumps(u'\u03b1 alpha\n\u03b2 beta')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, '\'. The format of the Unicode string in the pickled representation is not escaped, as it is with regular strings. It should be. The latter bug occurs in both pickle and cPickle; the former is only a problem with cPickle. Follow-Ups: Date: 2000-Dec-27 14:40 By: gvanrossum Comment: Your fix is backwards incompatible. Mine is compatible for strings not containing backslashes. I don't understand your comment about avoiding eval(): the code doesn't use eval() (and didn't before I changed it), while your patch *adds* use of eval(). ------------------------------------------------------- Date: 2000-Dec-20 11:18 By: nobody Comment: About your fix: this is not the solution I had in mind. I wanted to avoid the problems and performance hit by not using an encoding which requires eval() to build the Unicode object. Wouldn't the solution I proposed be both easier to implement and safe us from adding eval() to pickle et al. ?! -- Marc-Andre ------------------------------------------------------- Date: 2000-Dec-18 18:10 By: gvanrossum Comment: Fixed in both pickle.py (rev. 1.41) and cPickle.py (rev. 2.54). I've also checked in tests for these and similar endcases. ------------------------------------------------------- Date: 2000-Nov-27 14:36 By: tlau Comment: One more comment: binary-format pickles are not affected, only text-format pickles. Thus the part of my patch that applies to the binary section of the save_unicode function should not be applied. ------------------------------------------------------- Date: 2000-Nov-27 14:35 By: lemburg Comment: Some background (no time to fix this myself): When I added the Unicode handlers, I wanted to avoid the problems that the string dump mechanism has with quoted strings. The encodings used either carry length information (in binary mode: UTF-8) or do not include the \n character (in ascii mode: raw-unicode-escape encoding). Unfortunately, the raw-unicode-escape codec does not escape the newline character which is used by pickle to break the input into tokens.... Proposed fix: change the encoding to "unicode-escape" which doesn't have this problem. 
This will break code, but only code that is already broken :-/ ------------------------------------------------------- Date: 2000-Nov-27 14:20 By: tlau Comment: Here's my proposed patch to Lib/pickle.py (cPickle should be changed similarly): --- /scratch/tlau/Python-2.0/Lib/pickle.py Mon Oct 16 14:49:51 2000 +++ pickle.py Mon Nov 27 14:07:01 2000 @@ -286,9 +286,9 @@ encoding = object.encode('utf-8') l = len(encoding) s = mdumps(l)[1:] - self.write(BINUNICODE + s + encoding) + self.write(BINUNICODE + `s` + encoding) else: - self.write(UNICODE + object.encode('raw-unicode-escape') + '\n') + self.write(UNICODE + `object.encode('raw-unicode-escape')` + '\n') memo_len = len(memo) self.write(self.put(memo_len)) @@ -627,7 +627,12 @@ dispatch[BINSTRING] = load_binstring def load_unicode(self): - self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) + rep = self.readline()[:-1] + if not self._is_string_secure(rep): + raise ValueError, "insecure string pickle" + rep = eval(rep, + {'__builtins__': {}}) # Let's be careful + self.append(unicode(rep, 'raw-unicode-escape')) dispatch[UNICODE] = load_unicode def load_binunicode(self): ------------------------------------------------------- Date: 2000-Nov-27 14:14 By: gvanrossum Comment: Jim, do you have time to look into this? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123634&group_id=5470 From noreply@sourceforge.net Wed Dec 27 23:16:06 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 15:16:06 -0800 Subject: [Python-bugs-list] [Bug #126851] ftplib.py should default to passive mode Message-ID: Bug #126851, was updated on 2000-Dec-26 12:07 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: Feature Request Priority: 4 Submitted by: flight Assigned to : nobody Summary: ftplib.py should default to passive mode Details: For the Debian package, there has been the request that the ftplib module should by default use passive FTP. Any comments [Forwarded from the Debian bug tracking system, bug#71823] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71823&repeatmerged=yes Sender: Mike Fisk Package: python-base Version: 1.5.2-10 This is an upstream bug that has existed for quite a while (probably forever). With many systems living behind firewalls (including their own ipchains filters), passive FTP should be the default for FTP clients. It always has been for Netscape and there hasn't been much uproar about that being bad. Python's ftplib.py supports passive mode, but defaults to non-passive mode. ftplib.py is used by other Debian packages such as the downloader in xanim-modules. The result, when living behind many firewalls, is that you can't download anything using ftplib.py or urllib.py. The patch to fix this is trivial: --- /usr/lib/python1.5/ftplib.py Sat Sep 16 14:31:35 2000 +++ /tmp/ftplib.py Sat Sep 16 14:31:24 2000 @@ -112,7 +112,7 @@ - port: port to connect to (integer, default previous port)''' if host: self.host = host if port: self.port = port - self.passiveserver = 0 + self.passiveserver = 1 self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.connect(self.host, self.port) self.file = self.sock.makefile('rb') -- Mike Fisk, RADIANT Team, Network Engineering Group, Los Alamos National Lab See http://home.lanl.gov/mfisk/ for contact information Follow-Ups: Date: 2000-Dec-27 15:16 By: gvanrossum Comment: Hmm... 
I like the proposed patch. I don't know about ftp'ing into a firewall -- why would that be common? Typically ftp servers live outside firewalls because ftp is considered insecure... ------------------------------------------------------- Date: 2000-Dec-27 14:13 By: twouters Comment: For what it's worth, I mildly agree that passive mode should be the default. However, it does have potential for breaking stuff: using passive-ftp *into* a firewall, instead of out of one, doesn't work. And I'm pretty sure that Python's ftplib is used much more often in that manner than is Netscape or whatever other ftp client defaults to passive. It's probably not much, but I think it's enough to think twice about changing the default ;P ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126851&group_id=5470 From noreply@sourceforge.net Wed Dec 27 23:16:22 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Dec 2000 15:16:22 -0800 Subject: [Python-bugs-list] [Bug #126851] ftplib.py should default to passive mode Message-ID: Bug #126851, was updated on 2000-Dec-26 12:07 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: Feature Request Priority: 4 Submitted by: flight Assigned to : gvanrossum Summary: ftplib.py should default to passive mode Details: For the Debian package, there has been the request that the ftplib module should by default use passive FTP. Any comments [Forwarded from the Debian bug tracking system, bug#71823] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=71823&repeatmerged=yes Sender: Mike Fisk Package: python-base Version: 1.5.2-10 This is an upstream bug that has existed for quite a while (probably forever). With many systems living behind firewalls (including their own ipchains filters), passive FTP should be the default for FTP clients. It always has been for Netscape and there hasn't been much uproar about that being bad. Python's ftplib.py supports passive mode, but defaults to non-passive mode. ftplib.py is used by other Debian packages such as the downloader in xanim-modules. The result, when living behind many firewalls, is that you can't download anything using ftplib.py or urllib.py. The patch to fix this is trivial: --- /usr/lib/python1.5/ftplib.py Sat Sep 16 14:31:35 2000 +++ /tmp/ftplib.py Sat Sep 16 14:31:24 2000 @@ -112,7 +112,7 @@ - port: port to connect to (integer, default previous port)''' if host: self.host = host if port: self.port = port - self.passiveserver = 0 + self.passiveserver = 1 self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.connect(self.host, self.port) self.file = self.sock.makefile('rb') -- Mike Fisk, RADIANT Team, Network Engineering Group, Los Alamos National Lab See http://home.lanl.gov/mfisk/ for contact information Follow-Ups: Date: 2000-Dec-27 15:16 By: gvanrossum Comment: Hmm... I like the proposed patch. I don't know about ftp'ing into a firewall -- why would that be common? Typically ftp servers live outside firewalls because ftp is considered insecure... ------------------------------------------------------- Date: 2000-Dec-27 14:13 By: twouters Comment: For what it's worth, I mildly agree that passive mode should be the default. However, it does have potential for breaking stuff: using passive-ftp *into* a firewall, instead of out of one, doesn't work. 
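Whichever default wins, individual callers can already choose per connection; a short sketch (the host name is only a placeholder):

import ftplib

ftp = ftplib.FTP('ftp.example.org')
ftp.login()              # anonymous login
ftp.set_pasv(1)          # force passive transfers; use 0 to force active
files = ftp.nlst()
ftp.quit()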
And I'm pretty sure that Python's ftplib is used much more often in that manner than is Netscape or whatever other ftp client defaults to passive. It's probably not much, but I think it's enough to think twice about changing the default ;P ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126851&group_id=5470 From noreply@sourceforge.net Thu Dec 28 09:37:40 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Dec 2000 01:37:40 -0800 Subject: [Python-bugs-list] [Bug #123634] Pickle broken on Unicode strings Message-ID: Bug #123634, was updated on 2000-Nov-27 14:03 Here is a current snapshot of the bug. Project: Python Category: Unicode Status: Closed Resolution: Fixed Bug Group: None Priority: 5 Submitted by: tlau Assigned to : gvanrossum Summary: Pickle broken on Unicode strings Details: Two one-liners that produce incorrect output: >>> cPickle.loads(cPickle.dumps(u'')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: pickle data was truncated >>> cPickle.loads(cPickle.dumps(u'\u03b1 alpha\n\u03b2 beta')) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, '\'. The format of the Unicode string in the pickled representation is not escaped, as it is with regular strings. It should be. The latter bug occurs in both pickle and cPickle; the former is only a problem with cPickle. Follow-Ups: Date: 2000-Dec-28 01:37 By: nobody Comment: Sorry, I looked at the fix proposed by "tlau". The CVS version is just fine :-) -- Marc-Andre ------------------------------------------------------- Date: 2000-Dec-27 14:40 By: gvanrossum Comment: Your fix is backwards incompatible. Mine is compatible for strings not containing backslashes. I don't understand your comment about avoiding eval(): the code doesn't use eval() (and didn't before I changed it), while your patch *adds* use of eval(). ------------------------------------------------------- Date: 2000-Dec-20 11:18 By: nobody Comment: About your fix: this is not the solution I had in mind. I wanted to avoid the problems and performance hit by not using an encoding which requires eval() to build the Unicode object. Wouldn't the solution I proposed be both easier to implement and safe us from adding eval() to pickle et al. ?! -- Marc-Andre ------------------------------------------------------- Date: 2000-Dec-18 18:10 By: gvanrossum Comment: Fixed in both pickle.py (rev. 1.41) and cPickle.py (rev. 2.54). I've also checked in tests for these and similar endcases. ------------------------------------------------------- Date: 2000-Nov-27 14:36 By: tlau Comment: One more comment: binary-format pickles are not affected, only text-format pickles. Thus the part of my patch that applies to the binary section of the save_unicode function should not be applied. ------------------------------------------------------- Date: 2000-Nov-27 14:35 By: lemburg Comment: Some background (no time to fix this myself): When I added the Unicode handlers, I wanted to avoid the problems that the string dump mechanism has with quoted strings. The encodings used either carry length information (in binary mode: UTF-8) or do not include the \n character (in ascii mode: raw-unicode-escape encoding). Unfortunately, the raw-unicode-escape codec does not escape the newline character which is used by pickle to break the input into tokens.... 
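The difference between the two codecs is easy to see directly (Python 2.0 semantics, arbitrary sample string):

# The text pickle format ends each token at a bare newline.
# raw-unicode-escape leaves '\n' untouched; unicode-escape escapes it.
u = u'\u03b1 alpha\n\u03b2 beta'
raw = u.encode('raw-unicode-escape')   # still contains a real newline byte
esc = u.encode('unicode-escape')       # newline written out as backslash-n
# With 'raw' embedded in a text pickle the unpickler stops reading the
# string at the newline; with 'esc' it does not.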
Proposed fix: change the encoding to "unicode-escape" which doesn't have this problem. This will break code, but only code that is already broken :-/ ------------------------------------------------------- Date: 2000-Nov-27 14:20 By: tlau Comment: Here's my proposed patch to Lib/pickle.py (cPickle should be changed similarly): --- /scratch/tlau/Python-2.0/Lib/pickle.py Mon Oct 16 14:49:51 2000 +++ pickle.py Mon Nov 27 14:07:01 2000 @@ -286,9 +286,9 @@ encoding = object.encode('utf-8') l = len(encoding) s = mdumps(l)[1:] - self.write(BINUNICODE + s + encoding) + self.write(BINUNICODE + `s` + encoding) else: - self.write(UNICODE + object.encode('raw-unicode-escape') + '\n') + self.write(UNICODE + `object.encode('raw-unicode-escape')` + '\n') memo_len = len(memo) self.write(self.put(memo_len)) @@ -627,7 +627,12 @@ dispatch[BINSTRING] = load_binstring def load_unicode(self): - self.append(unicode(self.readline()[:-1],'raw-unicode-escape')) + rep = self.readline()[:-1] + if not self._is_string_secure(rep): + raise ValueError, "insecure string pickle" + rep = eval(rep, + {'__builtins__': {}}) # Let's be careful + self.append(unicode(rep, 'raw-unicode-escape')) dispatch[UNICODE] = load_unicode def load_binunicode(self): ------------------------------------------------------- Date: 2000-Nov-27 14:14 By: gvanrossum Comment: Jim, do you have time to look into this? ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=123634&group_id=5470 From noreply@sourceforge.net Thu Dec 28 15:47:18 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Dec 2000 07:47:18 -0800 Subject: [Python-bugs-list] [Bug #126766] popen('python -c"...."') tends to hang Message-ID: Bug #126766, was updated on 2000-Dec-24 08:08 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sabren Assigned to : mhammond Summary: popen('python -c"...."') tends to hang Details: eg: import os os.popen('python -c"x=1;print x"').readlines() .. On my machine, using popen to call a second instance of python almost always causes python to freeze. No window pops up, but if I press alt-tab, there's an icon for w9xpopen.exe oddly: >>> os.popen('python -c"print"').readlines() and >> os.popen('python -c""').readlines() both work fine. ... This bug is different from #114780 in that it is repeatable and consistent. It happens on open, so is different from #125891. Eg: >>> proc = os.popen('python -c"x=1; print x"') will cause the crash. Follow-Ups: Date: 2000-Dec-28 07:47 By: gvanrossum Comment: Any chance you have Norton AntiVirus 2000 running? See http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=114598 I've basically given up on popen for windows. :-( os.spawn*() works great though -- if you don't need to read the output. :-) ------------------------------------------------------- Date: 2000-Dec-24 09:40 By: tim_one Comment: Mark, any idea? The first example also appears to hang for me consistently (W98SE). In a debug build under the debugger, breaking during the hang yields a gibberish disassembly window (i.e., it's not showing code!), so I didn't get anywhere after 5 minutes of thrashing. ------------------------------------------------------- Date: 2000-Dec-24 08:11 By: sabren Comment: .. er.. whoops.. It hangs/freezes, not crashes. And in fact, it occasionally returns control to python after several minutes. 
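Until the underlying cause is pinned down, one blunt way around it is to avoid the pipe entirely, along the lines of the os.spawn*/os.system suggestion above; a workaround sketch (the command and filename are illustrative only):

import os, tempfile

# Let the shell redirect the child's stdout into a temporary file
# instead of reading it back through a pipe.
outname = tempfile.mktemp()
rc = os.system('python -c "x=1; print x" > %s' % outname)
f = open(outname)
output = f.readlines()     # ['1\n'] when the child ran successfully
f.close()
os.remove(outname)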
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126766&group_id=5470 From noreply@sourceforge.net Thu Dec 28 23:34:52 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Dec 2000 15:34:52 -0800 Subject: [Python-bugs-list] [Bug #126766] popen('python -c"...."') tends to hang Message-ID: Bug #126766, was updated on 2000-Dec-24 08:08 Here is a current snapshot of the bug. Project: Python Category: Windows Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: sabren Assigned to : mhammond Summary: popen('python -c"...."') tends to hang Details: eg: import os os.popen('python -c"x=1;print x"').readlines() .. On my machine, using popen to call a second instance of python almost always causes python to freeze. No window pops up, but if I press alt-tab, there's an icon for w9xpopen.exe oddly: >>> os.popen('python -c"print"').readlines() and >> os.popen('python -c""').readlines() both work fine. ... This bug is different from #114780 in that it is repeatable and consistent. It happens on open, so is different from #125891. Eg: >>> proc = os.popen('python -c"x=1; print x"') will cause the crash. Follow-Ups: Date: 2000-Dec-28 15:34 By: mhammond Comment: I have done lots of playing with this over the last month or so. The problem appears to be something to do with "python.exe" used as the target of such a popen command. (Ironically, we tried to use popen to capture remote Python invocations for Komodo - as I guess you are for IDLE) I have experimented with 3 "different" popen implementations: Python's, one written in Python using the win32 API directly, and one using the Netscape NSPR libraries. They all, however, end up doing the same basic thing with the same basic Windows API functions. They all behace the same WRT reading input. Python.exe (and one or 2 other exes) appear to hang when they are in an "interactive loop", and the spawning process is trying to read the input pipe. My experiements at breaking into the debugger shows Windows blocked inside the ReadFile() function. Note that Perl.exe does _not_ appear to provoke this (but Perl doesnt have a builtin interactive loop, so is harder to prove). Also note that "cmd.exe" also does _not_ provoke this - ie, both Perl.exe and cmd.exe both work fine, correctly reading and writing either pipe when the process is "interactive" So - after all this, my best guess was that there is something weird in the stdio Python uses to read stdin, and that Perl and cmd.exe both avoid this. Random, 100% speculative guess is that the "/MD" stdio is broken, but others wont be. This is as far as I got - I was going to experiment with /MD, other input techniques etc, but never got to it. It should be simple to reproduce with a simple test .exe, but alas never got to that either. I should have more time over the next few weeks. ------------------------------------------------------- Date: 2000-Dec-28 07:47 By: gvanrossum Comment: Any chance you have Norton AntiVirus 2000 running? See http://sourceforge.net/bugs/?group_id=5470&func=detailbug&bug_id=114598 I've basically given up on popen for windows. :-( os.spawn*() works great though -- if you don't need to read the output. :-) ------------------------------------------------------- Date: 2000-Dec-24 09:40 By: tim_one Comment: Mark, any idea? The first example also appears to hang for me consistently (W98SE). 
In a debug build under the debugger, breaking during the hang yields a gibberish disassembly window (i.e., it's not showing code!), so I didn't get anywhere after 5 minutes of thrashing. ------------------------------------------------------- Date: 2000-Dec-24 08:11 By: sabren Comment: .. er.. whoops.. It hangs/freezes, not crashes. And in fact, it occasionally returns control to python after several minutes. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126766&group_id=5470 From noreply@sourceforge.net Fri Dec 29 04:30:49 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Dec 2000 20:30:49 -0800 Subject: [Python-bugs-list] [Bug #127055] bisect module needs updated docs Message-ID: Bug #127055, was updated on 2000-Dec-28 20:30 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: tim_one Assigned to : fdrake Summary: bisect module needs updated docs Details: I'd just take the docstrings out of {bisect,insort}_{left,right} and LaTex'ize them, but am not sure how. Would also mention that "insort" and "bisect" are aliases for insort_right and bisect_right, for backward compatibility. Yell at me if I can be of real help! For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127055&group_id=5470 From noreply@sourceforge.net Fri Dec 29 20:33:26 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 29 Dec 2000 12:33:26 -0800 Subject: [Python-bugs-list] [Bug #127098] Explanation of try/else in Lang Ref is flawed Message-ID: Bug #127098, was updated on 2000-Dec-29 12:33 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: tim_one Assigned to : fdrake Summary: Explanation of try/else in Lang Ref is flawed Details: Suggested replacement: """ The optional 'else' clause is executed when the 'try' clause terminates by any means other than an exception or executing a 'return', 'continue' or 'break' statement. Exceptions in the 'else' clause are not handled by the prereceding 'except' clauses. """ See Python-Dev for discussion. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127098&group_id=5470 From noreply@sourceforge.net Fri Dec 29 20:36:32 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 29 Dec 2000 12:36:32 -0800 Subject: [Python-bugs-list] [Bug #127098] Explanation of try/else in Lang Ref is flawed Message-ID: Bug #127098, was updated on 2000-Dec-29 12:33 Here is a current snapshot of the bug. Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: tim_one Assigned to : fdrake Summary: Explanation of try/else in Lang Ref is flawed Details: Suggested replacement: """ The optional 'else' clause is executed when the 'try' clause terminates by any means other than an exception or executing a 'return', 'continue' or 'break' statement. Exceptions in the 'else' clause are not handled by the prereceding 'except' clauses. """ See Python-Dev for discussion. Follow-Ups: Date: 2000-Dec-29 12:36 By: tim_one Comment: Except I should have spelled "preceding" correctly. 
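The proposed wording is compact, so a small demonstration may help; the behaviour is the same on the 2.0-era interpreters discussed here:

def classify(n):
    try:
        if n < 0:
            raise ValueError("negative")
        if n == 0:
            return "returned from try"    # 'else' is skipped on return
    except ValueError:
        return "handled in except"
    else:
        # Runs only when the 'try' suite fell off the end normally; an
        # exception raised here would not be caught by the clause above.
        return "reached else"

# classify(-1) -> 'handled in except'
# classify(0)  -> 'returned from try'
# classify(1)  -> 'reached else'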
------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127098&group_id=5470 From noreply@sourceforge.net Sat Dec 30 16:30:01 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 08:30:01 -0800 Subject: [Python-bugs-list] [Bug #125610] SuppReq: please elaborate on your email notif. requests Message-ID: Bug #125610, was updated on 2000-Dec-13 05:34 Here is a current snapshot of the bug. Project: Python Category: None Status: Closed Resolution: None Bug Group: Not a Bug Priority: 5 Submitted by: pfalcon Assigned to : gvanrossum Summary: SuppReq: please elaborate on your email notif. requests Details: We've got the task "Python requests" http://sourceforge.net/pm/task.php?func=detailtask&project_task_id=22577&group_id=1&group_project_id=2 . I believe bigdisk knows what that means but I think I could do that faster, so I'd like to have information from the original source. Please give specific examples how you want it to be. Thanks. Follow-Ups: Date: 2000-Dec-30 08:30 By: pfalcon Comment: I just wanted to let you know that these recommendation are being worked on. Email wrapping went live with pre-Xmas sync, and patch for clickable submitter/assignee names in all tools has been submitted. I'll do changes highlighting after holidays. I think that carrying along all the change trace (as on the web) will be not exactly what most people would want, so I have an idea to make just latest changes highlighted (specifically, quoting corresponding lines in email). If it won't be what you want, please tell me. I totally agree about querying bugs by specific #. I consider fully obvious and intuitive to use search box for that - after all, that's also *search*. And searching stuff is in my scope, so I'll go for that too ;-) . Happy New Year! ------------------------------------------------------- Date: 2000-Dec-18 14:39 By: gvanrossum Comment: Closing this now -- send mail to guido@python.org if you need more help. ;-) ------------------------------------------------------- Date: 2000-Dec-13 08:27 By: gvanrossum Comment: One more thing: it would be really handy if there was a box *somewhere* (maybe in the left margin?) where you could type a bug_id or patch_id and click OK to go directly to the details page of that item. We all need this regularly, and we all use the hack of editing the URL in "Location" field of the browser. There's *got* to be a better way. :-) ------------------------------------------------------- Date: 2000-Dec-13 06:20 By: gvanrossum Comment: OK, I'll clarify. Note that this applies both to the patch and the bugs products. 1. Word wrap: the comments entered in the database for bugs & patches are often entered with a single very long line per paragraph. When the notification email is sent out, most Unix mail readers don't wrap words correctly. The request is to break any line that is longer than 79 characters in shorter pieces, the way e.g. ESC-q does in Emacs, or the fmt(1) program. 2. clickable submitter name: in the patch or bug details page, the submitter ("Submitted By" field) should be a hyperlink to the developer profile for that user (except if it is Nobody, of course). 3. mention what changed in the email: it would be nice if at the top of the notification email it said what caused the mail to be sent, e.g. "status changed from XXX to YYY" or "assiged to ZZZ" or "new comment added by XXX" or "new patch uploaded" or "priority changed to QQQ". 
If more than one field changed they should all be summarized. Hope this helps! Thanks for doing this. We love our SourceForge! ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=125610&group_id=5470 From noreply@sourceforge.net Sun Dec 31 02:16:15 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 18:16:15 -0800 Subject: [Python-bugs-list] [Bug #127072] log, exp & sqrt (math) function innaccuracies Message-ID: Bug #127072, was updated on 2000-Dec-29 05:50 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: salmoni Assigned to : tim_one Summary: log, exp & sqrt (math) function innaccuracies Details: Perform: y = int(math.exp(math.log(math.sqrt(2) * math.sqrt(2)))) nb, exp(log(x)) = x sqrt(x) * sqrt(x) = x This produces 0 when the answer should be 1 - the floating point number is 0.99999999999999956 but it should be 1.0 This was reproduced on Windows (95) and Linux (RedHat 6.0) both using Python 2.0. Alan James Salmoni Follow-Ups: Date: 2000-Dec-30 18:16 By: fdrake Comment: Tim's the math guy, so this is his. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127072&group_id=5470 From noreply@sourceforge.net Sun Dec 31 02:20:43 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 18:20:43 -0800 Subject: [Python-bugs-list] [Bug #126564] Default of static linking 'bsddb' breaks 3rd party modules Message-ID: Bug #126564, was updated on 2000-Dec-20 22:25 Here is a current snapshot of the bug. Project: Python Category: Build Status: Open Resolution: None Bug Group: Platform-specific Priority: 5 Submitted by: nobody Assigned to : montanaro Summary: Default of static linking 'bsddb' breaks 3rd party modules Details: Python 2.0 builds the 'bsddb' module into the python interpreter *static* by default. When built this way on systems such as debian potato linux and some versions of redhat linux (to name a few) it links statically with an early BerkeleyDB 2.1.x. This causes problems to the current and under-development bsddb 3.x third party modules. They import but the functions they call are from the wrong library so they often coredump or return unexpected error codes. See the py-bsddb project on sourceforge. Also see http://electricrain.com/greg/python/ for the current stable py-bsddb3 module. Short term solution: Make the default build method for this module *shared* instead of static. Long term solution: the py-bsddb project should be able to replace the old bsddb module in the distribution. Follow-Ups: Date: 2000-Dec-30 18:20 By: fdrake Comment: Lots of the recent work on this is Skip's effort, so he can probably handle this more quickly than the rest of us. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126564&group_id=5470 From noreply@sourceforge.net Sun Dec 31 02:26:44 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 18:26:44 -0800 Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z] Message-ID: Bug #126706, was updated on 2000-Dec-23 06:19 Here is a current snapshot of the bug. 
Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: many std modules assume string.letters is [a-zA-Z] Details: there are many modules in the standard library that use string.letters to mean A-Za-z, but that assumption is incorrect when locales are in use. also the readline library seems to cause the locale to be set according to the current environment variables, even if i don't call locale.*: % python2.0 -c 'import string; print string.letters' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ % python2.0 Python 2.0 (#3, Oct 19 2000, 01:42:41) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> print string.letters abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ >>> here's what grep says on the standard library. most of these uses seem incorrect to me: % grep string.letters **/*.py Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"cmd.py:IDENTCHARS = string.letters + string.digits + '_' dospath.py: varchars = string.letters + string.digits + '_-' lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters ntpath.py: varchars = string.letters + string.digits + '_-' nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters: pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted pre.py: alphanum=string.letters+'_'+string.digits tokenize.py: namechars, numchars = string.letters + '_', string.digits urlparse.py:scheme_chars = string.letters + string.digits + '+-.' Follow-Ups: Date: 2000-Dec-30 18:26 By: fdrake Comment: Andrew, does it make sense to introduce new constants in string for this? It seems that each instance is referring to slightly different specifications or standards (documented or not), so perhaps the constants should be defined locally within each of the modules. This also avoids unnecessary dependencies. ------------------------------------------------------- Date: 2000-Dec-26 12:18 By: nobody Comment: string.ascii_letters etc is more precise than alphabet, imho. -- erno@iki.fi ------------------------------------------------------- Date: 2000-Dec-26 08:15 By: akuchling Comment: The docs for the string module say that, for example, string.lowercase is " A string containing all the characters that are considered lowercase letters." This implies that the strings are locale-aware; code that uses string.lowercase to mean only a-z is therefore in error. (.digits is not locale-aware.) Solution: I'd suggest adding new, not locale-aware, constants. string.alphabet, string.lower_alphabet, string.upper_alphabet, maybe? Code should then be changed to use these new constants. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126706&group_id=5470 From noreply@sourceforge.net Sun Dec 31 03:35:58 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 19:35:58 -0800 Subject: [Python-bugs-list] [Bug #127151] mkhowto --iconserver doesn't do anything Message-ID: Bug #127151, was updated on 2000-Dec-30 19:35 Here is a current snapshot of the bug. 
Project: Python Category: Documentation Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: akuchling Assigned to : nobody Summary: mkhowto --iconserver doesn't do anything Details: The --iconserver option to mkhowto doesn't seem to work. The init file contains a $ICONSERVER='whatever' line, but LaTeX2HTML (the correct version, 99.2b8) seems to just be ignoring the setting. For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127151&group_id=5470 From noreply@sourceforge.net Sun Dec 31 10:39:17 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 31 Dec 2000 02:39:17 -0800 Subject: [Python-bugs-list] [Bug #127072] log, exp & sqrt (math) function innaccuracies Message-ID: Bug #127072, was updated on 2000-Dec-29 05:50 Here is a current snapshot of the bug. Project: Python Category: Extension Modules Status: Closed Resolution: Invalid Bug Group: Not a Bug Priority: 5 Submitted by: salmoni Assigned to : tim_one Summary: log, exp & sqrt (math) function innaccuracies Details: Perform: y = int(math.exp(math.log(math.sqrt(2) * math.sqrt(2)))) nb, exp(log(x)) = x sqrt(x) * sqrt(x) = x This produces 0 when the answer should be 1 - the floating point number is 0.99999999999999956 but it should be 1.0 This was reproduced on Windows (95) and Linux (RedHat 6.0) both using Python 2.0. Alan James Salmoni Follow-Ups: Date: 2000-Dec-31 02:39 By: tim_one Comment: Sorry, that's how floating-point arithmetic works. It doesn't matter whether you use Python, Perl, C, C++, Java, Basic, Fortran, Ada, Scheme, LISP, Haskell, ... or a pocket calculator. log, exp and sqrt couldn't give exact results in all cases (not even most) even if Python used unbounded-precision rational arithmetic instead of floating-point. If you need symbolic simplification of expressions involving transcendentals, look to products like Mathematica or Macsyma. Note that failing examples need not be so elaborate; e.g., >>> x = math.sqrt(2) >>> x*x - 2 4.4408920985006262e-016 >>> Note that since sqrt must map all floats in (approximately) [0, 1e308] into the smaller contained range [0, 1e154], there *must* be distinct x and y such that sqrt(x) == sqrt(y), and this must be true of any system of finite floating-point arithmetic. A similar argument applies (but with even more force) to exp. ------------------------------------------------------- Date: 2000-Dec-30 18:16 By: fdrake Comment: Tim's the math guy, so this is his. ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=127072&group_id=5470 From noreply@sourceforge.net Sun Dec 31 02:18:21 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 18:18:21 -0800 Subject: [Python-bugs-list] [Bug #126866] (xml.dom.minidom.Document()).toxml() breakable Message-ID: Bug #126866, was updated on 2000-Dec-26 15:36 Here is a current snapshot of the bug. Project: Python Category: XML Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: iainlamb Assigned to : fdrake Summary: (xml.dom.minidom.Document()).toxml() breakable Details: Consider this code: from xml.dom.minidom import Document import sys e = Exception() try: raise e except: type = sys.exc_info()[0] d = Document() node = d.createTextNode(type) d.appendChild(node) print d.toxml() It's derived from a case where I inadvertently passed a non-string object (I was trying to represent the exception type) into createTextNode(). 
Run it and you'll get: Traceback (most recent call last): File "", line 11, in ? File "c:\python\lib\xml\dom\minidom.py", line 83, in toxml self.writexml(writer) File "c:\python\lib\xml\dom\minidom.py", line 461, in writexml node.writexml(writer) File "c:\python\lib\xml\dom\minidom.py", line 400, in writexml _write_data(writer, self.data) File "c:\python\lib\xml\dom\minidom.py", line 153, in _write_data data = string.replace(data, "&", "&") File "c:\python\lib\string.py", line 363, in replace return s.replace(old, new, maxsplit) AttributeError: replace I suggest you convert the text node's contents to a string before making the call to string.replace() in minidom.py Thanks for a cool dom implementation! - Iain Lamb Follow-Ups: Date: 2000-Dec-30 18:18 By: fdrake Comment: This will be fairly easy to fix; look for me to check it in early next week (when I'm near a real workstation!). ------------------------------------------------------- For detailed info, follow this link: http://sourceforge.net/bugs/?func=detailbug&bug_id=126866&group_id=5470 From noreply@sourceforge.net Sun Dec 31 03:36:59 2000 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 30 Dec 2000 19:36:59 -0800 Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z] Message-ID: Bug #126706, was updated on 2000-Dec-23 06:19 Here is a current snapshot of the bug. Project: Python Category: Python Library Status: Open Resolution: None Bug Group: None Priority: 5 Submitted by: nobody Assigned to : nobody Summary: many std modules assume string.letters is [a-zA-Z] Details: there are many modules in the standard library that use string.letters to mean A-Za-z, but that assumption is incorrect when locales are in use. also the readline library seems to cause the locale to be set according to the current environment variables, even if i don't call locale.*: % python2.0 -c 'import string; print string.letters' abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ % python2.0 Python 2.0 (#3, Oct 19 2000, 01:42:41) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Type "copyright", "credits" or "license" for more information. >>> print string.letters abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ >>> here's what grep says on the standard library. most of these uses seem incorrect to me: % grep string.letters **/*.py Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"cmd.py:IDENTCHARS = string.letters + string.digits + '_' dospath.py: varchars = string.letters + string.digits + '_-' lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters ntpath.py: varchars = string.letters + string.digits + '_-' nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters: pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted pre.py: alphanum=string.letters+'_'+string.digits tokenize.py: namechars, numchars = string.letters + '_', string.digits urlparse.py:scheme_chars = string.letters + string.digits + '+-.' Follow-Ups: Date: 2000-Dec-30 19:36 By: akuchling Comment: The set of all letters, though, will be commonly used, though maybe we need an alphanumeric constant for A-Za-z0-9 + underscore. I like the .ascii_letters suggestion. ------------------------------------------------------- Date: 2000-Dec-30 18:26 By: fdrake Comment: Andrew, does it make sense to introduce new constants in string for this? 
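Whatever the final spelling, the proposal amounts to a handful of literals; a sketch using the .ascii_* names suggested in the thread (the spelling the string module later adopted):

ascii_lowercase = 'abcdefghijklmnopqrstuvwxyz'
ascii_uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
ascii_letters = ascii_lowercase + ascii_uppercase

# Modules that really mean A-Za-z (urlparse's scheme_chars, cmd's
# IDENTCHARS, and so on, from the grep list above) would build on these
# instead of the locale-aware string.letters:
scheme_chars = ascii_letters + '0123456789' + '+-.'
IDENTCHARS = ascii_letters + '0123456789' + '_'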
From noreply@sourceforge.net  Sun Dec 31 03:36:59 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Sat, 30 Dec 2000 19:36:59 -0800
Subject: [Python-bugs-list] [Bug #126706] many std modules assume string.letters is [a-zA-Z]
Message-ID: 

Bug #126706, was updated on 2000-Dec-23 06:19
Here is a current snapshot of the bug.

Project: Python
Category: Python Library
Status: Open
Resolution: None
Bug Group: None
Priority: 5
Submitted by: nobody
Assigned to : nobody
Summary: many std modules assume string.letters is [a-zA-Z]

Details: There are many modules in the standard library that use
string.letters to mean A-Za-z, but that assumption is incorrect when
locales are in use. Also, the readline library seems to cause the locale
to be set according to the current environment variables, even if I don't
call locale.*:

% python2.0 -c 'import string; print string.letters'
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
% python2.0
Python 2.0 (#3, Oct 19 2000, 01:42:41)
[GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> print string.letters
abcdefghijklmnopqrstuvwxyzµßàáâãäåæçèéêëìíîïðñòóôõöøùúûüýþÿABCDEFGHIJKLMNOPQRSTUVWXYZÀÁÂÃÄÅÆÇÈÉÊËÌÍÎÏÐÑÒÓÔÕÖØÙÚÛÜÝÞ
>>>

Here's what grep says on the standard library. Most of these uses seem
incorrect to me:

% grep string.letters **/*.py
Cookie.py:_LegalChars = string.letters + string.digits + "!#$%&'*+-.^_`|~"
cmd.py:IDENTCHARS = string.letters + string.digits + '_'
dospath.py: varchars = string.letters + string.digits + '_-'
lib-old/codehack.py:identchars = string.letters + string.digits + '_' # Identifier characters
ntpath.py: varchars = string.letters + string.digits + '_-'
nturl2path.py: if len(comp) != 2 or comp[0][-1] not in string.letters:
pipes.py:_safechars = string.letters + string.digits + '!@%_-+=:,./' # Safe unquoted
pre.py: alphanum=string.letters+'_'+string.digits
tokenize.py: namechars, numchars = string.letters + '_', string.digits
urlparse.py:scheme_chars = string.letters + string.digits + '+-.'

Follow-Ups:

Date: 2000-Dec-30 19:36
By: akuchling

Comment:
The set of all letters will still be commonly used, though maybe we also
need an alphanumeric constant for A-Za-z0-9 + underscore. I like the
.ascii_letters suggestion.
-------------------------------------------------------

Date: 2000-Dec-30 18:26
By: fdrake

Comment:
Andrew, does it make sense to introduce new constants in string for this?
It seems that each instance is referring to slightly different
specifications or standards (documented or not), so perhaps the constants
should be defined locally within each of the modules. This also avoids
unnecessary dependencies.
-------------------------------------------------------

Date: 2000-Dec-26 12:18
By: nobody

Comment:
string.ascii_letters etc is more precise than alphabet, imho.

-- erno@iki.fi
-------------------------------------------------------

Date: 2000-Dec-26 08:15
By: akuchling

Comment:
The docs for the string module say that, for example, string.lowercase is
"A string containing all the characters that are considered lowercase
letters." This implies that the strings are locale-aware; code that uses
string.lowercase to mean only a-z is therefore in error. (.digits is not
locale-aware.)

Solution: I'd suggest adding new, not locale-aware, constants:
string.alphabet, string.lower_alphabet, string.upper_alphabet, maybe?
Code should then be changed to use these new constants.
-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=126706&group_id=5470
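To make the proposal concrete, here is a small sketch of the kind of
locale-independent constants being discussed. The spellings below are
illustrative only; later Python releases did grow string.ascii_letters and
related names, but this block is not the patch that came out of this
report.

import string

# Explicit, locale-independent letter sets. Spelling them out means a
# later locale.setlocale() call (or readline picking up the environment's
# locale) cannot change what they contain.
ascii_lowercase = 'abcdefghijklmnopqrstuvwxyz'
ascii_uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
ascii_letters = ascii_lowercase + ascii_uppercase

# A module such as cmd.py could then build its identifier set from the
# explicit constant instead of the locale-dependent string.letters
# (string.digits is not locale-aware, per the discussion above):
IDENTCHARS = ascii_letters + string.digits + '_'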
From noreply@sourceforge.net  Sun Dec 31 03:39:06 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Sat, 30 Dec 2000 19:39:06 -0800
Subject: [Python-bugs-list] [Bug #127151] mkhowto --iconserver doesn't do anything
Message-ID: 

Bug #127151, was updated on 2000-Dec-30 19:35
Here is a current snapshot of the bug.

Project: Python
Category: Documentation
Status: Open
Resolution: None
Bug Group: None
Priority: 5
Submitted by: akuchling
Assigned to : fdrake
Summary: mkhowto --iconserver doesn't do anything

Details: The --iconserver option to mkhowto doesn't seem to work. The init
file contains a $ICONSERVER='whatever' line, but LaTeX2HTML (the correct
version, 99.2b8) seems to just be ignoring the setting.

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=127151&group_id=5470

From noreply@sourceforge.net  Sun Dec 31 18:42:28 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Sun, 31 Dec 2000 10:42:28 -0800
Subject: [Python-bugs-list] [Bug #126564] Default of static linking 'bsddb' breaks 3rd party modules
Message-ID: 

Bug #126564, was updated on 2000-Dec-20 22:25
Here is a current snapshot of the bug.

Project: Python
Category: Build
Status: Open
Resolution: None
Bug Group: Platform-specific
Priority: 5
Submitted by: nobody
Assigned to : montanaro
Summary: Default of static linking 'bsddb' breaks 3rd party modules

Details: Python 2.0 builds the 'bsddb' module into the python interpreter
*static* by default. When built this way on systems such as Debian potato
Linux and some versions of Red Hat Linux (to name a few), it links
statically with an early BerkeleyDB 2.1.x. This causes problems for the
current and under-development bsddb 3.x third party modules. They import,
but the functions they call are from the wrong library, so they often
coredump or return unexpected error codes. See the py-bsddb project on
SourceForge. Also see http://electricrain.com/greg/python/ for the current
stable py-bsddb3 module.

Short term solution: Make the default build method for this module
*shared* instead of static.

Long term solution: the py-bsddb project should be able to replace the old
bsddb module in the distribution.

Follow-Ups:

Date: 2000-Dec-31 10:42
By: montanaro

Comment:
It's easy enough for me (or someone else) to change "#*shared*" to
"*shared*" in Setup.config.in, but how do we know whether that will fix
the problem, or whether it will cause other people to have problems?

Also, it would be really helpful if people logged in before submitting
bugs, so the people who try to fix the bugs could correspond with them
about the problems they report...

Skip
-------------------------------------------------------

Date: 2000-Dec-30 18:20
By: fdrake

Comment:
Lots of the recent work on this is Skip's effort, so he can probably
handle this more quickly than the rest of us.
-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=126564&group_id=5470

From noreply@sourceforge.net  Sun Dec 31 22:54:41 2000
From: noreply@sourceforge.net (noreply@sourceforge.net)
Date: Sun, 31 Dec 2000 14:54:41 -0800
Subject: [Python-bugs-list] [Bug #127098] Explanation of try/else in Lang Ref is flawed
Message-ID: 

Bug #127098, was updated on 2000-Dec-29 12:33
Here is a current snapshot of the bug.

Project: Python
Category: Documentation
Status: Closed
Resolution: None
Bug Group: None
Priority: 5
Submitted by: tim_one
Assigned to : twouters
Summary: Explanation of try/else in Lang Ref is flawed

Details: Suggested replacement:

"""
The optional 'else' clause is executed when the 'try' clause terminates by
any means other than an exception or executing a 'return', 'continue' or
'break' statement. Exceptions in the 'else' clause are not handled by the
prereceding 'except' clauses.
"""

See Python-Dev for discussion.

Follow-Ups:

Date: 2000-Dec-31 14:54
By: twouters

Comment:
Fixed in revision 1.21 of Doc/ref/ref7.tex.
-------------------------------------------------------

Date: 2000-Dec-29 12:36
By: tim_one

Comment:
Except I should have spelled "preceding" correctly.
-------------------------------------------------------

For detailed info, follow this link:
http://sourceforge.net/bugs/?func=detailbug&bug_id=127098&group_id=5470
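As a concrete illustration of the corrected wording above (a sketch added
for the reader, not text from the bug report): the 'else' suite runs only
when the 'try' suite finishes without an exception, and anything raised in
the 'else' suite is not caught by the preceding 'except' clauses. The
function and the sample table below are made up for the example.

def lookup(table, key):
    try:
        value = table[key]
    except KeyError:
        print('no such key: ' + key)
    else:
        # Runs only when the try suite raised nothing and did not leave
        # via return/break/continue. An exception raised here would NOT
        # be handled by the except clause above.
        print('found: ' + str(value))

lookup({'spam': 1}, 'spam')   # prints: found: 1
lookup({'spam': 1}, 'eggs')   # prints: no such key: eggs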