From noreply@sourceforge.net Fri Feb 1 03:32:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 31 Jan 2002 19:32:23 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-499529 ] email.Utils.msgid() Message-ID: Feature Requests item #499529, was opened at 2002-01-04 10:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=499529&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: Jason R. Mastaler (jasonrm) >Assigned to: Barry Warsaw (bwarsaw) Summary: email.Utils.msgid() Initial Comment: It seems the Python library does not include a general method for creating an rfc2822 compliant Message-ID string. This would be useful for e-mail applications that need to generate their own Message-IDs. If you are interested in adding this method, I can include a patch against Utils.py from the email module. I've already implemented this functionality in Python for one of my applications. It produces Message-ID strings that look like: <20020104184922.4077.30184.tmda@ns.mastaler.com> date + random integer + process id + a string @ FQDN Let me know if you are interested, and also preference on the name of the method (you may prefer something other than msgid()). ---------------------------------------------------------------------- >Comment By: Barry Warsaw (bwarsaw) Date: 2002-01-31 19:32 Message: Logged In: YES user_id=12800 +1 I'm going to close this report, so please add the patch to the mimelib project (under the rfe you've already got open). I think the function probably ought to be called make_msgid() in Utils.py w/ let's say a single argument for the string part. What should the default be? Hmm, I can't think of anything, so I guess the empty string. I wonder if we should have an optional arg to use something other than a FQDN? Naw, let's keep this simple. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=499529&group_id=5470 From noreply@sourceforge.net Fri Feb 1 07:31:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 31 Jan 2002 23:31:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-511603 ] Error calling str on subclass of int Message-ID: Bugs item #511603, was opened at 2002-01-31 23:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Socci (nsocci) Assigned to: Nobody/Anonymous (nobody) Summary: Error calling str on subclass of int Initial Comment: Not sure if this is a bug or my misunderstanding of str() and repr(). This works:
class Bfloat(float):
    def __repr__(self):
        return(str(self))
bf=Bfloat(1.0)
print bf
so does subclassing long, float, and str, but the following causes an infinite recursion:
class Bint(int):
    def __repr__(self):
        return(str(self))
bi = Bint(1)
Version Info: Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. 
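[Sketch for illustration, referring back to the email.Utils.msgid() feature request at the top of this message: a rough idea of such a helper. The format (timestamp, random integer, process id, optional id string, FQDN) is taken from the report, and the make_msgid() name and single string argument are only what Barry suggests; everything else here is an assumption, not the actual patch.]

import os, random, socket, time

def make_msgid(idstring=""):
    # Assemble an RFC 2822 Message-ID of the form
    # <YYYYMMDDHHMMSS.random.pid[.idstring]@fqdn>
    timeval = time.strftime("%Y%m%d%H%M%S", time.gmtime())
    randint = random.randrange(100000)
    pid = os.getpid()
    if idstring:
        idstring = "." + idstring
    return "<%s.%d.%d%s@%s>" % (timeval, randint, pid, idstring, socket.getfqdn())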
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 From noreply@sourceforge.net Fri Feb 1 07:52:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 31 Jan 2002 23:52:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-511603 ] Error calling str on subclass of int Message-ID: Bugs item #511603, was opened at 2002-01-31 23:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Socci (nsocci) Assigned to: Nobody/Anonymous (nobody) Summary: Error calling str on subclass of int Initial Comment: Not sure if this is a bug or my misunderstanding of str() and repr(). This works:
class Bfloat(float):
    def __repr__(self):
        return(str(self))
bf=Bfloat(1.0)
print bf
so does subclassing long, float, and str, but the following causes an infinite recursion:
class Bint(int):
    def __repr__(self):
        return(str(self))
bi = Bint(1)
Version Info: Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-01-31 23:52 Message: Logged In: YES user_id=31435 Well, the difference with long, float and str is that they have *distinct* repr and str implementations. int does not: repr and str are exactly the same function for int, so calling str inside an int subclass repr implementation is really calling repr again (just with a different *name*). The same is true of, e.g., the dict type, which also uses the same function for str and repr (so you'd also see unbounded recursion if you tried a similar thing with a dict subtype). This is uncomfortably clumsy to explain, so maybe there's room for improvement. In the meantime, since str and repr *are* the same for ints, you could write
class Bint(int):
    def __repr__(self):
        return int.__repr__(self)
Calling a base class method would be clearer anyway, and works in all your examples. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 From noreply@sourceforge.net Fri Feb 1 09:45:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 01:45:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-509288 ] package_dir paths not converted Message-ID: Bugs item #509288, was opened at 2002-01-27 12:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=509288&group_id=5470 Category: Distutils Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) >Assigned to: Thomas Heller (theller) Summary: package_dir paths not converted Initial Comment: If you supply a package_dir dictionary to setup(), as the most recent setup.py for Numeric does, the pathnames in this dictionary are used as-is, instead of going through a URL-to-local-pathname-convention mapping, as all other pathnames do. I don't understand enough of the architecture to know where to do this conversion, so if someone else could have a look.... 
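[Sketch for illustration: the mapping Jack refers to is what distutils.util.convert_path() does for other pathnames -- turning the '/'-separated names written in setup scripts into the local filename convention. Applying it to the package_dir option might look roughly like this; it is an assumption for context, not the patch referenced in the comments below.]

from distutils.util import convert_path

def convert_package_dir(package_dir):
    # package_dir may be None rather than a dictionary (see the comments below).
    if package_dir is None:
        return {}
    # Convert each '/'-separated directory to the local path convention.
    converted = {}
    for package, directory in package_dir.items():
        converted[package] = convert_path(directory)
    return converted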
---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2002-02-01 01:45 Message: Logged In: YES user_id=11105 Assigned to me, checked in, and closed. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-01-31 14:21 Message: Logged In: YES user_id=45365 Your patch works wonderfully! Please check it in... ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2002-01-28 08:20 Message: Logged In: YES user_id=11105 Oops, apparently package_dir can be None instead of being a dictionary. Updated the patch (patch2.txt). ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2002-01-28 01:41 Message: Logged In: YES user_id=11105 Jack, does this patch fix your problem? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=509288&group_id=5470 From noreply@sourceforge.net Fri Feb 1 09:47:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 01:47:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-511055 ] bdist_wininst fails on StandaloneZODB Message-ID: Bugs item #511055, was opened at 2002-01-30 21:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511055&group_id=5470 Category: Distutils Group: Python 2.2 >Status: Closed Resolution: Invalid Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Thomas Heller (theller) Summary: bdist_wininst fails on StandaloneZODB Initial Comment: I don't know what I'm doing (never tried anything like this before), so scream if this is stupid. From a vanilla CVS checkout of Zope Corp's StandaloneZODB, I tried (this is with Python 2.2 final): \python22\python setup.py bdist_wininst This was on Win98SE, with MSVC 6 installed. It seemed to run fine for awhile, but eventually barfed. Here's the tail end:
... LIB\BTrees
copying build\lib.win32-2.2\BTrees\OIBTree.pyc -> build\bdist.win32\wininst\PLATLIB\BTrees
warning: install: modules installed to 'build\bdist.win32\wininst\PLATLIB\', which is not in Python's module search path (sys.path) -- you'll have to change the search path yourself
changing into 'build\bdist.win32\wininst'
zip -rq c:\windows\TEMP\~-1370033-3.zip .
creating 'c:\windows\TEMP\~-1370033-3.zip' and adding '.' to it
changing back to 'C:\Code\StandaloneZODB'
creating dist\BTrees-?.win32-py2.2.exe
error: dist\BTrees-?.win32-py2.2.exe: No such file or directory
C:\Code\StandaloneZODB>
FYI, "setup.py install" and "setup.py build" work fine from this directory. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2002-02-01 01:47 Message: Logged In: YES user_id=11105 Ok. Closed again. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-31 12:48 Message: Logged In: YES user_id=31435 I wouldn't bother, Thomas. All OSes have *some* disallowed characters (e.g., the platform path separator character is always disallowed, else a path would be ambiguous). The problem here was using "?" instead of the universally recognized "XXX" to mean "this needs more attention", and that quirk is unlikely to be common. 
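[Sketch for illustration: the check Thomas asks about in the comment below would amount to something like this. It is a hypothetical helper, not distutils code, and the character set shown is only the Windows one.]

import re

# Characters that cannot appear in a Windows filename.
_BAD_FILENAME_CHARS = re.compile(r'[<>:"/\\|?*]')

def installer_basename(name, version):
    # Embed the version only if it is safe to use in a filename;
    # otherwise fall back to the bare distribution name.
    if version and not _BAD_FILENAME_CHARS.search(version):
        return "%s-%s.win32.exe" % (name, version)
    return "%s.win32.exe" % name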
---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2002-01-31 12:27 Message: Logged In: YES user_id=11105 Maybe the right branch of StandaloneZODB fixes this particular problem; the branch did at least reproduce the problem ;-) The question is: should distutils check if the version number contains characters disallowed in filenames (on Windows; are there any for *nix or Mac?), and construct a filename without embedded version number in these cases? I have reopened the bug, please comment. ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2002-01-31 11:17 Message: Logged In: YES user_id=12800 I don't think you're using the right cvs branch on your checkout. You should be using StandaloneZODB-1_0-branch. On that branch, setup.py will have a revision 1.9.4.4 which should have the correct values. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2002-01-31 07:54 Message: Logged In: YES user_id=11105 Ok, I found it. Yup, I was right. The distutils-bug *is* fixed, here's the patch for Standalone's setup.py:
RCS file: /cvs-repository/StandaloneZODB/setup.py,v
retrieving revision 1.10
diff -c -r1.10 setup.py
*** setup.py    21 Jan 2002 16:47:01 -0000    1.10
--- setup.py    31 Jan 2002 15:51:00 -0000
***************
*** 115,121 ****
  )
  setup(name="BTrees",
!     version="?",
      packages=["BTrees", "BTrees.tests"],
      ext_modules = [oob, oib, iib, iob],
      author = zope_corp,
--- 115,121 ----
  )
  setup(name="BTrees",
!     version="XXX",
      packages=["BTrees", "BTrees.tests"],
      ext_modules = [oob, oib, iib, iob],
      author = zope_corp,
I'll mark the bug as invalid. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2002-01-31 07:46 Message: Logged In: YES user_id=11105 I thought this bug was fixed before 2.2 final... Where can I check out StandaloneZODB so that I can look into it? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-01-31 01:32 Message: Logged In: YES user_id=38388 This is a distutils sort of bug: if you don't specify a package version number, distutils uses '?' instead. Unfortunately, some OSes don't handle '?' in filenames too well and this is probably what you are seeing here. The fix is simple: define a version number in setup.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511055&group_id=5470 From noreply@sourceforge.net Fri Feb 1 10:26:37 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 02:26:37 -0800 Subject: [Python-bugs-list] [ python-Bugs-220993 ] Installation flaky with multiple installers, old versions Message-ID: Bugs item #220993, was opened at 2000-11-01 05:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Michael Hudson (mwh) Summary: Installation flaky with multiple installers, old versions Initial Comment: Installation tends to have problems when there are old installations present, especially when a different user is doing the new installation. 
In particular, it appears that the chmod() done in 'copy_file()' (as a result of the "install" command attempting to preserve the mode of files from the build tree) fails, because you can't chmod() a file owned by somebody else. Paul Dubois suggests that simply unlinking the target file before doing the copy should work. I think he's right, but need to think about it and test it a bit first. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-01 02:26 Message: Logged In: YES user_id=6656 Um, may be being dense, but os.unlink raises os.error if you try to unlink a file that doesn't exist... so won't this blow up in a first-time install for instance? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-01-31 12:35 Message: Logged In: YES user_id=11375 This bit David Binger today, so I finally dug in and fixed it. Patch attached for a sanity-check. (2.2 bugfix candidate) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-01-23 08:07 Message: Logged In: YES user_id=6656 I'll have a look at this, as I've already assigned some installation related bugs to myself. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-09-20 11:41 Message: Logged In: YES user_id=11375 I think unlinking first is the right thing to do, having just run into another problem that seems to be caused by this. Installing *.so files to an NFS partition messed up other people, I think because they had the *.so file loaded into memory and the kernel's VM got confused. (That's the theory, anyway.) Bumping up the priority as a reminder to myself... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 From noreply@sourceforge.net Fri Feb 1 10:36:31 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 02:36:31 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. 
To do this, please invoke the setup.py line manually. I.e. if the makefile invokes
ENV1=val1 ENV2=val2 python-command python-options arguments
you will need to perform the following commands
ENV1=val1 ENV2=val2
export ENV1 ENV2
gdb python-command
run python-options arguments
As a side point, what is the exact gcc version that you are using (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Fri Feb 1 10:59:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 02:59:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-511655 ] Readline: unwanted filename completion Message-ID: Bugs item #511655, was opened at 2002-02-01 02:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Tjabo Kloppenburg (tapo) Assigned to: Nobody/Anonymous (nobody) Summary: Readline: unwanted filename completion Initial Comment: Hi all. Something is broken with the completion of readline:
simon@ping-pong:~$ python
Python 2.1.1+ (#1, Jan 8 2002, 00:37:12)
[GCC 2.95.4 20011006 (Debian prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import rlcompleter
>>> rlcompleter.readline.parse_and_bind ("tab: complete")
>>> foo
foo.gif foo.txt foo2.gif foobar.jpg
>>> foo.gif
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
NameError: name 'foo' is not defined
[the "foo.gif", "foo.txt", "foo2.gif" and "foobar.jpg" are files in my current working directory] It seems that readline has a fallback to filename completion when no matches are available. Even if I use my own completion function:
>>> def nullcompleter (text, state):
...     print "\nBuh!"
...     return None
...
>>> rlcompleter.readline.set_completer(nullcompleter)
>>> foo
Buh!
Buh!
foo.gif foo.txt foo2.gif foobar.jpg foot
>>> foo
there is this filename fallback. Is this a known problem? Is there an evil hack to avoid this? 
Thanks, Simon -- Simon.Budig@unix-ag.org http://www.home.unix-ag.org/simon/ ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 From noreply@sourceforge.net Fri Feb 1 11:03:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 03:03:54 -0800 Subject: [Python-bugs-list] [ python-Bugs-511655 ] Readline: unwanted filename completion Message-ID: Bugs item #511655, was opened at 2002-02-01 02:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Tjabo Kloppenburg (tapo) Assigned to: Nobody/Anonymous (nobody) Summary: Readline: unwanted filename completion Initial Comment: Hi all. Something is broken with the completion of readline: simon@ping-pong:~$ python Python 2.1.1+ (#1, Jan 8 2002, 00:37:12) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import rlcompleter >>> rlcompleter.readline.parse_and_bind ("tab: complete") >>> foo foo.gif foo.txt foo2.gif foobar.jpg >>> foo.gif Traceback (most recent call last): File "", line 1, in ? NameError: name 'foo' is not defined [the "foo.gif", "foo.txt", "foo2.gif" and "foobar.jpg" are files in my current working directory] It seems that readline has a fallback to filename completion when no matches are available. Even if I use my own completion function: >>> def nullcompleter (text, state): ... print "\nBuh!" ... return None ... >>> rlcompleter.readline.set_completer(nullcompleter) >>> foo Buh! Buh! foo.gif foo.txt foo2.gif foobar.jpg foot >>> foo there is this filename fallback. Is this a known Problem? Is there an evil hack to avoid this? Thanks, Simon -- Simon.Budig@unix-ag.org http://www.home.unix-ag.org/simon/ ---------------------------------------------------------------------- >Comment By: Tjabo Kloppenburg (tapo) Date: 2002-02-01 03:03 Message: Logged In: YES user_id=309048 simon is a friend of mine. He tried to submit the bug without sourceforge account, but he failed. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 From noreply@sourceforge.net Fri Feb 1 12:31:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 04:31:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-501164 ] 2.2 on linux SEGV sometimes Message-ID: Bugs item #501164, was opened at 2002-01-08 19:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: MATSUI Tetsushi (tetsushi) Assigned to: Nobody/Anonymous (nobody) Summary: 2.2 on linux SEGV sometimes Initial Comment: I am using Python 2.2. The execution with pure python scripts suddenly stops after several hours or a few days. With the latest core I run gdb, it says: Program terminated with signal 11, Segmentation fault. 
and the head of bt is like this:
#0 0x80afb1e in binary_op1 (v=0x8dc0f54, w=0x8c641bc, op_slot=4) at Objects/abstract.c:340
#1 0x80b2537 in PyNumber_Subtract (v=0x8dc0f54, w=0x8c641bc) at Objects/abstract.c:392
#2 0x8079f27 in eval_frame (f=0x820c1fc) at Python/ceval.c:988
#3 0x807cd50 in PyEval_EvalCodeEx (co=0x81cf608, globals=0x81d5214, locals=0x0, args=0x8202fc4, argcount=5, kws=0x8202fd8, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2574
#4 0x807f41c in fast_function (func=0x81e4584, pp_stack=0xbfffe474, n=5, na=5, nk=0) at Python/ceval.c:3150
Thanks, tetsushi ---------------------------------------------------------------------- >Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-02-01 04:31 Message: Logged In: YES user_id=421269 I changed my gcc back to 2.95.3 from 3.0.3. And I have not experienced segmentation fault since then. Thus I conclude the problem is in gcc 3.0.x and Python is innocent. Thank you very much. ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-16 08:44 Message: Logged In: YES user_id=421269 I tried to reproduce SEGV.
from alib import *
for i in range(10000,50000):
    n=(i**7-1)/(i-1)
    if isprime(n): continue
    print n,MPQS(n).run()
The above script stopped when i was 17359. It took about 1 day on my PC. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-14 14:21 Message: Logged In: YES user_id=21627 I cannot reproduce this:
>>> from alib import *
>>> MPQS(30).run()
starting MPQS 10
{10: 1, 3: 1}
>>> MPQS(3000000000000000000000000000000000).run()
starting MPQS 1000000000000000000000000000000000
{1000000000000000000000000000000000L: 1, 3: 1}
Can you please give the *precise* sequence of commands to make this crash? ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-08 23:00 Message: Logged In: YES user_id=421269 OK, I attach the main script. (Maybe the 1659-th line is the stopping point.) It consists of many factoring or primality testing functions and classes, and the stopping point I suspect is in the class MPQS. To run the algorithm MPQS(n).run() where n is about 30 decimal digit composite. The length of stack trace is 53. The last 3 are like this:
#50 0x8053fcb in Py_Main (argc=5, argv=0xbffff644) at Modules/main.c:369
#51 0x8053a47 in main (argc=5, argv=0xbffff644) at Modules/python.c:10
#52 0x4004ca49 in Letext ()
Thanks, tetsushi. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-08 20:00 Message: Logged In: YES user_id=6380 Can you attach the script, any input data it needs, and instructions for running it? Otherwise there's no hope in debugging this. Also, how long is the stack? Could it be a stack overflow? 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 From noreply@sourceforge.net Fri Feb 1 15:08:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 07:08:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-220993 ] Installation flaky with multiple installers, old versions Message-ID: Bugs item #220993, was opened at 2000-11-01 05:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Michael Hudson (mwh) Summary: Installation flaky with multiple installers, old versions Initial Comment: Installation tends to have problems when there are old installations present, especially when a different user is doing the new installation. In particular, it appears that the chmod() done in 'copy_file()' (as a result of the "install" command attempting to preserve the mode of files from the build tree) fails, because you can't chmod() a file owned by somebody else. Paul Dubois suggests that simply unlinking the target file before doing the copy should work. I think he's right, but need to think about it and test it a bit first. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2002-02-01 07:08 Message: Logged In: YES user_id=11375 Oops! Shows I only tried it with a repeated install... New patch uploaded, which only unlinks if os.path.exists(dst). ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-01 02:26 Message: Logged In: YES user_id=6656 Um, may be being dense, but os.unlink raises os.error if you try to unlink a file that doesn't exist... so won't this blow up in a first-time install for instance? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-01-31 12:35 Message: Logged In: YES user_id=11375 This bit David Binger today, so I finally dug in and fixed it. Patch attached for a sanity-check. (2.2 bugfix candidate) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-01-23 08:07 Message: Logged In: YES user_id=6656 I'll have a look at this, as I've already assigned some installation related bugs to myself. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-09-20 11:41 Message: Logged In: YES user_id=11375 I think unlinking first is the right thing to do, having just run into another problem that seems to be caused by this. Installing *.so files to an NFS partition messed up other people, I think because they had the *.so file loaded into memory and the kernel's VM got confused. (That's the theory, anyway.) Bumping up the priority as a reminder to myself... 
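[Sketch for illustration: the unlink-then-copy behaviour being discussed, written as a hypothetical standalone helper -- not the actual file_util.py patch; the mode-preserving step is only an assumption based on the initial comment.]

import os, stat, shutil

def copy_file_overwriting(src, dst):
    # Remove a pre-existing destination first; chmod() on a file owned
    # by somebody else would fail, but unlinking it in a directory we
    # can write to succeeds.
    if os.path.exists(dst):
        os.unlink(dst)
    shutil.copyfile(src, dst)
    # Preserve the mode of the source file on the fresh copy.
    os.chmod(dst, stat.S_IMODE(os.stat(src)[stat.ST_MODE]))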
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 From noreply@sourceforge.net Fri Feb 1 15:16:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 07:16:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-220993 ] Installation flaky with multiple installers, old versions Message-ID: Bugs item #220993, was opened at 2000-11-01 05:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 Category: Distutils Group: None Status: Open >Resolution: Accepted Priority: 5 Submitted By: Greg Ward (gward) >Assigned to: A.M. Kuchling (akuchling) Summary: Installation flaky with multiple installers, old versions Initial Comment: Installation tends to have problems when there are old installations present, especially when a different user is doing the new installation. In particular, it appears that the chmod() done in 'copy_file()' (as a result of the "install" command attempting to preserve the mode of files from the build tree) fails, because you can't chmod() a file owned by somebody else. Paul Dubois suggests that simply unlinking the target file before doing the copy should work. I think he's right, but need to think about it and test it a bit first. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-01 07:16 Message: Logged In: YES user_id=6656 That looks better. Check it in? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-02-01 07:08 Message: Logged In: YES user_id=11375 Oops! Shows I only tried it with a repeated install... New patch uploaded, which only unlinks if os.path.exists(dst). ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-01 02:26 Message: Logged In: YES user_id=6656 Um, may be being dense, but os.unlink raises os.error if you try to unlink a file that doesn't exist... so won't this blow up in a first-time install for instance? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-01-31 12:35 Message: Logged In: YES user_id=11375 This bit David Binger today, so I finally dug in and fixed it. Patch attached for a sanity-check. (2.2 bugfix candidate) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-01-23 08:07 Message: Logged In: YES user_id=6656 I'll have a look at this, as I've already assigned some installation related bugs to myself. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-09-20 11:41 Message: Logged In: YES user_id=11375 I think unlinking first is the right thing to do, having just run into another problem that seems to be caused by this. Installing *.so files to an NFS partition messed up other people, I think because they had the *.so file loaded into memory and the kernel's VM got confused. (That's the theory, anyway.) Bumping up the priority as a reminder to myself... 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 From noreply@sourceforge.net Fri Feb 1 15:34:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 07:34:41 -0800 Subject: [Python-bugs-list] [ python-Bugs-511603 ] Error calling str on subclass of int Message-ID: Bugs item #511603, was opened at 2002-01-31 23:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 Category: Type/class unification Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Nicholas Socci (nsocci) Assigned to: Nobody/Anonymous (nobody) Summary: Error calling str on subclass of int Initial Comment: Not sure if this is a bug or my misunderstanding of str() and repr(). This works:
class Bfloat(float):
    def __repr__(self):
        return(str(self))
bf=Bfloat(1.0)
print bf
so does subclassing long, float, and str, but the following causes an infinite recursion:
class Bint(int):
    def __repr__(self):
        return(str(self))
bi = Bint(1)
Version Info: Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "copyright", "credits" or "license" for more information. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-01 07:34 Message: Logged In: YES user_id=6380 It's easy to fix in this particular case, and should be fixed in general, by filling the tp_str slot for PyInt_Type with the same slot as the tp_repr slot; I've done so in CVS so I can close this as Fixed. What goes on without that is that tp_str of the subclass is undefined, so it falls back on tp_repr of the subclass. I would definitely say that calling str() from __repr__() or repr() from __str__() is walking on thin ice though and should be avoided. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-31 23:52 Message: Logged In: YES user_id=31435 Well, the difference with long, float and str is that they have *distinct* repr and str implementations. int does not: repr and str are exactly the same function for int, so calling str inside an int subclass repr implementation is really calling repr again (just with a different *name*). The same is true of, e.g., the dict type, which also uses the same function for str and repr (so you'd also see unbounded recursion if you tried a similar thing with a dict subtype). This is uncomfortably clumsy to explain, so maybe there's room for improvement. In the meantime, since str and repr *are* the same for ints, you could write
class Bint(int):
    def __repr__(self):
        return int.__repr__(self)
Calling a base class method would be clearer anyway, and works in all your examples. 
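[Sketch for illustration: the workaround Tim describes, written out as a runnable snippet under Python 2.2 semantics.]

class Bint(int):
    def __repr__(self):
        # Delegate to the base class directly; calling str(self) here would
        # re-enter this __repr__ (str and repr are the same function for int)
        # and recurse without bound.
        return int.__repr__(self)

bi = Bint(1)
print repr(bi)    # prints 1 instead of overflowing the stack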
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511603&group_id=5470 From noreply@sourceforge.net Fri Feb 1 15:35:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 07:35:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-511737 ] Bug/limitation in ConfigParser Message-ID: Bugs item #511737, was opened at 2002-02-01 07:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511737&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Loïc Lefort (loicl) Assigned to: Nobody/Anonymous (nobody) Summary: Bug/limitation in ConfigParser Initial Comment: It is not possible to use '%' character in config options in combination with $() substitution. Example: Given this configuration file: [DEFAULT] option1=xxx option2=%(option1)s/xxx ok=%(option1)s/%%s not_ok=%(option2)s/%%s config.get('DEFAULT', 'ok') returns xxx/%s but config.get('DEFAULT', 'not_ok') fails with an exception because the '%' needs to be escaped multiple times depending on the evaluation depth: %(option2)s/%%s -> %(option1)s/xxx/%s -> exception what I would like it to do is: %(option2)s/%%s -> %(option1)s/xxx/%%s -> xxx/xxx/%s Attached to this bug report is a simple patch to work around this limitation (not very elegant, but it works) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511737&group_id=5470 From noreply@sourceforge.net Fri Feb 1 15:38:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 07:38:07 -0800 Subject: [Python-bugs-list] [ python-Bugs-511737 ] Bug/limitation in ConfigParser Message-ID: Bugs item #511737, was opened at 2002-02-01 07:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511737&group_id=5470 Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Loïc Lefort (loicl) Assigned to: Nobody/Anonymous (nobody) Summary: Bug/limitation in ConfigParser Initial Comment: It is not possible to use '%' character in config options in combination with $() substitution. 
Example: Given this configuration file: [DEFAULT] option1=xxx option2=%(option1)s/xxx ok=%(option1)s/%%s not_ok=%(option2)s/%%s config.get('DEFAULT', 'ok') returns xxx/%s but config.get('DEFAULT', 'not_ok') fails with an exception because the '%' needs to be escaped multiple times depending on the evaluation depth: %(option2)s/%%s -> %(option1)s/xxx/%s -> exception what I would like it to do is: %(option2)s/%%s -> %(option1)s/xxx/%%s -> xxx/xxx/%s Attached to this bug report is a simple patch to work around this limitation (not very elegant, but it works) ---------------------------------------------------------------------- >Comment By: Loïc Lefort (loicl) Date: 2002-02-01 07:38 Message: Logged In: YES user_id=78862 Forgot the python version: this problem is present in both python 2.1.2 and python 2.2 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511737&group_id=5470 From noreply@sourceforge.net Fri Feb 1 16:04:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 08:04:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 08:04 Message: Logged In: YES user_id=443851 Thanks for the gdb tip, I've switched to the solaris7 pkg for gdb. The version info for gcc does not explicitly list what flavor of Solaris it's built for, but the version number is 3.0.3, and it reads it's specs from /usr/local/lib/gcc-lib/sparc-sun- solaris2.7/3.0.3/specs, which leads me to believe that it's built for solaris7. Anywho, after some freaking around with env var's & gdb, I got the following output (see gdbout). It leads me to believe that the problem is in /usr/lib/libc.so.1, but I'm not sure how to replace/update this lib, or even if it is indeed the source of my python misery. Any input or guidance would be appreciated. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. To do this, please invoke the setup.py line manually. I.e. 
if the makefile invoke ENV1=val1 ENV2=val2 python-command python-options arguments you will need to perform the following commands ENV1=val1 ENV2=val2 export ENV1 ENV2 gdb python-command run python-options arguments As a side point, what is the exact gcc version that you are usingq (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Fri Feb 1 16:56:17 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 08:56:17 -0800 Subject: [Python-bugs-list] [ python-Bugs-511786 ] urllib2.py loses headers on redirect Message-ID: Bugs item #511786, was opened at 2002-02-01 08:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511786&group_id=5470 Category: Python Library Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2.py loses headers on redirect Initial Comment: Using urllib2 for an HTTP request that involves a redirect, any custom-supplied headers are lost on the second (redirected) request. Example: >>> from urllib2 import * >>> req = Request("http://www.python.org/doc", ... headers={"cookie": "foo=bar"}) >>> result = urlopen(req) This results in two HTTP requests being sent to www.python.org. The first one includes my cookie header: GET /doc HTTP/1.0 Host: www.python.org User-agent: Python-urllib/2.0a1 cookie: foo=bar but the second one (after the fix-trailing-slash redirect) does not: GET /doc/ HTTP/1.0 Host: www.python.org User-agent: Python-urllib/2.0a1 Luckily, a one-line patch (attached) seems to fix the bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511786&group_id=5470 From noreply@sourceforge.net Fri Feb 1 18:30:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 10:30:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-220993 ] Installation flaky with multiple installers, old versions Message-ID: Bugs item #220993, was opened at 2000-11-01 05:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 Category: Distutils Group: None >Status: Closed Resolution: Accepted Priority: 5 Submitted By: Greg Ward (gward) Assigned to: A.M. Kuchling (akuchling) Summary: Installation flaky with multiple installers, old versions Initial Comment: Installation tends to have problems when there are old installations present, especially when a different user is doing the new installation. 
In particular, it appears that the chmod() done in 'copy_file()' (as a result of the "install" command attempting to preserve the mode of files from the build tree) fails, because you can't chmod() a file owned by somebody else. Paul Dubois suggests that simply unlinking the target file before doing the copy should work. I think he's right, but need to think about it and test it a bit first. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2002-02-01 10:30 Message: Logged In: YES user_id=11375 Checked in as rev. 1.12 of file_util.py. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-01 07:16 Message: Logged In: YES user_id=6656 That looks better. Check it in? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-02-01 07:08 Message: Logged In: YES user_id=11375 Oops! Shows I only tried it with a repeated install... New patch uploaded, which only unlinks if os.path.exists(dst). ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-01 02:26 Message: Logged In: YES user_id=6656 Um, may be being dense, but os.unlink raises os.error if you try to unlink a file that doesn't exist... so won't this blow up in a first-time install for instance? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2002-01-31 12:35 Message: Logged In: YES user_id=11375 This bit David Binger today, so I finally dug in and fixed it. Patch attached for a sanity-check. (2.2 bugfix candidate) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-01-23 08:07 Message: Logged In: YES user_id=6656 I'll have a look at this, as I've already assigned some installation related bugs to myself. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2001-09-20 11:41 Message: Logged In: YES user_id=11375 I think unlinking first is the right thing to do, having just run into another problem that seems to be caused by this. Installing *.so files to an NFS partition messed up other people, I think because they had the *.so file loaded into memory and the kernel's VM got confused. (That's the theory, anyway.) Bumping up the priority as a reminder to myself... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=220993&group_id=5470 From noreply@sourceforge.net Fri Feb 1 18:31:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 10:31:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-479469 ] File copy fails on True64 AFS file syste Message-ID: Bugs item #479469, was opened at 2001-11-08 00:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=479469&group_id=5470 Category: Distutils Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Konrad Hinsen (hinsen) >Assigned to: A.M. 
Kuchling (akuchling) Summary: File copy fails on True64 AFS file syste Initial Comment: The following report comes from an MMTK user who has a peculiar installation problem which is probably caused by some Distutils bug/feature: ------------------------------------------------ As an aside, I'm having problems with the installation of MMTK itself on Alpha/Tru64. I keep getting these: copying MMTK/Database/Groups/aspartic_acid_uni2 -> /afs/bi/v/@sys/languages/python/latest/lib/python2.0/site-packages/MMTK/Database/Groups error: /afs/bi/v/@sys/languages/python/latest/lib/python2.0/site-packages/MMTK/Database/Groups/aspartic_acid_uni2: Not owner The file is copied, but the installation process halts there. I am able to proceed by rerunning the python setup.py install; it notices that that file got there OK, copies the next one, and halts again. As you notice, we have the AFS file system (whose chmod/chown semantics are quite different from regular UNIX). This is not a problem on Linux, HP, SGI or Sun, only on Alpha. Also, it does not matter whether I have my normal account or the administrator account on AFS. What is the installation program trying to do and is it altogether necessary? ------------------------------------------------- The setup.py script is attached. The file mentioned above is one of the data files that end up in the data_files variable. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2002-02-01 10:31 Message: Logged In: YES user_id=11375 Revision 1.12 of Lib/distutils/file_util.py now unlinks the destination file before copying to it. Perhaps this will fix the problems on AFS. (Let me know if it doesn't.) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=479469&group_id=5470 From noreply@sourceforge.net Fri Feb 1 20:14:39 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 12:14:39 -0800 Subject: [Python-bugs-list] [ python-Bugs-511876 ] UserList.__cmp__() raises RuntimeError Message-ID: Bugs item #511876, was opened at 2002-02-01 12:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Anthony Baxter (anthonybaxter) Summary: UserList.__cmp__() raises RuntimeError Initial Comment: Summary says it all. The trunk version of this method (i.e. Python 2.2) doesn't raise this exception. Was this even intended? It makes it difficult to write derived classes that work under both Python 2.1.x and Python 2.2. 
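[Sketch for illustration: a quick interactive check of the behaviour Guido describes in the reply below -- under 2.1.x, comparisons on a UserList subclass are resolved through the rich comparison methods, so the obsolete __cmp__ never runs. Assumes the stock UserList from the 2.1 library.]

from UserList import UserList

class MyList(UserList):
    pass

# Resolved through the rich comparison methods, not UserList.__cmp__,
# so no RuntimeError is raised:
print cmp(MyList([1]), MyList([2]))    # -1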
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 From noreply@sourceforge.net Fri Feb 1 20:23:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 12:23:05 -0800 Subject: [Python-bugs-list] [ python-Bugs-511876 ] UserList.__cmp__() raises RuntimeError Message-ID: Bugs item #511876, was opened at 2002-02-01 12:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 Category: Python Library Group: Python 2.1.2 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Barry Warsaw (bwarsaw) >Assigned to: Guido van Rossum (gvanrossum) Summary: UserList.__cmp__() raises RuntimeError Initial Comment: Summary says it all. The trunk version of this method (i.e. Python 2.2) doesn't raise this exception. Was this even intended? It makes it difficult to write derived classes that work under both Python 2.1.x and Python 2.2. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-01 12:23 Message: Logged In: YES user_id=6380 It was intentional that __cmp__ raised an error, because it wasn't supposed to be called any more -- as of 2.1, rich comparisons take priority. Try it: if you cmp() a UserList instance in 2.1, you don't get an exception, because __cmp__ isn't called. You only ran into this because you were using UserList as a mix-in class for ExtensionClass, which doesn't support rich comparisons. I don't think it's a bug, and I'm closing it as Works For Me. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 From noreply@sourceforge.net Fri Feb 1 20:26:03 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 12:26:03 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 12:26 Message: Logged In: YES user_id=21627 Ok, gcc 3.0.3 itself could be a source of problems, but I won't accuse that compiler prematurely (you might want to try 2.95.x, though, if you have that readily available). As for the gdb analysis: that it crashes is strlen is not the problem; strlen is the innocent C library function that computes the length of the string. Please invoke the command "bt" when it crashes; that should tell you the backtrace (i.e. where strlen is called from) - please report that. If you want to investigate further: "up" brings you up a stack-level, and "p varname" prints a variable. 
This approach to debugging may take many more rounds, so I'd understand if you are ready to give up (sunfreeware has 2.1.1 binaries). It's just that it builds fine for me (on Solaris 8, using gcc 2.95.2), so I have no clue as to what the problem might be. Did you pass any options to ./configure? ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 08:04 Message: Logged In: YES user_id=443851 Thanks for the gdb tip, I've switched to the solaris7 pkg for gdb. The version info for gcc does not explicitly list what flavor of Solaris it's built for, but the version number is 3.0.3, and it reads it's specs from /usr/local/lib/gcc-lib/sparc-sun- solaris2.7/3.0.3/specs, which leads me to believe that it's built for solaris7. Anywho, after some freaking around with env var's & gdb, I got the following output (see gdbout). It leads me to believe that the problem is in /usr/lib/libc.so.1, but I'm not sure how to replace/update this lib, or even if it is indeed the source of my python misery. Any input or guidance would be appreciated. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. To do this, please invoke the setup.py line manually. I.e. if the makefile invoke ENV1=val1 ENV2=val2 python-command python-options arguments you will need to perform the following commands ENV1=val1 ENV2=val2 export ENV1 ENV2 gdb python-command run python-options arguments As a side point, what is the exact gcc version that you are usingq (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Fri Feb 1 22:15:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 14:15:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. 
Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 14:15 Message: Logged In: YES user_id=443851 I've done a full backtrace on it (see gdbout2), but I really don't know how to interpret the results. From what I can tell, the problem lies in this area: #1 0x20f00 in PyString_FromString ( str=0x7e1138
) at Objects/stringobject.c:112 #2 0xad7dc in PyDict_SetItemString (v=0x7e1138, key=0x7e1138
, item=0x17d350) at Objects/dictobject.c:1879 Unfortunately, I can't tell what's going wrong in these source files, and when I tried 'p str' on the var referenced in line #1, I get: $1 = 0x7e1138
which does not explain much to me. I have tried the package at SunFreeWare's site, but my developer needs the 'HTTPSConnection' from 'httplib', which apparently is _not_ built into the sunfreeware package. So, any input, again, would be greatly appreciated. I realise you must be a busy guy, thanks for all of your help & patience! -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 12:26 Message: Logged In: YES user_id=21627 Ok, gcc 3.0.3 itself could be a source of problems, but I won't accuse that compiler prematurely (you might want to try 2.95.x, though, if you have that readily available). As for the gdb analysis: that it crashes is strlen is not the problem; strlen is the innocent C library function that computes the length of the string. Please invoke the command "bt" when it crashes; that should tell you the backtrace (i.e. where strlen is called from) - please report that. If you want to investigate further: "up" brings you up a stack-level, and "p varname" prints a variable. This approach to debugging may take many more rounds, so I'd understand if you are ready to give up (sunfreeware has 2.1.1 binaries). It's just that it builds fine for me (on Solaris 8, using gcc 2.95.2), so I have no clue as to what the problem might be. Did you pass any options to ./configure? ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 08:04 Message: Logged In: YES user_id=443851 Thanks for the gdb tip, I've switched to the solaris7 pkg for gdb. The version info for gcc does not explicitly list what flavor of Solaris it's built for, but the version number is 3.0.3, and it reads it's specs from /usr/local/lib/gcc-lib/sparc-sun- solaris2.7/3.0.3/specs, which leads me to believe that it's built for solaris7. Anywho, after some freaking around with env var's & gdb, I got the following output (see gdbout). It leads me to believe that the problem is in /usr/lib/libc.so.1, but I'm not sure how to replace/update this lib, or even if it is indeed the source of my python misery. Any input or guidance would be appreciated. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. To do this, please invoke the setup.py line manually. I.e. if the makefile invoke ENV1=val1 ENV2=val2 python-command python-options arguments you will need to perform the following commands ENV1=val1 ENV2=val2 export ENV1 ENV2 gdb python-command run python-options arguments As a side point, what is the exact gcc version that you are usingq (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). 
thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Sat Feb 2 00:40:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 16:40:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-511876 ] UserList.__cmp__() raises RuntimeError Message-ID: Bugs item #511876, was opened at 2002-02-01 12:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Closed Resolution: Works For Me Priority: 5 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Guido van Rossum (gvanrossum) Summary: UserList.__cmp__() raises RuntimeError Initial Comment: Summary says it all. The trunk version of this method (i.e. Python 2.2) doesn't raise this exception. Was this even intended? It makes it difficult to write derived classes that work under both Python 2.1.x and Python 2.2. ---------------------------------------------------------------------- >Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-01 16:40 Message: Logged In: YES user_id=12800 Should the 2.2 version of UserList.py then also be raising the exception in its __cmp__()? What confused Jeremy and I was that the version in the release21-maint branch raises the exception, but the version on the trunk does not. It was odd that it seems like the exception was removed for Python 2.2. Hmm, maybe its a bug in 2.2? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-01 12:23 Message: Logged In: YES user_id=6380 It was intentional that __cmp__ raised an error, because it wasn't supposed to be called any more -- as of 2.1, rich comparisons take priority. Try it: if you cmp() a UserList instance in 2.1, you don't get an exception, because __cmp__ isn't called. You only ran into this because you were using UserList as a mix-in class for ExtensionClass, which doesn't support rich comparisons. I don't think it's a bug, and I'm closing it as Works For Me. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 From noreply@sourceforge.net Sat Feb 2 00:49:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 16:49:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-487297 ] Copy from stdout after crash Message-ID: Bugs item #487297, was opened at 2001-11-29 15:46 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=487297&group_id=5470 Category: Macintosh Group: Feature Request Status: Open Resolution: None Priority: 3 Submitted By: Nobody/Anonymous (nobody) Assigned to: Jack Jansen (jackjansen) Summary: Copy from stdout after crash Initial Comment: It would be really nice if one could copy text from the stdout window (e.g. PythonInterpreter.out) after a crash. Apparently this now works in some cases, but still fails after a crash in a delay-console-window applet. I am submitting this to SourceForge as per Jack Jansens' request. 
---------------------------------------------------------------------- Comment By: Jurjen N.E. Bos (jneb) Date: 2002-02-01 16:49 Message: Logged In: YES user_id=446428 May be connected to the bug that I am about to submit on sys.stdout.flush(). - Jurjen ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2001-12-02 13:09 Message: Logged In: YES user_id=45365 Lowered the priority. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=487297&group_id=5470 From noreply@sourceforge.net Sat Feb 2 00:54:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 16:54:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-511992 ] IDE: sys.stdout.flush() doesn't work Message-ID: Bugs item #511992, was opened at 2002-02-01 16:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Jurjen N.E. Bos (jneb) Assigned to: Jack Jansen (jackjansen) Summary: IDE: sys.stdout.flush() doesn't work Initial Comment: Using: Mac OS X 10.1.1, Python 2.2 for Carbon. Problem: def t(n): for i in range(10): for j in range(n): pass sys.stdout.write(`i`) sys.stdout.write("\n") sys.stdout.flush() t(100000) #depends on your system's performance If you run this in an IDE window, you would expect that the digits appear with regular intervals. In fact, they appear all together. This is quite irritating, since it frustrates "real time" output. Jack asked me to assign it to Just. Jurjen ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 From noreply@sourceforge.net Sat Feb 2 02:35:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 18:35:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-511876 ] UserList.__cmp__() raises RuntimeError Message-ID: Bugs item #511876, was opened at 2002-02-01 12:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Closed Resolution: Works For Me Priority: 5 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Guido van Rossum (gvanrossum) Summary: UserList.__cmp__() raises RuntimeError Initial Comment: Summary says it all. The trunk version of this method (i.e. Python 2.2) doesn't raise this exception. Was this even intended? It makes it difficult to write derived classes that work under both Python 2.1.x and Python 2.2. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-01 18:35 Message: Logged In: YES user_id=6380 No, in 2.2 __cmp__, if available, acts as an optimization. Read the cvs logs if you want to know all the details. ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-01 16:40 Message: Logged In: YES user_id=12800 Should the 2.2 version of UserList.py then also be raising the exception in its __cmp__()? What confused Jeremy and I was that the version in the release21-maint branch raises the exception, but the version on the trunk does not. It was odd that it seems like the exception was removed for Python 2.2. 
Hmm, maybe its a bug in 2.2? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-01 12:23 Message: Logged In: YES user_id=6380 It was intentional that __cmp__ raised an error, because it wasn't supposed to be called any more -- as of 2.1, rich comparisons take priority. Try it: if you cmp() a UserList instance in 2.1, you don't get an exception, because __cmp__ isn't called. You only ran into this because you were using UserList as a mix-in class for ExtensionClass, which doesn't support rich comparisons. I don't think it's a bug, and I'm closing it as Works For Me. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511876&group_id=5470 From noreply@sourceforge.net Sat Feb 2 03:25:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 01 Feb 2002 19:25:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-512007 ] make test failure on sunos5 Message-ID: Bugs item #512007, was opened at 2002-02-01 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512007&group_id=5470 Category: Build Group: Python 2.2 Status: Open Resolution: None Priority: 7 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Nobody/Anonymous (nobody) Summary: make test failure on sunos5 Initial Comment: I don't have time to dig into this right now, but this shouldn't get lost. I just tried to build and test current 2.2+ cvs on a SunOS 5.8 box on the SourceForge compile farm. This may be shallow, but here are the results: bash-2.03$ uname -a SunOS usf-cf-sparc-solaris-2 5.8 Generic_108528-11 sun4u sparc SUNW,Ultra-60 [...] 163 tests OK. 4 tests failed: test_pwd test_socket test_sundry test_urllib2 20 tests skipped: test___all__ test_al test_asynchat test_bsddb test_cd test_cl test_curses test_gdbm test_gl test_imgfile test_linuxaudiodev test_minidom test_openpty test_pyexpat test_sax test_socket_ssl test_socketserver test_sunaudiodev test_winreg test_winsound Ask someone to teach regrtest.py about which tests are expected to get skipped on sunos5. *** Error code 1 make: Fatal error: Command failed for target `test' ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512007&group_id=5470 From noreply@sourceforge.net Sat Feb 2 11:38:35 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Feb 2002 03:38:35 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-02 03:38 Message: Logged In: YES user_id=21627 That looks very much like a miscompilation. Notice that initreadline is supposed to pass a first string of "readline" to InitModule4; in the gdb backtrace, we see an empty string. Likewise, the invalid address comes from the methods argument to InitModule4, fetching ml_name. These are all static strings, compiled into an array (namely, readline.c:readline_methods). So I really recommend to downgrade the compiler (or use the Sun system compiler if you have it); if you are interested in a work-around, here are two options: - build readline statically into the Python interpreter. Do so by uncommenting the readline line in Modules/Setup (adding libraries as necessary) - do not build the readline module at all; do so by adding 'readline' into setup.py:disabled_module_list. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 14:15 Message: Logged In: YES user_id=443851 I've done a full backtrace on it (see gdbout2), but I really don't know how to interpret the results. From what I can tell, the problem lies in this area: #1 0x20f00 in PyString_FromString ( str=0x7e1138
) at Objects/stringobject.c:112 #2 0xad7dc in PyDict_SetItemString (v=0x7e1138, key=0x7e1138
, item=0x17d350) at Objects/dictobject.c:1879 Unfortunately, I can't tell what's going wrong in these source files, and when I tried 'p str' on the var referenced in line #1, I get: $1 = 0x7e1138
which does not explain much to me. I have tried the package at SunFreeWare's site, but my developer needs the 'HTTPSConnection' from 'httplib', which apparently is _not_ built into the sunfreeware package. So, any input, again, would be greatly appreciated. I realise you must be a busy guy, thanks for all of your help & patience! -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 12:26 Message: Logged In: YES user_id=21627 Ok, gcc 3.0.3 itself could be a source of problems, but I won't accuse that compiler prematurely (you might want to try 2.95.x, though, if you have that readily available). As for the gdb analysis: that it crashes is strlen is not the problem; strlen is the innocent C library function that computes the length of the string. Please invoke the command "bt" when it crashes; that should tell you the backtrace (i.e. where strlen is called from) - please report that. If you want to investigate further: "up" brings you up a stack-level, and "p varname" prints a variable. This approach to debugging may take many more rounds, so I'd understand if you are ready to give up (sunfreeware has 2.1.1 binaries). It's just that it builds fine for me (on Solaris 8, using gcc 2.95.2), so I have no clue as to what the problem might be. Did you pass any options to ./configure? ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 08:04 Message: Logged In: YES user_id=443851 Thanks for the gdb tip, I've switched to the solaris7 pkg for gdb. The version info for gcc does not explicitly list what flavor of Solaris it's built for, but the version number is 3.0.3, and it reads it's specs from /usr/local/lib/gcc-lib/sparc-sun- solaris2.7/3.0.3/specs, which leads me to believe that it's built for solaris7. Anywho, after some freaking around with env var's & gdb, I got the following output (see gdbout). It leads me to believe that the problem is in /usr/lib/libc.so.1, but I'm not sure how to replace/update this lib, or even if it is indeed the source of my python misery. Any input or guidance would be appreciated. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. To do this, please invoke the setup.py line manually. I.e. if the makefile invoke ENV1=val1 ENV2=val2 python-command python-options arguments you will need to perform the following commands ENV1=val1 ENV2=val2 export ENV1 ENV2 gdb python-command run python-options arguments As a side point, what is the exact gcc version that you are usingq (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). 
thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Sat Feb 2 11:42:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 02 Feb 2002 03:42:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-512007 ] make test failure on sunos5 Message-ID: Bugs item #512007, was opened at 2002-02-01 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512007&group_id=5470 Category: Build Group: Python 2.2 Status: Open Resolution: None Priority: 7 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Nobody/Anonymous (nobody) Summary: make test failure on sunos5 Initial Comment: I don't have time to dig into this right now, but this shouldn't get lost. I just tried to build and test current 2.2+ cvs on a SunOS 5.8 box on the SourceForge compile farm. This may be shallow, but here are the results: bash-2.03$ uname -a SunOS usf-cf-sparc-solaris-2 5.8 Generic_108528-11 sun4u sparc SUNW,Ultra-60 [...] 163 tests OK. 4 tests failed: test_pwd test_socket test_sundry test_urllib2 20 tests skipped: test___all__ test_al test_asynchat test_bsddb test_cd test_cl test_curses test_gdbm test_gl test_imgfile test_linuxaudiodev test_minidom test_openpty test_pyexpat test_sax test_socket_ssl test_socketserver test_sunaudiodev test_winreg test_winsound Ask someone to teach regrtest.py about which tests are expected to get skipped on sunos5. *** Error code 1 make: Fatal error: Command failed for target `test' ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-02 03:42 Message: Logged In: YES user_id=21627 The "skipped on platform xy" approach is inherently broken. minidom and pyexpat are not necessarily skipped; they are only skipped if no expat library was found during build. Likewise, the gdbm tests are skipped if no gdbm is installed. Furthermore, "on sunos5" says nearly nothing; Solaris 2.3 classifies as sunos5 just as well as Solaris8, yet Solaris 8 has many more functions built-in. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512007&group_id=5470 From noreply@sourceforge.net Sun Feb 3 13:51:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 05:51:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-501164 ] 2.2 on linux SEGV sometimes Message-ID: Bugs item #501164, was opened at 2002-01-08 19:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: MATSUI Tetsushi (tetsushi) Assigned to: Nobody/Anonymous (nobody) Summary: 2.2 on linux SEGV sometimes Initial Comment: I am using Python 2.2. The execution with pure python scripts suddenly stops after several hours or a few days. With the latest core I run gdb, it says: Program terminated with signal 11, Segmentation fault. 
and the head of bt is like this: #0 0x80afb1e in binary_op1 (v=0x8dc0f54, w=0x8c641bc, op_slot=4) at Objects/abstract.c:340 #1 0x80b2537 in PyNumber_Subtract (v=0x8dc0f54, w=0x8c641bc) at Objects/abstract.c:392 #2 0x8079f27 in eval_frame (f=0x820c1fc) at Python/ceval.c:988 #3 0x807cd50 in PyEval_EvalCodeEx (co=0x81cf608, globals=0x81d5214, locals=0x0, args=0x8202fc4, argcount=5, kws=0x8202fd8, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2574 #4 0x807f41c in fast_function (func=0x81e4584, pp_stack=0xbfffe474, n=5, na=5, nk=0) at Python/ceval.c:3150 Thanks, tetsushi ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-02-01 04:31 Message: Logged In: YES user_id=421269 I changed my gcc back to 2.95.3 from 3.0.3. And I have not experienced segmentation fault since then. Thus I conclude the problem is in gcc 3.0.x and Python is innocent. Thank you very much. ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-16 08:44 Message: Logged In: YES user_id=421269 I tried to reproduce SEGV. from alib import * for i in range(10000,50000): n=(i**7-1)/(i-1) if isprime(n): continue print n,MPQS(n).run() The above script stopped when i was 17359. It took about 1 day on my PC. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-14 14:21 Message: Logged In: YES user_id=21627 I cannot reproduce this: >>> from alib import * >>> MPQS(30).run() starting MPQS 10 {10: 1, 3: 1}>>> MPQS(3000000000000000000000000000000000).run() starting MPQS 1000000000000000000000000000000000 {1000000000000000000000000000000000L: 1, 3: 1} Can you please give the *precise* sequence of commands to make this crash? ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-08 23:00 Message: Logged In: YES user_id=421269 OK, I attach the main script. (Maybe the 1659-th line is the stopping point.) It consists of many factoring or primality testing functions and classes, and the stopping point I suspect is in the class MPQS. To run the algorithm MPQS(n).run() where n is about 30 decimal digit composite. The length of stack trace is 53. The last 3 are like this: #50 0x8053fcb in Py_Main (argc=5, argv=0xbffff644) at Modules/main.c:369 #51 0x8053a47 in main (argc=5, argv=0xbffff644) at Modules/python.c:10 #52 0x4004ca49 in Letext () Thanks, tetsushi. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-08 20:00 Message: Logged In: YES user_id=6380 Can you attach the script, any input data it needs, and instructions for running it? Otherwise there's no hope in debugging this. Also, how long is the stack? Could it be a stack overflow? 
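On Guido's stack-overflow question: the Python-level frame depth is capped by the interpreter's recursion limit, which can be inspected from the interpreter itself. A minimal sketch (Python 2 syntax, purely illustrative; it bounds Python frames only, not the C stack that the gdb backtrace walks):

    import sys

    # Exceeding the recursion limit raises RuntimeError rather than crashing,
    # which is one reason a plain Python-level overflow looks unlikely here.
    print sys.getrecursionlimit()
    sys.setrecursionlimit(2000)    # illustrative; would raise the cap if depth were the issue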
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 From noreply@sourceforge.net Sun Feb 3 17:36:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 09:36:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-512433 ] Quote handling in os.system & os.popen Message-ID: Bugs item #512433, was opened at 2002-02-03 09:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512433&group_id=5470 Category: Windows Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jimmy Retzlaff (jretz) Assigned to: Tim Peters (tim_one) Summary: Quote handling in os.system & os.popen Initial Comment: On Python 2.2 under Windows XP: os.system('"notepad" "test.py"') does not work as expected. It appears that os.system attempts to run: notepad" "test.py A workaround is to use: os.system('""notepad" "test.py""') Both of the following work as expected: os.system('notepad "test.py"') os.system('"notepad" test.py') os.popen exhibits the same behaviour. In naive testing, the following hack seems to make things better: os_system = os.system os.system = lambda command: os_system('"%s"' % command) This may suggest a potential fix in the C code - or it may simply offend the sensibilities of those more knowledgeable than me. :) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512433&group_id=5470 From noreply@sourceforge.net Sun Feb 3 21:54:31 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 13:54:31 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512490 ] muli-line comment suggestion Message-ID: Feature Requests item #512490, was opened at 2002-02-03 13:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512490&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: muli-line comment suggestion Initial Comment: Rather than having to type # before every comment statement of a multi-line comment block, e.g. # comment line 1 # comment line 2 ... # comment line n perhaps you could implement this: #: comment line 1 comment line 2 ... comment line n Just a thought... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512490&group_id=5470 From noreply@sourceforge.net Sun Feb 3 22:04:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 14:04:54 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512494 ] multi-line comment block clarification Message-ID: Feature Requests item #512494, was opened at 2002-02-03 14:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line comment block clarification Initial Comment: The previous post did not show the indenting for the multi-line comment block. What I meant was this #: Comment line 1 Comment line 2 ... Comment line n Whatever. It's just an idea. 
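As background for the two comment-block requests above: Python has no dedicated multi-line comment syntax, so the usual idioms are consecutive hash lines or, where a statement is allowed, a bare triple-quoted string that is evaluated and immediately discarded. A small sketch (the function and comment text are just illustrative):

    # Idiom 1: one hash per line.
    # comment line 1
    # comment line 2

    def example():
        """The first string in a function body is a docstring, not a comment."""
        # Idiom 2: a bare string literal used as a block comment; the
        # interpreter evaluates it as an expression statement and throws
        # the result away.
        """
        comment line 1
        comment line 2
        """
        return 1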
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 From noreply@sourceforge.net Sun Feb 3 22:15:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 14:15:59 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512497 ] multi-line print statement Message-ID: Feature Requests item #512497, was opened at 2002-02-03 14:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line print statement Initial Comment: Similar to the multi-line comment block suggestion, instead of using \ to say the line continues use print: "line 1" "line 2" ... "line n" Ok, then...thanks ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 From noreply@sourceforge.net Mon Feb 4 06:22:29 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 03 Feb 2002 22:22:29 -0800 Subject: [Python-bugs-list] [ python-Bugs-501164 ] 2.2 on linux SEGV sometimes Message-ID: Bugs item #501164, was opened at 2002-01-08 19:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Closed Resolution: Works For Me Priority: 5 Submitted By: MATSUI Tetsushi (tetsushi) Assigned to: Nobody/Anonymous (nobody) Summary: 2.2 on linux SEGV sometimes Initial Comment: I am using Python 2.2. The execution with pure python scripts suddenly stops after several hours or a few days. With the latest core I run gdb, it says: Program terminated with signal 11, Segmentation fault. and the head of bt is like this: #0 0x80afb1e in binary_op1 (v=0x8dc0f54, w=0x8c641bc, op_slot=4) at Objects/abstract.c:340 #1 0x80b2537 in PyNumber_Subtract (v=0x8dc0f54, w=0x8c641bc) at Objects/abstract.c:392 #2 0x8079f27 in eval_frame (f=0x820c1fc) at Python/ceval.c:988 #3 0x807cd50 in PyEval_EvalCodeEx (co=0x81cf608, globals=0x81d5214, locals=0x0, args=0x8202fc4, argcount=5, kws=0x8202fd8, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2574 #4 0x807f41c in fast_function (func=0x81e4584, pp_stack=0xbfffe474, n=5, na=5, nk=0) at Python/ceval.c:3150 Thanks, tetsushi ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-03 22:22 Message: Logged In: YES user_id=31435 Thank you for the followup! I don't think we know anything about what caused your problem (I played with it some a few weeks ago and didn't see any problems, BTW). If it's a shy memory corruption bug, it may be anywhere (glibc, Python, off-by-1 code generation, bad spot on your disk, loose connection on a disk controller, bad bit in a RAM chip, ...), and just moving memory around a little may make it appear to go away. These can be very, very hard to track down. Keep a close eye on your results! ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-02-01 04:31 Message: Logged In: YES user_id=421269 I changed my gcc back to 2.95.3 from 3.0.3. And I have not experienced segmentation fault since then. 
Thus I conclude the problem is in gcc 3.0.x and Python is innocent. Thank you very much. ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-16 08:44 Message: Logged In: YES user_id=421269 I tried to reproduce SEGV. from alib import * for i in range(10000,50000): n=(i**7-1)/(i-1) if isprime(n): continue print n,MPQS(n).run() The above script stopped when i was 17359. It took about 1 day on my PC. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-14 14:21 Message: Logged In: YES user_id=21627 I cannot reproduce this: >>> from alib import * >>> MPQS(30).run() starting MPQS 10 {10: 1, 3: 1}>>> MPQS(3000000000000000000000000000000000).run() starting MPQS 1000000000000000000000000000000000 {1000000000000000000000000000000000L: 1, 3: 1} Can you please give the *precise* sequence of commands to make this crash? ---------------------------------------------------------------------- Comment By: MATSUI Tetsushi (tetsushi) Date: 2002-01-08 23:00 Message: Logged In: YES user_id=421269 OK, I attach the main script. (Maybe the 1659-th line is the stopping point.) It consists of many factoring or primality testing functions and classes, and the stopping point I suspect is in the class MPQS. To run the algorithm MPQS(n).run() where n is about 30 decimal digit composite. The length of stack trace is 53. The last 3 are like this: #50 0x8053fcb in Py_Main (argc=5, argv=0xbffff644) at Modules/main.c:369 #51 0x8053a47 in main (argc=5, argv=0xbffff644) at Modules/python.c:10 #52 0x4004ca49 in Letext () Thanks, tetsushi. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-08 20:00 Message: Logged In: YES user_id=6380 Can you attach the script, any input data it needs, and instructions for running it? Otherwise there's no hope in debugging this. Also, how long is the stack? Could it be a stack overflow? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501164&group_id=5470 From noreply@sourceforge.net Mon Feb 4 09:05:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 01:05:23 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? 
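For readers who don't use MacPython: findertools wraps Finder Apple events, and the three calls reported broken are used roughly as below. A sketch only, with hypothetical classic-Mac-style paths; it illustrates the calls, not the fix:

    import findertools

    findertools.reveal("HD:Documents:somefile.txt")   # ask the Finder to reveal the item
    findertools.select("HD:Documents:somefile.txt")   # select it in its Finder window
    findertools.update("HD:Documents:")               # force the enclosing window to update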
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Mon Feb 4 10:02:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 02:02:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-511992 ] IDE: sys.stdout.flush() doesn't work Message-ID: Bugs item #511992, was opened at 2002-02-01 16:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Jurjen N.E. Bos (jneb) >Assigned to: Just van Rossum (jvr) Summary: IDE: sys.stdout.flush() doesn't work Initial Comment: Using: Mac OS X 10.1.1, Python 2.2 for Carbon. Problem: def t(n): for i in range(10): for j in range(n): pass sys.stdout.write(`i`) sys.stdout.write("\n") sys.stdout.flush() t(100000) #depends on your system's performance If you run this in an IDE window, you would expect that the digits appear with regular intervals. In fact, they appear all together. This is quite irritating, since it frustrates "real time" output. Jack asked me to assign it to Just. Jurjen ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:02 Message: Logged In: YES user_id=45365 Assigning to Just, this is his baby. My guess is that flush() in the text widget class needs a bit more extra magic so that Waste actually updates the text widget in stead of waiting for the next update event. But that's pure guesswork:-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 From noreply@sourceforge.net Mon Feb 4 10:11:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 02:11:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:11 Message: Logged In: YES user_id=45365 Attach the patch to this bug report. Context-diff style patches are preferred, but as these are difficult to create an a pre-OSX mac I'll accept anything from you:-) Don't forget to check the checkmark ("Check to upload and attach file") when you add the patch. 
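On the context-diff request just above: when no diff tool is handy, a context-style patch can also be generated from Python. A sketch under two assumptions -- a saved copy of the original file exists, and a Python newer than the 2.2 under discussion is available, since difflib.context_diff was added later:

    import difflib

    old = open("findertools.py.orig").readlines()   # hypothetical saved original
    new = open("findertools.py").readlines()        # the edited module
    for line in difflib.context_diff(old, new,
                                     "findertools.py.orig", "findertools.py"):
        print line,    # readlines() keeps the newlines, hence the trailing comma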
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Mon Feb 4 10:28:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 02:28:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? ---------------------------------------------------------------------- >Comment By: Erik van Blokland (letterror) Date: 2002-02-04 02:28 Message: Logged In: YES user_id=448095 Patch for findertools.reveal() et. al. Tested on OS9.1 and 10.1.2. The file is the entire findertools module as I'm not yet familiar with generating the required diffs. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:11 Message: Logged In: YES user_id=45365 Attach the patch to this bug report. Context-diff style patches are preferred, but as these are difficult to create an a pre-OSX mac I'll accept anything from you:-) Don't forget to check the checkmark ("Check to upload and attach file") when you add the patch. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Mon Feb 4 12:46:34 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 04:46:34 -0800 Subject: [Python-bugs-list] [ python-Bugs-511992 ] IDE: sys.stdout.flush() doesn't work Message-ID: Bugs item #511992, was opened at 2002-02-01 16:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 Category: Macintosh Group: Platform-specific >Status: Closed Resolution: None Priority: 5 Submitted By: Jurjen N.E. Bos (jneb) Assigned to: Just van Rossum (jvr) Summary: IDE: sys.stdout.flush() doesn't work Initial Comment: Using: Mac OS X 10.1.1, Python 2.2 for Carbon. Problem: def t(n): for i in range(10): for j in range(n): pass sys.stdout.write(`i`) sys.stdout.write("\n") sys.stdout.flush() t(100000) #depends on your system's performance If you run this in an IDE window, you would expect that the digits appear with regular intervals. In fact, they appear all together. This is quite irritating, since it frustrates "real time" output. Jack asked me to assign it to Just. Jurjen ---------------------------------------------------------------------- >Comment By: Just van Rossum (jvr) Date: 2002-02-04 04:46 Message: Logged In: YES user_id=92689 The problem was that the window's pixel buffer wasn't explicitly flushed. This happens automatically when you run an event loop, but you need to do it by hand if you want to see the results before handing control back to the event loop. 
Fixed in CVS, both for the output window as the interactive console. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:02 Message: Logged In: YES user_id=45365 Assigning to Just, this is his baby. My guess is that flush() in the text widget class needs a bit more extra magic so that Waste actually updates the text widget in stead of waiting for the next update event. But that's pure guesswork:-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511992&group_id=5470 From noreply@sourceforge.net Mon Feb 4 18:21:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 10:21:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-512871 ] Installation instructions are wrong Message-ID: Bugs item #512871, was opened at 2002-02-04 10:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512871&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jon Ribbens (jribbens) Assigned to: Nobody/Anonymous (nobody) Summary: Installation instructions are wrong Initial Comment: The README file's installation instructions in Python 2.2 are wrong. The Modules/Setup file has changed considerably in purpose between Python 2.0 and Python 2.2, but the instructions are identical. There needs to be some wording to the effect that Modules/Setup is only for configuring where to look for libraries, etc, and actually everything that's commented out in Modules/Setup will be included anyway by some magic means. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512871&group_id=5470 From noreply@sourceforge.net Mon Feb 4 22:02:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 14:02:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-510868 ] Solaris 2.7 make chokes. Message-ID: Bugs item #510868, was opened at 2002-01-30 11:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sharad Satsangi (sharadinfozen) Assigned to: Nobody/Anonymous (nobody) Summary: Solaris 2.7 make chokes. Initial Comment: I'm building python2.2 on a Solaris2.7 box, an Ultra- 10. I get a segmentation fault error at 'xreadlines' when I try the make. I am not sure why. Logs of the configuration script & make are attached. (in one concatenated file, I could not tell how to upload more than one file). Any help will be greatly appreciated. thanks! -sharad. ---------------------------------------------------------------------- >Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-04 14:02 Message: Logged In: YES user_id=443851 I tried downgrading gcc to 2.95.3, however, it still craps out at the same place, and according to backtrace, it is still quitting in the same place. I've attached the logs. We've successfully built python on another box, keeping the project that was in jeopardy moving forward, however, I would still very much like to find out how to install python correctly on this problem box. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-02 03:38 Message: Logged In: YES user_id=21627 That looks very much like a miscompilation. Notice that initreadline is supposed to pass a first string of "readline" to InitModule4; in the gdb backtrace, we see an empty string. Likewise, the invalid address comes from the methods argument to InitModule4, fetching ml_name. These are all static strings, compiled into an array (namely, readline.c:readline_methods). So I really recommend to downgrade the compiler (or use the Sun system compiler if you have it); if you are interested in a work-around, here are two options: - build readline statically into the Python interpreter. Do so by uncommenting the readline line in Modules/Setup (adding libraries as necessary) - do not build the readline module at all; do so by adding 'readline' into setup.py:disabled_module_list. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 14:15 Message: Logged In: YES user_id=443851 I've done a full backtrace on it (see gdbout2), but I really don't know how to interpret the results. From what I can tell, the problem lies in this area: #1 0x20f00 in PyString_FromString ( str=0x7e1138
) at Objects/stringobject.c:112 #2 0xad7dc in PyDict_SetItemString (v=0x7e1138, key=0x7e1138
, item=0x17d350) at Objects/dictobject.c:1879 Unfortunately, I can't tell what's going wrong in these source files, and when I tried 'p str' on the var referenced in line #1, I get: $1 = 0x7e1138
which does not explain much to me. I have tried the package at SunFreeWare's site, but my developer needs the 'HTTPSConnection' from 'httplib', which apparently is _not_ built into the sunfreeware package. So, any input, again, would be greatly appreciated. I realise you must be a busy guy, thanks for all of your help & patience! -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 12:26 Message: Logged In: YES user_id=21627 Ok, gcc 3.0.3 itself could be a source of problems, but I won't accuse that compiler prematurely (you might want to try 2.95.x, though, if you have that readily available). As for the gdb analysis: that it crashes is strlen is not the problem; strlen is the innocent C library function that computes the length of the string. Please invoke the command "bt" when it crashes; that should tell you the backtrace (i.e. where strlen is called from) - please report that. If you want to investigate further: "up" brings you up a stack-level, and "p varname" prints a variable. This approach to debugging may take many more rounds, so I'd understand if you are ready to give up (sunfreeware has 2.1.1 binaries). It's just that it builds fine for me (on Solaris 8, using gcc 2.95.2), so I have no clue as to what the problem might be. Did you pass any options to ./configure? ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-02-01 08:04 Message: Logged In: YES user_id=443851 Thanks for the gdb tip, I've switched to the solaris7 pkg for gdb. The version info for gcc does not explicitly list what flavor of Solaris it's built for, but the version number is 3.0.3, and it reads it's specs from /usr/local/lib/gcc-lib/sparc-sun- solaris2.7/3.0.3/specs, which leads me to believe that it's built for solaris7. Anywho, after some freaking around with env var's & gdb, I got the following output (see gdbout). It leads me to believe that the problem is in /usr/lib/libc.so.1, but I'm not sure how to replace/update this lib, or even if it is indeed the source of my python misery. Any input or guidance would be appreciated. thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-01 02:36 Message: Logged In: YES user_id=21627 It would be good if you could analyse this with gdb further. I recommend to obtain a more recent copy of gdb (e.g. gdb 5.0), in particular one compiled for your system (the one you have is compiled for Solaris 2.4). You can get get binaries from sunfreeware.com (although they don't have gdb 5 for Solaris 7; you might want to try the 4.18 that they do have). The important thing is that you need to run the setup.py under gdb. To do this, please invoke the setup.py line manually. I.e. if the makefile invoke ENV1=val1 ENV2=val2 python-command python-options arguments you will need to perform the following commands ENV1=val1 ENV2=val2 export ENV1 ENV2 gdb python-command run python-options arguments As a side point, what is the exact gcc version that you are usingq (gcc -v)? If that also is not a gcc for Solaris 7, I recommend to re-install the compiler, or use the system compiler. ---------------------------------------------------------------------- Comment By: Sharad Satsangi (sharadinfozen) Date: 2002-01-31 13:09 Message: Logged In: YES user_id=443851 I did try gdb on the python binary, but got nothing interesting (you can see in the file gdbpyth). 
thanks, -sharad. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-31 09:35 Message: Logged In: YES user_id=21627 Can you attach to Python with gdb and see why it crashes? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=510868&group_id=5470 From noreply@sourceforge.net Mon Feb 4 23:17:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 15:17:01 -0800 Subject: [Python-bugs-list] [ python-Bugs-513033 ] unsafe call to PyThreadState_Swap Message-ID: Bugs item #513033, was opened at 2002-02-04 15:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Jake McGuire (jamcguir) Assigned to: Nobody/Anonymous (nobody) Summary: unsafe call to PyThreadState_Swap Initial Comment: It appears that there is a blatantly unsafe call to PyThreadState_Swap in the functions on_hook and on_completer in Modules/Readline.c The diff adding these calls is viewable at http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Modules/readline .c.diff?r1=2.5&r2=2.6&only_with_tag=MAIN The call to PyThreadState_Swap is added directly below a comment pointing out that readline() is called with the interpreter lock released. Viewing the code shows that the interpreter lock is indeed released before calling readline (in myreadline.c). Multithreaded programs that define callback functions suffer from intermittent crashes, often Py_FatalError- ing claiming "tstate mix-up" from ceval.c Removing the calls to PyThreadState_Swap makes these problems go away. Can someone explain how the call to PyThreadState_Swap is indeed the right thing to be doing? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 From noreply@sourceforge.net Mon Feb 4 23:41:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 15:41:41 -0800 Subject: [Python-bugs-list] [ python-Bugs-513033 ] unsafe call to PyThreadState_Swap Message-ID: Bugs item #513033, was opened at 2002-02-04 15:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Jake McGuire (jamcguir) >Assigned to: Guido van Rossum (gvanrossum) Summary: unsafe call to PyThreadState_Swap Initial Comment: It appears that there is a blatantly unsafe call to PyThreadState_Swap in the functions on_hook and on_completer in Modules/Readline.c The diff adding these calls is viewable at http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Modules/readline .c.diff?r1=2.5&r2=2.6&only_with_tag=MAIN The call to PyThreadState_Swap is added directly below a comment pointing out that readline() is called with the interpreter lock released. Viewing the code shows that the interpreter lock is indeed released before calling readline (in myreadline.c). Multithreaded programs that define callback functions suffer from intermittent crashes, often Py_FatalError- ing claiming "tstate mix-up" from ceval.c Removing the calls to PyThreadState_Swap makes these problems go away. 
Can someone explain how the call to PyThreadState_Swap is indeed the right thing to be doing? ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-04 15:41 Message: Logged In: YES user_id=31435 Guido's checkin comment said: """ Darn. When thread support is disabled, the BEGIN/END macros don't save and restore the tstate, but explicitly calling PyEval_SaveThread() does reset it! While I think about how to fix this for real, here's a fix that avoids getting a fatal error. """ Therefore I assigned the bug to Guido . It would help if you could describe a specific simple scenario that provokes the problems you're seeing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 From noreply@sourceforge.net Mon Feb 4 23:55:37 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 04 Feb 2002 15:55:37 -0800 Subject: [Python-bugs-list] [ python-Bugs-513033 ] unsafe call to PyThreadState_Swap Message-ID: Bugs item #513033, was opened at 2002-02-04 15:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Jake McGuire (jamcguir) Assigned to: Guido van Rossum (gvanrossum) Summary: unsafe call to PyThreadState_Swap Initial Comment: It appears that there is a blatantly unsafe call to PyThreadState_Swap in the functions on_hook and on_completer in Modules/Readline.c The diff adding these calls is viewable at http://cvs.sourceforge.net/cgi- bin/viewcvs.cgi/python/python/dist/src/Modules/readline .c.diff?r1=2.5&r2=2.6&only_with_tag=MAIN The call to PyThreadState_Swap is added directly below a comment pointing out that readline() is called with the interpreter lock released. Viewing the code shows that the interpreter lock is indeed released before calling readline (in myreadline.c). Multithreaded programs that define callback functions suffer from intermittent crashes, often Py_FatalError- ing claiming "tstate mix-up" from ceval.c Removing the calls to PyThreadState_Swap makes these problems go away. Can someone explain how the call to PyThreadState_Swap is indeed the right thing to be doing? ---------------------------------------------------------------------- >Comment By: Jake McGuire (jamcguir) Date: 2002-02-04 15:55 Message: Logged In: YES user_id=448911 Unfortunately, the scenario isn't really *simple*. I think it goes like this: Thread A defines a readline startup hook. Thread A calls PyOS_Readline() in myreadline.c Thread A calls Py_BEGIN_ALLOW_THREADS, saving its thread state and setting the global thread state to NULL. Thread A calls readline. Thread A gets blocked, and Thread B gets scheduled. Thread B grabs the global interpreter lock, and restores its thread state. Thread B gets suspended, and Thread A gets scheduled. -- note: Thread B has the intepreter lock -- Thread A calls PyThreadState_Swap in on_hook(), setting the current global thread state to NULL Thread A calls PyEval_RestoreThread, which blocks waiting for the global interpreter lock Thread B gets scheduled, tries to run, but finds that the global thread state is NULL. Bad things happen. Proposed solution: Change Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS in myreadline.c:PyOS_Readline to calls to PyEval_SaveThread and PyEval_RestoreThread. 
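For concreteness, a rough Python-level sketch of the kind of program the scenario above describes: one thread installs a readline startup hook and blocks in raw_input() (which goes through PyOS_Readline and releases the interpreter lock), while a second thread keeps running Python code. Names and timings are illustrative only; the crash itself is intermittent and scheduler-dependent:

    import readline, thread, time

    def startup_hook():
        # Called back by GNU readline while PyOS_Readline has dropped the
        # interpreter lock -- the point where the thread-state swap happens.
        return None

    def worker():
        # A second thread that repeatedly acquires the interpreter lock.
        while 1:
            [i * i for i in range(1000)]
            time.sleep(0.01)

    readline.set_startup_hook(startup_hook)
    thread.start_new_thread(worker, ())
    line = raw_input("prompt> ")    # blocks in readline() with the lock released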
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-04 15:41 Message: Logged In: YES user_id=31435 Guido's checkin comment said: """ Darn. When thread support is disabled, the BEGIN/END macros don't save and restore the tstate, but explicitly calling PyEval_SaveThread() does reset it! While I think about how to fix this for real, here's a fix that avoids getting a fatal error. """ Therefore I assigned the bug to Guido . It would help if you could describe a specific simple scenario that provokes the problems you're seeing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513033&group_id=5470 From noreply@sourceforge.net Tue Feb 5 14:11:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 06:11:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-511655 ] Readline: unwanted filename completion Message-ID: Bugs item #511655, was opened at 2002-02-01 02:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Tjabo Kloppenburg (tapo) Assigned to: Nobody/Anonymous (nobody) Summary: Readline: unwanted filename completion Initial Comment: Hi all. Something is broken with the completion of readline: simon@ping-pong:~$ python Python 2.1.1+ (#1, Jan 8 2002, 00:37:12) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> import rlcompleter >>> rlcompleter.readline.parse_and_bind ("tab: complete") >>> foo foo.gif foo.txt foo2.gif foobar.jpg >>> foo.gif Traceback (most recent call last): File "", line 1, in ? NameError: name 'foo' is not defined [the "foo.gif", "foo.txt", "foo2.gif" and "foobar.jpg" are files in my current working directory] It seems that readline has a fallback to filename completion when no matches are available. Even if I use my own completion function: >>> def nullcompleter (text, state): ... print "\nBuh!" ... return None ... >>> rlcompleter.readline.set_completer(nullcompleter) >>> foo Buh! Buh! foo.gif foo.txt foo2.gif foobar.jpg foot >>> foo there is this filename fallback. Is this a known Problem? Is there an evil hack to avoid this? Thanks, Simon -- Simon.Budig@unix-ag.org http://www.home.unix-ag.org/simon/ ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-05 06:11 Message: Logged In: YES user_id=6656 I've submitted Simon's patch to fix this as patch #513235. ---------------------------------------------------------------------- Comment By: Tjabo Kloppenburg (tapo) Date: 2002-02-01 03:03 Message: Logged In: YES user_id=309048 simon is a friend of mine. He tried to submit the bug without sourceforge account, but he failed. 
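For contrast with the null completer in the report: readline calls the registered completer with state = 0, 1, 2, ... until it returns None, and the unwanted filename completion described above kicks in precisely when the completer produces no match at all for the given prefix. A sketch of an ordinary completer (the word list is made up for illustration):

    import readline

    WORDS = ['spam', 'spamalot', 'eggs']    # hypothetical candidates

    def complete(text, state):
        # Called repeatedly with increasing state until None is returned.
        matches = [w for w in WORDS if w.startswith(text)]
        if state < len(matches):
            return matches[state]
        # Returning None with no matches at all is the case where
        # readline falls back to completing on filenames.
        return None

    readline.parse_and_bind('tab: complete')
    readline.set_completer(complete)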
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511655&group_id=5470 From noreply@sourceforge.net Tue Feb 5 21:38:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 13:38:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 13:38 Message: Logged In: YES user_id=45365 Erik, I played with your patch for a while (because I preferred to "re-educate" the Finder suite over the lowlevel code in your patch), and at some point I tried the original findertools that's in the repository, and lo and behold: it works wonderfully for me! Both on OS9 and OSX! What exactly is the problem your experiencing, and with which Python? (What I did change is that I added Unicode support to aepack, so the ugly Unknown('utxt', ...) are now unicode strings on OSX). ---------------------------------------------------------------------- Comment By: Erik van Blokland (letterror) Date: 2002-02-04 02:28 Message: Logged In: YES user_id=448095 Patch for findertools.reveal() et. al. Tested on OS9.1 and 10.1.2. The file is the entire findertools module as I'm not yet familiar with generating the required diffs. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:11 Message: Logged In: YES user_id=45365 Attach the patch to this bug report. Context-diff style patches are preferred, but as these are difficult to create an a pre-OSX mac I'll accept anything from you:-) Don't forget to check the checkmark ("Check to upload and attach file") when you add the patch. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Tue Feb 5 21:57:25 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 13:57:25 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? 
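For reference, the three findertools calls the report is about are used like this (the path below is hypothetical; findertools is the Mac-only convenience wrapper around the Finder's AppleEvent suite):

    import findertools

    path = 'HD:Users:erik:some document'    # hypothetical example path
    findertools.reveal(path)    # reveal the item in a Finder window
    findertools.select(path)    # select the item
    findertools.update(path)    # have the Finder refresh its display of the item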
---------------------------------------------------------------------- >Comment By: Erik van Blokland (letterror) Date: 2002-02-05 13:57 Message: Logged In: YES user_id=448095 Jack, I guess I'm failing at failing :) -- reveal() et.al. complained about the path I gave them, tracebacks from within the ae bowels. I tried the lowerlevel functions and they worked, so I took that to be the solution. I had a HD crash yesterday, perhaps the directories were mangled to begin with and the events couldn't help but err? Solved I guess.. :) ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 13:38 Message: Logged In: YES user_id=45365 Erik, I played with your patch for a while (because I preferred to "re-educate" the Finder suite over the lowlevel code in your patch), and at some point I tried the original findertools that's in the repository, and lo and behold: it works wonderfully for me! Both on OS9 and OSX! What exactly is the problem your experiencing, and with which Python? (What I did change is that I added Unicode support to aepack, so the ugly Unknown('utxt', ...) are now unicode strings on OSX). ---------------------------------------------------------------------- Comment By: Erik van Blokland (letterror) Date: 2002-02-04 02:28 Message: Logged In: YES user_id=448095 Patch for findertools.reveal() et. al. Tested on OS9.1 and 10.1.2. The file is the entire findertools module as I'm not yet familiar with generating the required diffs. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:11 Message: Logged In: YES user_id=45365 Attach the patch to this bug report. Context-diff style patches are preferred, but as these are difficult to create an a pre-OSX mac I'll accept anything from you:-) Don't forget to check the checkmark ("Check to upload and attach file") when you add the patch. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Tue Feb 5 22:10:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 14:10:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-505562 ] Summary: "BuildApplet can destory the source file on Mac OS X" Message-ID: Bugs item #505562, was opened at 2002-01-18 14:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505562&group_id=5470 Category: Macintosh Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Russell Owen (reowen) Assigned to: Jack Jansen (jackjansen) >Summary: Summary: "BuildApplet can destory the source file on Mac OS X" Initial Comment: If the name of the file dropped on BuildApplet is the right length, BuildApplet will work and then will delete the source file!!! For instance dropping a file named "Cvt cmm -> Igor data 2-0 long name.py" onto BuildApple first produces a working droplet with name: "Cvt cmm -> Igor data 2#7F2E4" and then the source file simply vanishes. It's really gone, too (or perhaps moved and renamed) -- a disk search doesn't turn it up anywhere. Making the file name significantly shorter causes everything to work normally. Making the file name significantly longer causes BuildApplet to exit immediately with no error message and nothing done. 
There seems to be a magic range of file name lengths that cause the source file to softly and silently vanish away. Configuration: - Mac OS X 10.1.2 - MacPython 2.1.1 configured for Carbon - I have only one disk partition, formatted as Mac OS Extended, with tons of free space. Further details available on request, but I hope the problem is easily reproducible. I tried it many times on my Mac and it always did the same thing. I doubt the contents of the source file is relevant, but if it is, I do have a copy (with a shorter name!). -- Russell ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 14:10 Message: Logged In: YES user_id=45365 An Apple person on pythonmac-sig suggested this is indeed an Apple problem (and a serious one too, therefore). I've submitted it to the Apple bug reporter as ID 2854931. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-01-27 13:35 Message: Logged In: YES user_id=45365 This turns out to be a very serious problem in the way OSX converts long filenames to FSSpecs. I'm taking the discussion to pythonmac-sig (for starters). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505562&group_id=5470 From noreply@sourceforge.net Tue Feb 5 22:22:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 14:22:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-505150 ] mac module documentation inaccuracy. Message-ID: Bugs item #505150, was opened at 2002-01-17 15:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 >Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Martin Miller (mrmiller) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: mac module documentation inaccuracy. Initial Comment: The documentation at for the MacPython 2.2 mac module says, in part: > ==snip== >> One additional function is available: >> >> xstat(path) >> This function returns the same information as stat(), >> but with three additional values appended: the size of the >> resource fork of the file and its >> 4-character creator and type. > ==snip== The xstat() function is available only under PPC MacPython but not under Carbon MacPython. The documentation should be updated, assuming the ommision was intentional. Ideally, it would suggest alternatives for the Carbon version. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 14:22 Message: Logged In: YES user_id=45365 Here is a patch for libmac.tex. 
I'll leave it to you to replace the \code{} sections with one of the gazillion macros I can never remember, hope you don't mind:-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 From noreply@sourceforge.net Tue Feb 5 22:45:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 14:45:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-490558 ] Missing Snd functions Message-ID: Bugs item #490558, was opened at 2001-12-08 02:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=490558&group_id=5470 Category: Macintosh Group: Feature Request >Status: Closed Resolution: None Priority: 5 Submitted By: Jason Harper (jasonharper) Assigned to: Jack Jansen (jackjansen) Summary: Missing Snd functions Initial Comment: A few minor omissions in the Carbon.Snd module as of 2.2b2: SndPlay and SndStartFilePlay are exposed only as methods of sound channel objects. However, these are usefully called with a NULL sound channel, in which case a channel is internally allocated for the duration of the call (and the 'async' parameter is ignored). SndRecord is completely missing (as is SndRecordToFile, but that isn't supported in Carbon) - yes, there is SPBRecord, but that's a lower-level routine that doesn't present a user interface for recording. If I'm understanding the bgen process correctly, this is because of a parameter of type ModalFilterUPP, which is blacklisted. However, SndRecord would be useful even if this parameter wasn't supported (required to be None, perhaps): the Sound Manager documentation gives no hint as to why you'd even want to use a filterproc, and the sample code I can find always passes NULL. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 14:45 Message: Logged In: YES user_id=45365 SndRecord (and SndRecordToFile in classic MacPython) are now supported in the CVS tree. Exposing SndPlay and SndStartFilePlay as methods isn't all that useful, it's easy enough to call SndChannel(...).SndPlay(...), I think (and it's a lot of work:-) I'm closing the bug report, feel free to reopen it if you think the SndPlay issue merits it. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=490558&group_id=5470 From noreply@sourceforge.net Tue Feb 5 23:09:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 15:09:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-512660 ] findertools.reveal() broken Message-ID: Bugs item #512660, was opened at 2002-02-04 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 Category: Macintosh Group: Platform-specific >Status: Closed Resolution: None Priority: 5 Submitted By: Erik van Blokland (letterror) Assigned to: Jack Jansen (jackjansen) Summary: findertools.reveal() broken Initial Comment: The findertools functions reveal(), select() and update() seem to be broken. Perhaps the changes in the Finder appleevent modules broke it. I have alternative functions which construct the events directly -- seem to work properly. What now? 
---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 15:09 Message: Logged In: YES user_id=45365 Crashing you harddisk as a way to solve software problems... What a novel approach:-) I'll close the bug, reopen it if it resurfaces. ---------------------------------------------------------------------- Comment By: Erik van Blokland (letterror) Date: 2002-02-05 13:57 Message: Logged In: YES user_id=448095 Jack, I guess I'm failing at failing :) -- reveal() et.al. complained about the path I gave them, tracebacks from within the ae bowels. I tried the lowerlevel functions and they worked, so I took that to be the solution. I had a HD crash yesterday, perhaps the directories were mangled to begin with and the events couldn't help but err? Solved I guess.. :) ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 13:38 Message: Logged In: YES user_id=45365 Erik, I played with your patch for a while (because I preferred to "re-educate" the Finder suite over the lowlevel code in your patch), and at some point I tried the original findertools that's in the repository, and lo and behold: it works wonderfully for me! Both on OS9 and OSX! What exactly is the problem your experiencing, and with which Python? (What I did change is that I added Unicode support to aepack, so the ugly Unknown('utxt', ...) are now unicode strings on OSX). ---------------------------------------------------------------------- Comment By: Erik van Blokland (letterror) Date: 2002-02-04 02:28 Message: Logged In: YES user_id=448095 Patch for findertools.reveal() et. al. Tested on OS9.1 and 10.1.2. The file is the entire findertools module as I'm not yet familiar with generating the required diffs. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-04 02:11 Message: Logged In: YES user_id=45365 Attach the patch to this bug report. Context-diff style patches are preferred, but as these are difficult to create an a pre-OSX mac I'll accept anything from you:-) Don't forget to check the checkmark ("Check to upload and attach file") when you add the patch. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512660&group_id=5470 From noreply@sourceforge.net Wed Feb 6 00:34:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 16:34:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-511073 ] urllib problems Message-ID: Bugs item #511073, was opened at 2002-01-30 23:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511073&group_id=5470 Category: Macintosh Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Yair Benita (ybenita) Assigned to: Jack Jansen (jackjansen) Summary: urllib problems Initial Comment: when using urllib.urlopen("url") and then reading the file with handle.read() i get only parts of pages. it works for short webpages but if i use it to download large pages it always come too short. To me it looks that it tries to read the file before it is downloaded. Jack Jansen's said: MacPython may do short reads on sockets. 
I've always maintained that this was correct (which reasoning was quietly accepted by everyone here), but last year I finally admitted that it may actually be incorrect (which was again quietly accepted:-) example: x=urllib.urlopen("http://www.ebi.ac.uk/cgi-bin/emblf etch?db=embl&format=fasta&style=raw&id=AB002 378") print x.read() compare the file downloaded by any html browser and the file from macpython. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 16:34 Message: Logged In: YES user_id=45365 I probably found the cause for this, now the only task remaining is finding out who to blame:-) httplib explicitly sets non-buffering I/O on the file corresponding to the socket, by calling self.fp = socket.makefile("rb", 0). MSL, the CodeWarrior I/O library, has an optimization (or bug:-) that if you fread() from a binary file with buffering turned off it will call the underlying read() straight away. Python's fileobject.c file_read() reacts to a short fread() return value by returning. One of these three is wrong, apparently. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511073&group_id=5470 From noreply@sourceforge.net Wed Feb 6 02:07:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 05 Feb 2002 18:07:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Nobody/Anonymous (nobody) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Wed Feb 6 08:52:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 00:52:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-513666 ] unicode() docs don't mention LookupError Message-ID: Bugs item #513666, was opened at 2002-02-06 00:52 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513666&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Ben Gertzfield (che_fox) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: unicode() docs don't mention LookupError Initial Comment: The unicode() docs say: "If errors is 'strict' (the default), a ValueError is raised on errors..." This is not true; ValueError is only raised on conversion errors. There are other exceptions that can be raised: Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> unicode("abc", "nonexistent codec") Traceback (most recent call last): File "", line 1, in ? LookupError: unknown encoding Looking at src/Objects/unicodeobject.c, there are lots of other exceptions that can be raised. The documentation should probably be clarified. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513666&group_id=5470 From noreply@sourceforge.net Wed Feb 6 09:25:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 01:25:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-513666 ] unicode() docs don't mention LookupError Message-ID: Bugs item #513666, was opened at 2002-02-06 00:52 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513666&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Ben Gertzfield (che_fox) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: unicode() docs don't mention LookupError Initial Comment: The unicode() docs say: "If errors is 'strict' (the default), a ValueError is raised on errors..." This is not true; ValueError is only raised on conversion errors. There are other exceptions that can be raised: Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 Type "copyright", "credits" or "license" for more information. >>> unicode("abc", "nonexistent codec") Traceback (most recent call last): File "", line 1, in ? LookupError: unknown encoding Looking at src/Objects/unicodeobject.c, there are lots of other exceptions that can be raised. The documentation should probably be clarified. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 01:25 Message: Logged In: YES user_id=38388 You are right in that there are many more exceptions which are possible (perhaps we ought to mention LookupError in the docs), ValueError will certainly be the most common, though. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513666&group_id=5470 From noreply@sourceforge.net Wed Feb 6 09:35:28 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 01:35:28 -0800 Subject: [Python-bugs-list] [ python-Bugs-513683 ] email.Parser uses LF as line sep. Message-ID: Bugs item #513683, was opened at 2002-02-06 01:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Brian Takashi Hooper (bthooper) Assigned to: Nobody/Anonymous (nobody) Summary: email.Parser uses LF as line sep. Initial Comment: I'm not sure what the best solution is for this, but some email clients sent multipart MIME messages using CRLF as the line separator instead of just LF, which seems to be assumed in email.Parser.Parser._parsebody. 
Maybe I'm reading the RFC wrong, but it seems like it says that lines of a mail message should be separated using CRLF (although I'm sure many clients don't do that either)... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 From noreply@sourceforge.net Wed Feb 6 12:32:32 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 04:32:32 -0800 Subject: [Python-bugs-list] [ python-Bugs-513683 ] email.Parser uses LF as line sep. Message-ID: Bugs item #513683, was opened at 2002-02-06 01:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Brian Takashi Hooper (bthooper) Assigned to: Nobody/Anonymous (nobody) Summary: email.Parser uses LF as line sep. Initial Comment: I'm not sure what the best solution is for this, but some email clients sent multipart MIME messages using CRLF as the line separator instead of just LF, which seems to be assumed in email.Parser.Parser._parsebody. Maybe I'm reading the RFC wrong, but it seems like it says that lines of a mail message should be separated using CRLF (although I'm sure many clients don't do that either)... ---------------------------------------------------------------------- >Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-06 04:32 Message: Logged In: YES user_id=12800 My philosophy so far (and I *think* this is documented in the latest rev of the .tex file), is that the email package should deal with native line endings, and that it is the job of a delivering mta to convert from rfc line endings (crlf) to native. It is certainly the case that smtplib converts from native to rfc line endings when sending the message out. Most mtas (e.g. postfix) when piping the message to a process or onto a file will convert to native line endings, at least in my experience. This may not be a very useful assumption though, and it is probably more robust to be able to deal with either line endings. There have been some movements in this direction in the cvs snapshot of the mimelib/email package where support for multibyte charsets (e.g. Japanese) have been added. You might want to check out that project's cvs trunk and see if it helps your situation, or submit a bug report there and we'll prototype the fix in that project first. Eventually all that code will be ported back to the Python 2.3 tree. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 From noreply@sourceforge.net Wed Feb 6 12:32:52 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 04:32:52 -0800 Subject: [Python-bugs-list] [ python-Bugs-513683 ] email.Parser uses LF as line sep. Message-ID: Bugs item #513683, was opened at 2002-02-06 01:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Brian Takashi Hooper (bthooper) >Assigned to: Barry Warsaw (bwarsaw) Summary: email.Parser uses LF as line sep. 
Initial Comment: I'm not sure what the best solution is for this, but some email clients sent multipart MIME messages using CRLF as the line separator instead of just LF, which seems to be assumed in email.Parser.Parser._parsebody. Maybe I'm reading the RFC wrong, but it seems like it says that lines of a mail message should be separated using CRLF (although I'm sure many clients don't do that either)... ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-06 04:32 Message: Logged In: YES user_id=12800 My philosophy so far (and I *think* this is documented in the latest rev of the .tex file), is that the email package should deal with native line endings, and that it is the job of a delivering mta to convert from rfc line endings (crlf) to native. It is certainly the case that smtplib converts from native to rfc line endings when sending the message out. Most mtas (e.g. postfix) when piping the message to a process or onto a file will convert to native line endings, at least in my experience. This may not be a very useful assumption though, and it is probably more robust to be able to deal with either line endings. There have been some movements in this direction in the cvs snapshot of the mimelib/email package where support for multibyte charsets (e.g. Japanese) have been added. You might want to check out that project's cvs trunk and see if it helps your situation, or submit a bug report there and we'll prototype the fix in that project first. Eventually all that code will be ported back to the Python 2.3 tree. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 From noreply@sourceforge.net Wed Feb 6 12:54:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 04:54:54 -0800 Subject: [Python-bugs-list] [ python-Bugs-513725 ] memory leak in VC6++ Message-ID: Bugs item #513725, was opened at 2002-02-06 04:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513725&group_id=5470 Category: Build Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jaeyoun Chung (dalgong) Assigned to: Nobody/Anonymous (nobody) Summary: memory leak in VC6++ Initial Comment: I'm building an windows application with Python 2.2 as an extension language. What I did was: 1. build pythoncore.dsp in Visual C++ 6.0 2. in my application initialize the python: when starting the program: Py_SetProgramName("Main"); Py_Initialize(); PySys_SetArgv(__argc, __argv); and at just before the program exit: Py_Finalize(); _Py_ReleaseInternedStrings(); Py_Exit(0); 3. then link my app with python22_d.lib 4. run my app in debug mode 5. simply select exit. Dumping objects -> {2426} normal block at 0x009365C8, 39 bytes long. Data: <(] (\ XK > 28 5D 91 00 28 5C 16 1E 01 00 00 00 58 4B 17 1E {791} normal block at 0x00915D28, 40 bytes long. Data: < ^ e XK > D8 5E 91 00 C8 65 93 00 01 00 00 00 58 4B 17 1E {787} normal block at 0x00915EC8, 52 bytes long. Data: < 0 ] > D8 30 15 1E 90 5D 91 00 CD CD CD CD CD CD CD CD {786} normal block at 0x00915D90, 44 bytes long. Data: < ^ \ > C8 5E 91 00 C0 5C 91 00 CD CD CD CD CD CD CD CD {776} normal block at 0x00915CC0, 44 bytes long. Data: < ] Z > 90 5D 91 00 E8 5A 91 00 CD CD CD CD CD CD CD CD {774} normal block at 0x00915BB8, 192 bytes long. 
Data: < < (> 00 00 00 00 00 00 00 00 00 00 00 00 F1 3C D8 28 {772} normal block at 0x00915B60, 24 bytes long. Data: < Z \ A > F8 5A 91 00 D0 5C 91 00 02 00 00 00 90 41 16 1E {769} normal block at 0x00915AE8, 48 bytes long. Data: < \ pZ > C0 5C 91 00 70 5A 91 00 CD CD CD CD CD CD CD CD .....[this message goes on and on]... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513725&group_id=5470 From noreply@sourceforge.net Wed Feb 6 14:28:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 06:28:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-513725 ] memory leak in VC6++ Message-ID: Bugs item #513725, was opened at 2002-02-06 04:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513725&group_id=5470 Category: Build Group: Python 2.2 >Status: Closed >Resolution: Rejected Priority: 5 Submitted By: Jaeyoun Chung (dalgong) >Assigned to: Guido van Rossum (gvanrossum) Summary: memory leak in VC6++ Initial Comment: I'm building an windows application with Python 2.2 as an extension language. What I did was: 1. build pythoncore.dsp in Visual C++ 6.0 2. in my application initialize the python: when starting the program: Py_SetProgramName("Main"); Py_Initialize(); PySys_SetArgv(__argc, __argv); and at just before the program exit: Py_Finalize(); _Py_ReleaseInternedStrings(); Py_Exit(0); 3. then link my app with python22_d.lib 4. run my app in debug mode 5. simply select exit. Dumping objects -> {2426} normal block at 0x009365C8, 39 bytes long. Data: <(] (\ XK > 28 5D 91 00 28 5C 16 1E 01 00 00 00 58 4B 17 1E {791} normal block at 0x00915D28, 40 bytes long. Data: < ^ e XK > D8 5E 91 00 C8 65 93 00 01 00 00 00 58 4B 17 1E {787} normal block at 0x00915EC8, 52 bytes long. Data: < 0 ] > D8 30 15 1E 90 5D 91 00 CD CD CD CD CD CD CD CD {786} normal block at 0x00915D90, 44 bytes long. Data: < ^ \ > C8 5E 91 00 C0 5C 91 00 CD CD CD CD CD CD CD CD {776} normal block at 0x00915CC0, 44 bytes long. Data: < ] Z > 90 5D 91 00 E8 5A 91 00 CD CD CD CD CD CD CD CD {774} normal block at 0x00915BB8, 192 bytes long. Data: < < (> 00 00 00 00 00 00 00 00 00 00 00 00 F1 3C D8 28 {772} normal block at 0x00915B60, 24 bytes long. Data: < Z \ A > F8 5A 91 00 D0 5C 91 00 02 00 00 00 90 41 16 1E {769} normal block at 0x00915AE8, 48 bytes long. Data: < \ pZ > C0 5C 91 00 70 5A 91 00 CD CD CD CD CD CD CD CD .....[this message goes on and on]... ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-06 06:28 Message: Logged In: YES user_id=6380 "Leaks" like these are unavoidable with any large program. Even the standard C library "leaks" when you test it like this. You needn't worry about this; what you see are simply the various objects and buffers that are allocated once during initialization that are not released by finalization. Leaks are only a problem when a certain operation in a program leaks some memory every time it is executed, because these mean that eventually a program 's memory use may grow without bounds as a result of the leaks. Such leaks may exist but you haven't shown evidence. Even calling Py_Initialize() and Py_Finalize() in a loop shouldn't cause the memory use to grow unboundedly. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513725&group_id=5470 From noreply@sourceforge.net Wed Feb 6 14:41:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 06:41:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-513683 ] email.Parser uses LF as line sep. Message-ID: Bugs item #513683, was opened at 2002-02-06 01:35 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Brian Takashi Hooper (bthooper) Assigned to: Barry Warsaw (bwarsaw) Summary: email.Parser uses LF as line sep. Initial Comment: I'm not sure what the best solution is for this, but some email clients sent multipart MIME messages using CRLF as the line separator instead of just LF, which seems to be assumed in email.Parser.Parser._parsebody. Maybe I'm reading the RFC wrong, but it seems like it says that lines of a mail message should be separated using CRLF (although I'm sure many clients don't do that either)... ---------------------------------------------------------------------- >Comment By: Brian Takashi Hooper (bthooper) Date: 2002-02-06 06:41 Message: Logged In: YES user_id=450505 OK, that seems like a satisfactory answer. I do actually happen to be using Postfix on FreeBSD, albeit a little old (maybe a year or so), and am piping mails to a Python script, which is where I observed this problem. Maybe something with my local setup? (I didn't set up Postfix, but I don't see why it wouldn't be doing the default thing) Maybe it would be safer not to make assumptions about the input message, and process line endings to native before parsing? This would be my vote anyways (I tend to avoid thoroughly reading documentation unless something doesn't work as I intuit it should :-) ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-06 04:32 Message: Logged In: YES user_id=12800 My philosophy so far (and I *think* this is documented in the latest rev of the .tex file), is that the email package should deal with native line endings, and that it is the job of a delivering mta to convert from rfc line endings (crlf) to native. It is certainly the case that smtplib converts from native to rfc line endings when sending the message out. Most mtas (e.g. postfix) when piping the message to a process or onto a file will convert to native line endings, at least in my experience. This may not be a very useful assumption though, and it is probably more robust to be able to deal with either line endings. There have been some movements in this direction in the cvs snapshot of the mimelib/email package where support for multibyte charsets (e.g. Japanese) have been added. You might want to check out that project's cvs trunk and see if it helps your situation, or submit a bug report there and we'll prototype the fix in that project first. Eventually all that code will be ported back to the Python 2.3 tree. 
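Until the parser copes with both conventions itself, the pre-normalization suggested above can be done just before the text reaches the parser. A minimal sketch, assuming the raw message is already in hand as a string and that email.Parser.Parser.parsestr() is the entry point being used:

    from email.Parser import Parser

    def parse_message(raw):
        # Fold CRLF (and any stray bare CR) down to the native LF line
        # endings that the parser currently assumes.
        text = raw.replace('\r\n', '\n').replace('\r', '\n')
        return Parser().parsestr(text)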
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513683&group_id=5470 From noreply@sourceforge.net Wed Feb 6 17:55:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 09:55:07 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-513840 ] entity unescape for sgml/htmllib Message-ID: Feature Requests item #513840, was opened at 2002-02-06 09:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=513840&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Greg Chapman (glchapman) Assigned to: Nobody/Anonymous (nobody) Summary: entity unescape for sgml/htmllib Initial Comment: The parsers defined in htmllib and sgmllib do not provide any facilities for unescaping a tag attribute which has an embedded html entityref (i.e., they do not provide a way to convert "a&amp;b" to "a&b"). The parser in HTMLParser unescapes all tag attributes automatically. I'm not sure that's the right approach for sgmllib and htmllib (since it might break existing code), but it seems to me that one of the modules ought to provide a function or method which can do the unescaping if needed. (I'm not familiar with either the SGML or the HTML specification, but I assume one of them mandates the escaping of '&' (e.g.) in tag attributes. If so, then it seems appropriate for one of the modules to provide a function which undoes the mandated transformation.) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=513840&group_id=5470 From noreply@sourceforge.net Wed Feb 6 18:11:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 10:11:05 -0800 Subject: [Python-bugs-list] [ python-Bugs-433882 ] UTF-8: unpaired surrogates mishandled Message-ID: Bugs item #433882, was opened at 2001-06-17 04:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None >Priority: 3 Submitted By: Nobody/Anonymous (nobody) Assigned to: M.-A. Lemburg (lemburg) Summary: UTF-8: unpaired surrogates mishandled Initial Comment: Two bugs: 1. UTF-8 encoding of unpaired high surrogate produces an invalid UTF-8 byte sequence. 2. UTF-8 decoding of any unpaired surrogate produces an exception ("illegal encoding") instead of the corresponding 16-bit scalar value. See attached file utf8bugs.py for example plus detailed remarks. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:11 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes bug 1 in the report. I am unsure about "bug 2": I think that raising an exception is better than silently accepting bogus input data. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-08-16 03:50 Message: Logged In: YES user_id=38388 I'll look into this after I'm back from vacation on the 10.09. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-17 19:03 Message: Logged In: YES user_id=21627 I think the codec should reject unpaired surrogates both when encoding and when decoding.
I don't have a copy of ISO 10646, but Unicode 3.1 points out # ISO/IEC 10646 does not allow mapping of unpaired surrogates, nor U+FFFE and U+FFFF (but it does allow other noncharacters). So apparently, encoding unpaired surrogates as UTF-8 is not allowed according to ISO 10646. I think Python should follow this rule, instead of the Unicode one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 From noreply@sourceforge.net Wed Feb 6 18:12:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 10:12:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. 
Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. 
Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. 
So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". 
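To make the arithmetic in the diagnosis above concrete: on an --enable-unicode=ucs4 build a single Py_UNICODE can hold a character beyond U+FFFF, and such a character encodes to four UTF-8 bytes, one more than the three bytes per input character the old encoder reserved. A small illustration (Python 2 syntax, assuming a wide build with the fix applied; on an unpatched 2.2 this is exactly the kind of input that can overrun the undersized buffer):

    ch = unichr(0x10000)             # a single character on a UCS-4 build
    print len(ch)                    # 1
    print len(ch.encode('utf-8'))    # 4 (F0 90 80 80), i.e. more than 3 * len(ch)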
The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Wed Feb 6 18:33:31 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 10:33:31 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Wed Feb 6 19:59:09 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 11:59:09 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Andrew Koenig (arkoenig) >Assigned to: Tim Peters (tim_one) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. 
For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Thu Feb 7 04:52:28 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 20:52:28 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core >Group: Not a Bug >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Tim Peters (tim_one) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Thu Feb 7 04:59:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 06 Feb 2002 20:59:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: Not a Bug Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Tim Peters (tim_one) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. 
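(A concrete triple of the kind described above, as a minimal Python 2.2-style sketch assuming the coerce-to-float comparison rule the report describes; the particular values are an illustration chosen by the editor around 2**53, the point past which a C double can no longer represent every integer, and are not taken from the report.)

a = 2L**53 + 1    # a long that is not exactly representable as a C double
b = 2.0**53       # the float that a rounds to under the coercion
c = 2L**53        # a different long that maps to the same float
print a == b      # 1: a is coerced to 2.0**53, which equals b
print b == c      # 1: c converts to float exactly
print a == c      # 0: long-vs-long comparison is exact, and a != c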
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Thu Feb 7 09:58:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 01:58:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. 
So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? 
I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. 
Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? 
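(To make the 3*size versus 4*size arithmetic above concrete: on an --enable-unicode=ucs4 build, a single code point outside the BMP occupies one Py_UNICODE slot yet needs four UTF-8 bytes, so a buffer sized at 3*size can be overrun. A minimal sketch, assuming such a wide build so the \U escape yields a single code point:)

s = u'\U00010000'             # one code point beyond the BMP
print len(s)                  # 1 on a UCS-4 build (2 surrogates on a UCS-2 build)
print len(s.encode('utf-8'))  # 4 -- i.e. more than 3 * len(s)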
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Thu Feb 7 11:44:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 03:44:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-433882 ] UTF-8: unpaired surrogates mishandled Message-ID: Bugs item #433882, was opened at 2001-06-17 04:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 Category: Unicode Group: None >Status: Closed >Resolution: Fixed Priority: 3 Submitted By: Nobody/Anonymous (nobody) Assigned to: M.-A. Lemburg (lemburg) Summary: UTF-8: unpaired surrogates mishandled Initial Comment: Two bugs: 1. UTF-8 encoding of unpaired high surrogate produces an invalid UTF-8 byte sequence. 2. UTF-8 decoding of any unpaired surrogate produces an exception ("illegal encoding") instead of the corresponding 16-bit scalar value. See attached file utf8bugs.py for example plus detailed remarks. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-07 03:44 Message: Logged In: YES user_id=38388 I fixed bug 2 as well. UTF-8 roundtrip safety is needed for Python (even for unpaired surrogates) since we use UTF-8 as marshalling format for code objects, i.e. in PYC files. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:11 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes bug 1 in the report. I am unsure about "bug 2": I think that raising an exception is better than silently accepting bogus input data. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-08-16 03:50 Message: Logged In: YES user_id=38388 I'll look into this after I'm back from vacation on the 10.09. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-06-17 19:03 Message: Logged In: YES user_id=21627 I think the codec should reject unpaired surrogates both when encoding and when decoding. I don't have a copy of ISO 10646, but Unicode 3.1 points out # ISO/IEC 10646 does not allow mapping of unpaired surrogates, nor U+FFFE and U+FFFF (but it does allow other noncharacters). So apparently, encoding unpaired surrogates as UTF-8 is not allowed according to ISO 10646. I think Python should follow this rule, instead of the Unicode one. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=433882&group_id=5470 From noreply@sourceforge.net Thu Feb 7 11:48:37 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 03:48:37 -0800 Subject: [Python-bugs-list] [ python-Bugs-486434 ] Compiler complaints in posixmodule.c Message-ID: Bugs item #486434, was opened at 2001-11-28 03:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=486434&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None >Priority: 1 Submitted By: M.-A. Lemburg (lemburg) Assigned to: M.-A. 
Lemburg (lemburg) Summary: Compiler complaints in posixmodule.c Initial Comment: The linker on Linux reports some warnings for posixmodule.c: libpython2.2.a(posixmodule.o): In function `posix_tmpnam': /home/lemburg/projects/Python/Dev-Python/./Modules/posixmodule.c:4486: the use of `tmpnam_r' is dangerous, better use `mkstemp' libpython2.2.a(posixmodule.o): In function `posix_tempnam': /home/lemburg/projects/Python/Dev-Python/./Modules/posixmodule.c:4436: the use of `tempnam' is dangerous, better use `mkstemp' Perhaps we ought to follow the advice ?! ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-11-30 04:33 Message: Logged In: YES user_id=21627 Before they can be dropped, they must be deprecated. In this case, I see no real reason to deprecate them: They produce a warning indicating a potential problem. For some applications, there may not be a problem at all, e.g. if they write the temporary files to a directory where nobody else has write access, or if they open the temporary file with O_EXCL. There *are* ways to use tempnam safely. I don't think we should change Python just because of a stupid linker warning (which isn't stupid in general, since it made you aware of the problem - but it is unfortunate that it cannot be turned off). ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-11-30 01:40 Message: Logged In: YES user_id=38388 Ok. How about this: we produce warnings for the two APIs in question in 2.2 and drop their support in 2.3 ?! I hate seeing the linker warn me about Python using dangerous system APIs -- this simply doesn't look right. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-11-29 10:30 Message: Logged In: YES user_id=21627 We already expose os.tmpfile, I don't think we need mkstemp. I don't think we should remove tmpnam; applications that use it will get the warning (for the first time in 2.2); we should leave it to the applications to migrate away from it. I found the recommendation not to use mkstemp on a Debian 'testing' system; dunno whether it was added by the Debian maintainers, or whether it is part of more recent manpage package. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-11-28 06:17 Message: Logged In: YES user_id=38388 Hmm, the man page on SuSE Linux does not say anything about using tmpfile() instead of mkstemp(). BTW, the warnings are already in place. I wonder whether it wouldn't be better to remove the Python APIs for these functions altogether and instead provide interfaces for the mkstemp() and/or tempfile(). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-11-28 05:56 Message: Logged In: YES user_id=21627 No. The usage of tmpnam is not dangerous; we are just exposing it to the Python application. It may be reasonable to produce a warning if tmpnam is called. We cannot replace tempnam with mkstemp for the same reason Posix couldn't: one produces a string, the other one a file handle. What we could do is to expose mkstemp(3) where available. I don't see the value of that, though: it could be done only on systems where mkstemp is available, and we already expose tmpfile(3). In fact, the Linux man page for mkstemp(3) says # Don't use this function, use tmpfile(3) instead. 
It's # better defined and more portable. If you still think there is a need for action, please propose a patch. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=486434&group_id=5470 From noreply@sourceforge.net Thu Feb 7 13:28:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 05:28:59 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: Not a Bug Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Tim Peters (tim_one) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a<b and b<c but not a<c. Suppose T is a threshold chosen so that if x is a float value >=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays).
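(A sketch of the direction-choosing comparison the submitter suggests above, in Python 2.2-era style. The threshold T = 2**53 is an assumption made here for illustration: every double of magnitude >= 2**53 is an integer, and every long of magnitude <= 2**53 converts to a double exactly; the comment itself leaves the choice of T open.)

def mixed_cmp(f, n):
    # Compare a float f with a long n without the lossy long-to-float
    # coercion.  Illustrative sketch only.
    T = 2L ** 53
    if f >= T or f <= -T:
        # f has no fractional part here, so long(f) is exact
        return cmp(long(f), n)
    if -T <= n <= T:
        # n is small enough to convert to float without loss
        return cmp(f, float(n))
    # remaining case: abs(f) < T < abs(n), so n's sign settles it
    if n > 0:
        return -1
    return 1

For example, mixed_cmp(2.0**53, 2L**53 + 1) returns -1, whereas the coercing comparison discussed above treats the two operands as equal.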
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Thu Feb 7 15:33:20 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 07:33:20 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: Not a Bug Status: Closed Resolution: Wont Fix Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Tim Peters (tim_one) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a<b and b<c but not a<c. Suppose T is a threshold chosen so that if x is a float value >=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays).
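(The sort blow-up reported above is what the coerce-to-float rule predicts: ordering the mixed list has to compare 0.0 with 1L << 10000, and a value of that many bits is far outside the range of a C double, so the coercion cannot succeed -- presumably surfacing as an OverflowError, which is an assumption here, not stated in the report. A minimal sketch of just that comparison:)

huge = 1L << 10000      # the long from the example above
try:
    huge < 0.0          # the same coercion z.sort() ends up performing
except OverflowError, e:
    print 'mixed comparison failed:', e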
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Thu Feb 7 16:21:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 08:21:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-514345 ] pty.fork problem Message-ID: Bugs item #514345, was opened at 2002-02-07 08:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514345&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: Nobody/Anonymous (nobody) Summary: pty.fork problem Initial Comment: Subject: Python bugreport, pty.fork problem Date: Thu, 07 Feb 2002 07:30:08 -0800 From: Ronald Oussoren To: mal@lemburg.com Sorry about the e-mail, but the bugtracker on SF doesn't accept my bugreport (I don't have a SF account). The following script never returns: ----------------- start of script ------------- import pty import os import sys def test(): pid, fd = pty.fork() if pid == 0: print "1" print "2" print "3" else: fp = os.fdopen(fd, 'r') ln = fp.readline() while ln: print '-->', ln ln = fp.readline() print '-->', ln test() ------------------ end of script ----------------- It prints '-->1' to '-->3' and then blocks. I've tested this with python 2.1 on Solaris 8. On Solaris pty.open seems to use 'openpty' instead of 'os.openpty'. A 2-line change fixed the problem for me, but not for this demo-script: Close 'slave_fd' when pid != CHILD. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514345&group_id=5470 From noreply@sourceforge.net Thu Feb 7 19:00:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 11:00:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-514433 ] bsddb: enable dbopen (file==NULL) Message-ID: Bugs item #514433, was opened at 2002-02-07 11:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514433&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Sam Rushing (rushing) Assigned to: Nobody/Anonymous (nobody) Summary: bsddb: enable dbopen (file==NULL) Initial Comment: dbopen(): if the file argument is NULL, the library will use a temporary file. this is useful if you want that, or if you want to specify a large cache so that it never actually touches the disk. [i.e., in-memory hash/bt] I've done this by replacing the "s" with a "z" in the arg specs for the three open functions. Seems to work. 
-Sam ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514433&group_id=5470 From noreply@sourceforge.net Thu Feb 7 19:14:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 11:14:59 -0800 Subject: [Python-bugs-list] [ python-Bugs-514443 ] Python cores with "viewcvs" - Cygwin Message-ID: Bugs item #514443, was opened at 2002-02-07 11:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 Category: Threads Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jari Aalto (jaalto) Assigned to: Nobody/Anonymous (nobody) Summary: Python cores with "viewcvs" - Cygwin Initial Comment: ViewCVS dies on startup with Python 2.2 under W2k Pro sp2 / Cygwin http://www.sourceforge.net/projects/viewcvs This may be problem in the Cygwin python itself, So this bug has been reported to python dev team as well. //root@W2KPICASSO /usr/src/cvs-source/python-viewcvs $ ./standalone.py -g -r /cygdrive/h/data/version- control/cvsroot Traceback (most recent call last): File "./standalone.py", line 540, in ? File "./standalone.py", line 495, in cli File "./standalone.py", line 467, in gui File "./standalone.py", line 406, in __init__ File "/usr/lib/python2.2/threading.py", line 5, in ? import thread ImportError: No module named thread cygcheck -s report: 751k 2002/01/19 h:\unix-root\u\bin\cygwin1.dll Cygwin DLL version info: DLL version: 1.3.7 DLL epoch: 19 DLL bad signal mask: 19005 DLL old termios: 5 DLL malloc env: 28 API major: 0 API minor: 51 Shared data: 3 DLL identifier: cygwin1 Mount registry: 2 Cygnus registry name: Cygnus Solutions Cygwin registry name: Cygwin Program options name: Program Options Cygwin mount registry name: mounts v2 Cygdrive flags: cygdrive flags Cygdrive prefix: cygdrive prefix Cygdrive default prefix: Build date: Sat Jan 19 13:20:32 EST 2002 Shared id: cygwin1S3 653k 1998/10/30 h:\bin\sql\mysql- w2k\bin\cygwinb19.dll Cygwin Package Information Package Version ash 20011018-1 autoconf 2.52a-1 autoconf-devel 2.52-4 autoconf-stable 2.13-4 automake 1.5b-1 automake-devel 1.5b-1 automake-stable 1.4p5-5 bash 2.05a-2 bc 1.06-1 binutils 20011002-1 bison 1.30-1 byacc 1.9-1 bzip2 1.0.1-6 clear 1.0 compface 1.4-5 cpio 2.4.2 cron 3.0.1-5 crypt 1.0-1 ctags 5.2-1 curl 7.9.2-1 cvs 1.11.0-1 cygrunsrv 0.94-2 cygutils 0.9.7-1 cygwin 1.3.7-1 dejagnu 20010117-1 diff 0.0 ed 0.2-1 expect 20010117-1 figlet 2.2-1 file 3.37-1 fileutils 4.1-1 findutils 4.1 flex 2.5.4-1 fortune 1.8-1 gawk 3.0.4-1 gcc 2.95.3-5 gdb 20010428-3 gdbm 1.8.0-3 gettext 0.10.40-1 ghostscript 6.51-1 gperf 0.0 grep 2.4.2-1 groff 1.17.2-1 gzip 1.3.2-1 inetutils 1.3.2-17 irc 20010101-1 jbigkit 1.2-4 jpeg 6b-4 less 358-3 libintl 0.10.38-3 libintl1 0.10.40-1 libncurses5 5.2-1 libncurses6 5.2-8 libpng 1.0.12-1 libpng2 1.0.12-1 libreadline4 4.1-2 libreadline5 4.2a-1 libtool 20010531a-1 libtool-devel 20010531-6 libtool-stable 1.4.2-2 libxml2 2.4.13-1 libxslt 1.0.9-1 login 1.4-3 lynx 2.8.4-1 m4 0.0 make 3.79.1-5 man 1.5g-2 mingw 20010917-1 mingw-runtime 1.2-1 mktemp 1.4-1 mt 2.0.1-1 mutt 1.2.5i-6 nano 1.0.7-1 ncftp 3.0.2-2 ncurses 5.2-8 newlib-man 20001118-1 opengl 1.1.0-5 openssh 3.0.2p1-4 openssl 0.9.6c-3 openssl-devel 0.9.6c-2 patch 2.5-2 pcre 3.7-1 perl 5.6.1-2 popt 1.6.2-1 postgresql 7.1.3-2 python 2.2-1 readline 4.2a-1 regex 4.4-2 robots 2.0-1 rsync 2.5.1-2 rxvt 2.7.2-6 rxvt 2.7.2-6 sed 3.02-1 sh-utils 2.0-2 
sharutils 4.2.1-2 shellutils 0.0 shutdown 1.2-2 squid 2.4.PRE-STABLE ssmtp 2.38.7-3 tar 1.13.19-1 tcltk 20001125-1 tcsh 6.11.00-3 termcap 20010825-1 terminfo 5.2-1 tetex-beta 20001218-4 texinfo 4.0-5 textutils 2.0.16-1 tiff 3.5.6beta-2 time 1.7-1 units 1.77-1 unzip 5.41-1 vim 6.0.93-1 w32api 20010520-1 wget 1.7.1-1 which 1.5-1 whois 4.5.17-1 xpm 4.0.0-2 xpm-nox 4.1.0-1 zip 2.3-1 zlib 1.1.3-6 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 From noreply@sourceforge.net Thu Feb 7 21:39:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 13:39:16 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-514532 ] Add "eu#" parser marker Message-ID: Feature Requests item #514532, was opened at 2002-02-07 13:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=514532&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: M.-A. Lemburg (lemburg) Summary: Add "eu#" parser marker Initial Comment: As requested by Jack Janssen: """ Recently, "M.-A. Lemburg" said: > How about this: we add a wchar_t codec to Python and the "eu#" parser > marker. Then you could write: > > wchar_t value = NULL; > int len = 0; > if (PyArg_ParseTuple(tuple, "eu#", "wchar_t", &value, &len) < 0) > return NULL; I like it! """ The parser marker should return Py_UNICODE* instead of char* and work much like "et#" does now for strings. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=514532&group_id=5470 From noreply@sourceforge.net Thu Feb 7 22:59:37 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 14:59:37 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. 
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). 
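(The counting pass described above is simple arithmetic; here is a Python rendering of the same per-code-point byte counts. The real change is of course C inside the UTF-8 encoder, and the unpaired-surrogate handling that the patch also addresses is left out of this sketch; it assumes a UCS-4 build so that each code point is a single item of the string.)

def utf8_length(u):
    # Exact number of UTF-8 bytes needed, computed up front instead of
    # over-allocating and trimming afterwards.
    n = 0
    for ch in u:
        c = ord(ch)
        if c < 0x80:
            n = n + 1
        elif c < 0x800:
            n = n + 2
        elif c < 0x10000:
            n = n + 3
        else:
            n = n + 4
    return n

print utf8_length(u'abc\xe9\u20ac\U00010000')   # 3*1 + 2 + 3 + 4 = 12 bytes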
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. 
What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? 
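(On the missing core file asked about above: the usual cause is the shell's core-size limit being 0, so running "ulimit -c unlimited" in the shell that invokes make is normally enough. A Python-level sketch of the same adjustment, using the standard resource module, for a crashing command you launch yourself from Python:)

import resource
# Raise the soft core-file size limit to the hard limit for this
# process and anything it spawns afterwards.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
print resource.getrlimit(resource.RLIMIT_CORE)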
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Fri Feb 8 02:04:44 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 18:04:44 -0800 Subject: [Python-bugs-list] [ python-Bugs-514627 ] pydoc fails to generate html doc Message-ID: Bugs item #514627, was opened at 2002-02-07 18:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Raj Kunjithapadam (mmaster25) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc fails to generate html doc Initial Comment: pydoc on the python 2.2 distribution fails to generate html doc(when option -w is given) Traceback follows Traceback (most recent call last): File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2101, in ? if __name__ == '__main__': cli() File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2070, in cli writedoc(arg) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1341, in writedoc object = locate(key, forceload) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1293, in locate parts = split(path, '.') File "/opt/dump/Python-2.2/Lib/string.py", line 117, in split return s.split(sep, maxsplit) AttributeError: 'module' object has no attribute 'split' On further investigation I was able to fix it. Attached is the fix. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 From noreply@sourceforge.net Fri Feb 8 06:24:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 07 Feb 2002 22:24:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Nobody/Anonymous (nobody) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n','\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. 
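The reporter's test program is an attachment and is not reproduced in the message. A walker along the following lines (a guess at its general shape, not the actual attachment; 'testmail.txt' is a placeholder for the attached test mail) prints the kind of TYPE / BOUNDARY / LINES trace quoted above and makes the 2.1.1-versus-2.2 difference easy to demonstrate:

    import mimetools
    import multifile

    def walk(mf, msg):
        # Recursively print each part's type, and either its nested boundary
        # or its body lines, mirroring the output shown in the report.
        if msg.getmaintype() == 'multipart':
            print 'TYPE:', msg.gettype()
            print 'BOUNDARY:', msg.getparam('boundary')
            mf.push(msg.getparam('boundary'))
            while mf.next():
                walk(mf, mimetools.Message(mf))
            mf.pop()
        else:
            print 'TYPE:', msg.gettype()
            print 'LINES:', mf.readlines()

    fp = open('testmail.txt')        # placeholder name for the attached test mail
    msg = mimetools.Message(fp)      # top-level headers
    walk(multifile.MultiFile(fp), msg)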
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470

From noreply@sourceforge.net Fri Feb 8 13:19:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 05:19:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-506679 ] Core dump subclassing long Message-ID: Bugs item #506679, was opened at 2002-01-21 14:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=506679&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Nicholas Socci (nsocci) Assigned to: Guido van Rossum (gvanrossum) Summary: Core dump subclassing long

Initial Comment: The following code dumps core:

class C(long): pass
c=C(-1)

Note if running interactively the core dump occurs when you quit the interpreter. No problem for positive numbers or if subclassing int or float.

Version information: Python 2.2 (#1, Dec 28 2001, 14:02:28) [GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-85)] on linux2

----------------------------------------------------------------------
>Comment By: Neal Norwitz (nnorwitz) Date: 2002-02-08 05:19
Message: Logged In: YES user_id=33168

There was a patch submitted for this problem: #514641. https://sourceforge.net/tracker/?func=detail&atid=305470&aid=514641&group_id=5470

----------------------------------------------------------------------
Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-21 15:24
Message: Logged In: YES user_id=6380

Alas, you're right. I suspect a calculation error in the size of the long.

----------------------------------------------------------------------
Comment By: Neal Norwitz (nnorwitz) Date: 2002-01-21 14:41
Message: Logged In: YES user_id=33168

Here's a stack trace.

#0 subtype_dealloc (self=0x818598c) at Objects/typeobject.c:349
#1 0x080c33cc in PyDict_SetItem (op=0x810b8fc, key=0x812a388, value=0x80ddc3c) at Objects/dictobject.c:373
#2 0x080c6189 in _PyModule_Clear (m=0x810b734) at Objects/moduleobject.c:124
#3 0x0808a5d1 in PyImport_Cleanup () at Python/import.c:284
#4 0x08092f1e in Py_Finalize () at Python/pythonrun.c:231
#5 0x080534a6 in Py_Main (argc=1, argv=0xbffff914) at Modules/main.c:376
#6 0x40087507 in __libc_start_main (main=0x8052d70
, argc=1, ubp_av=0xbffff914, init=0x8052194 <_init>, fini=0x80ca4c0 <_fini>, rtld_fini=0x4000dc14 <_dl_fini>, stack_end=0xbffff90c) at ../sysdeps/generic/libc-start.c:129 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=506679&group_id=5470 From noreply@sourceforge.net Fri Feb 8 15:14:39 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 07:14:39 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-510394 ] Add base classes for numeric types Message-ID: Feature Requests item #510394, was opened at 2002-01-29 14:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=510394&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Raymond Hettinger (rhettinger) Assigned to: Nobody/Anonymous (nobody) Summary: Add base classes for numeric types Initial Comment: Create a class hierarchy for numeric types (similar to the structure for exceptions) so that the following work: issubclass(int,numeric)==1 issubclass(int,real)==1 issubclass(complex,real)==0 isinstance( 3.14, real ) == 1 ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2002-02-08 07:14 Message: Logged In: YES user_id=292741 I don't think it's entirely clear whether ints are a subclass of floats or vice versa, or whether complex are a subclass of complex, or vice versa, but I'd be interested to hear reasons. B is a subclass of A makes sense when there are things you can do to the 'B' as if it were an 'A'. Clearly, ints (but not longs) are numerically a subSET of float and float is a subset of complex; I don't see how that relates to subclassing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=510394&group_id=5470 From noreply@sourceforge.net Fri Feb 8 15:29:32 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 07:29:32 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gregory Smith (gregsmith) Assigned to: Nobody/Anonymous (nobody) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? 
d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 15:34:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 07:34:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gregory Smith (gregsmith) >Assigned to: Guido van Rossum (gvanrossum) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 07:34 Message: Logged In: YES user_id=6380 Eh? Have you hacked your complex number implementation? When I try this, I get >>> c.real = 2.2 Traceback (most recent call last): File "", line 1, in ? TypeError: 'complex' object has only read-only attributes (assign to .real) >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 15:45:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 07:45:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gregory Smith (gregsmith) Assigned to: Guido van Rossum (gvanrossum) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? 
d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-08 07:45 Message: Logged In: YES user_id=6656 FWIW, I see this too: Python 2.2+ (#1, Jan 30 2002, 15:27:36) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 1j 1j >>> _.real += 1 >>> _ (1+1j) ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 07:34 Message: Logged In: YES user_id=6380 Eh? Have you hacked your complex number implementation? When I try this, I get >>> c.real = 2.2 Traceback (most recent call last): File "", line 1, in ? TypeError: 'complex' object has only read-only attributes (assign to .real) >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 15:48:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 07:48:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gregory Smith (gregsmith) Assigned to: Guido van Rossum (gvanrossum) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-08 07:48 Message: Logged In: NO Argh! Now I see it too. Can somebody investigate? --Guido (who keeps getting logged out from SF -- what's up with that?) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-08 07:45 Message: Logged In: YES user_id=6656 FWIW, I see this too: Python 2.2+ (#1, Jan 30 2002, 15:27:36) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 1j 1j >>> _.real += 1 >>> _ (1+1j) ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 07:34 Message: Logged In: YES user_id=6380 Eh? Have you hacked your complex number implementation? When I try this, I get >>> c.real = 2.2 Traceback (most recent call last): File "", line 1, in ? 
TypeError: 'complex' object has only read-only attributes (assign to .real) >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 18:42:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 10:42:23 -0800 Subject: [Python-bugs-list] [ python-Bugs-514928 ] curses error in w.border() Message-ID: Bugs item #514928, was opened at 2002-02-08 10:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 Category: Extension Modules Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Bastian Kleineidam (calvin) Assigned to: Nobody/Anonymous (nobody) Summary: curses error in w.border() Initial Comment: this fails on my Linux box: import curses w = curses.initscr() w.border(0) SystemError: old style getargs format uses new features Greetings, Calvin ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 From noreply@sourceforge.net Fri Feb 8 20:10:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 12:10:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gregory Smith (gregsmith) Assigned to: Guido van Rossum (gvanrossum) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- >Comment By: Gregory Smith (gregsmith) Date: 2002-02-08 12:10 Message: Logged In: YES user_id=292741 should this... -------------------- static PyMemberDef complex_members[] = { {"real", T_DOUBLE, offsetof(PyComplexObject, cval.real), 0, "the real part of a complex number"}, {"imag", T_DOUBLE, offsetof(PyComplexObject, cval.imag), 0, "the imaginary part of a complex number"}, {0}, }; ------- be this ----------?? static PyMemberDef complex_members[] = { {"real", T_DOUBLE, offsetof(PyComplexObject, cval.real), READONLY, "the real part of a complex number"}, {"imag", T_DOUBLE, offsetof(PyComplexObject, cval.imag), READONLY, "the imaginary part of a complex number"}, {0}, }; ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-08 07:48 Message: Logged In: NO Argh! Now I see it too. Can somebody investigate? --Guido (who keeps getting logged out from SF -- what's up with that?) 
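The dictionary symptom in the report is not specific to complex: any key object whose hash can change after it has been inserted behaves the same way. A pure-Python illustration of the mechanism (a toy class for demonstration, not the C-level change discussed in this thread):

    class MutableKey:
        def __init__(self, value):
            self.value = value
        def __hash__(self):
            return hash(self.value)
        def __cmp__(self, other):
            return cmp(self.value, other.value)

    m = MutableKey(1.0)
    d = {m: 'spam'}
    print d[m]              # 'spam'
    m.value = 2.2           # changes the hash out from under the dictionary
    try:
        print d[m]
    except KeyError, e:
        # The entry is still in d, but it is filed under the old hash value.
        print 'KeyError:', e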
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-08 07:45 Message: Logged In: YES user_id=6656 FWIW, I see this too: Python 2.2+ (#1, Jan 30 2002, 15:27:36) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 1j 1j >>> _.real += 1 >>> _ (1+1j) ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 07:34 Message: Logged In: YES user_id=6380 Eh? Have you hacked your complex number implementation? When I try this, I get >>> c.real = 2.2 Traceback (most recent call last): File "", line 1, in ? TypeError: 'complex' object has only read-only attributes (assign to .real) >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 20:51:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 12:51:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-514928 ] curses error in w.border() Message-ID: Bugs item #514928, was opened at 2002-02-08 10:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 Category: Extension Modules Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Bastian Kleineidam (calvin) Assigned to: Nobody/Anonymous (nobody) Summary: curses error in w.border() Initial Comment: this fails on my Linux box: import curses w = curses.initscr() w.border(0) SystemError: old style getargs format uses new features Greetings, Calvin ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 12:51 Message: Logged In: YES user_id=6380 Yes, it's a bug. It's been fixed in 2.2. Try this diff for 2.1.2: 566c566 < if (!PyArg_Parse(args,"|llllllll;ls,rs,ts,bs,tl,tr,bl,br", --- > if (!PyArg_ParseTuple(args,"|llllllll;ls,rs,ts,bs,tl,tr,bl,br", ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 From noreply@sourceforge.net Fri Feb 8 21:27:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:27:05 -0800 Subject: [Python-bugs-list] [ python-Bugs-514858 ] complex not entirely immutable Message-ID: Bugs item #514858, was opened at 2002-02-08 07:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Gregory Smith (gregsmith) Assigned to: Guido van Rossum (gvanrossum) Summary: complex not entirely immutable Initial Comment: .real and .imag of complex are writable, and really shouldn't be. Examples of badness: ------------------------ >>> sys.version '2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)]' >>> c=1+1j >>> d={c:'spam'} >>> c (1+1j) >>> d {(1+1j): 'spam'} >>> d[c] 'spam' >>> c.real=2.2 >>> c (2.2000000000000002+1j) >>> d {(2.2000000000000002+1j): 'spam'} >>> d[c] Traceback (most recent call last): File "", line 1, in ? 
d[c] KeyError: (2.2+1j) -------------------------------->>> c=1+1j >>> c2=c >>> c3=c >>> c is c2, c is c3 (1, 1) >>> c2 += 1 >>> c3.imag += 1 >>> c is c2, c is c3 (0, 1) >>> c,c2,c3 ((1+2j), (2+1j), (1+2j)) --------------------------- ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:27 Message: Logged In: YES user_id=6380 Thanks. I've checked this in. I don't understand why at first I couldn't reproduce it -- I guess my mind-bending powers are now also affecting computers. :-) ---------------------------------------------------------------------- Comment By: Gregory Smith (gregsmith) Date: 2002-02-08 12:10 Message: Logged In: YES user_id=292741 should this... -------------------- static PyMemberDef complex_members[] = { {"real", T_DOUBLE, offsetof(PyComplexObject, cval.real), 0, "the real part of a complex number"}, {"imag", T_DOUBLE, offsetof(PyComplexObject, cval.imag), 0, "the imaginary part of a complex number"}, {0}, }; ------- be this ----------?? static PyMemberDef complex_members[] = { {"real", T_DOUBLE, offsetof(PyComplexObject, cval.real), READONLY, "the real part of a complex number"}, {"imag", T_DOUBLE, offsetof(PyComplexObject, cval.imag), READONLY, "the imaginary part of a complex number"}, {0}, }; ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-08 07:48 Message: Logged In: NO Argh! Now I see it too. Can somebody investigate? --Guido (who keeps getting logged out from SF -- what's up with that?) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-08 07:45 Message: Logged In: YES user_id=6656 FWIW, I see this too: Python 2.2+ (#1, Jan 30 2002, 15:27:36) [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 1j 1j >>> _.real += 1 >>> _ (1+1j) ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 07:34 Message: Logged In: YES user_id=6380 Eh? Have you hacked your complex number implementation? When I try this, I get >>> c.real = 2.2 Traceback (most recent call last): File "", line 1, in ? 
TypeError: 'complex' object has only read-only attributes (assign to .real) >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514858&group_id=5470 From noreply@sourceforge.net Fri Feb 8 21:33:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:33:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-514928 ] curses error in w.border() Message-ID: Bugs item #514928, was opened at 2002-02-08 10:42 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 Category: Extension Modules Group: Python 2.1.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Bastian Kleineidam (calvin) >Assigned to: Guido van Rossum (gvanrossum) Summary: curses error in w.border() Initial Comment: this fails on my Linux box: import curses w = curses.initscr() w.border(0) SystemError: old style getargs format uses new features Greetings, Calvin ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:33 Message: Logged In: YES user_id=6380 FWIW, I've applied this fix to the 2.1 maintenance branch, in case someone ever decides to release a 2.1.3. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 12:51 Message: Logged In: YES user_id=6380 Yes, it's a bug. It's been fixed in 2.2. Try this diff for 2.1.2: 566c566 < if (!PyArg_Parse(args,"|llllllll;ls,rs,ts,bs,tl,tr,bl,br", --- > if (!PyArg_ParseTuple(args,"|llllllll;ls,rs,ts,bs,tl,tr,bl,br", ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514928&group_id=5470 From noreply@sourceforge.net Fri Feb 8 21:43:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:43:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) >Assigned to: Guido van Rossum (gvanrossum) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n','\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:43 Message: Logged In: YES user_id=6380 You're absolutely right -- this is a bug. Can you suggest a fix? We also need a test suite! 
Your test program is a beginning for that... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 From noreply@sourceforge.net Fri Feb 8 21:48:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:48:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-514627 ] pydoc fails to generate html doc Message-ID: Bugs item #514627, was opened at 2002-02-07 18:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Raj Kunjithapadam (mmaster25) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc fails to generate html doc Initial Comment: pydoc on the python 2.2 distribution fails to generate html doc(when option -w is given) Traceback follows Traceback (most recent call last): File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2101, in ? if __name__ == '__main__': cli() File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2070, in cli writedoc(arg) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1341, in writedoc object = locate(key, forceload) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1293, in locate parts = split(path, '.') File "/opt/dump/Python-2.2/Lib/string.py", line 117, in split return s.split(sep, maxsplit) AttributeError: 'module' object has no attribute 'split' On further investigation I was able to fix it. Attached is the fix. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:48 Message: Logged In: YES user_id=6380 I want to believe you, but I cannot reproduce the traceback. Can you tell me which command line you used to cause the traceback, and on which operating system? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 From noreply@sourceforge.net Fri Feb 8 21:49:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:49:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-514433 ] bsddb: enable dbopen (file==NULL) Message-ID: Bugs item #514433, was opened at 2002-02-07 11:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514433&group_id=5470 Category: Extension Modules Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: Sam Rushing (rushing) Assigned to: Nobody/Anonymous (nobody) Summary: bsddb: enable dbopen (file==NULL) Initial Comment: dbopen(): if the file argument is NULL, the library will use a temporary file. this is useful if you want that, or if you want to specify a large cache so that it never actually touches the disk. [i.e., in-memory hash/bt] I've done this by replacing the "s" with a "z" in the arg specs for the three open functions. Seems to work. -Sam ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:49 Message: Logged In: YES user_id=6380 Can you submit a patch to the patch manager? 
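For reference, the usage the request describes would presumably look like the sketch below once the arg-spec change ('s' to 'z') is applied; an unpatched 2.2 bsddb rejects a None filename, so treat this as a hypothetical example rather than current behaviour:

    import bsddb

    # Hypothetical post-patch usage: no filename, so the library uses a
    # temporary file -- with a large cache, effectively an in-memory hash.
    db = bsddb.hashopen(None, 'c')
    db['spam'] = 'eggs'
    print db['spam']
    db.close()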
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514433&group_id=5470

From noreply@sourceforge.net Fri Feb 8 21:58:52 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 13:58:52 -0800 Subject: [Python-bugs-list] [ python-Bugs-514345 ] pty.fork problem Message-ID: Bugs item #514345, was opened at 2002-02-07 08:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514345&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: pty.fork problem

Initial Comment: Subject: Python bugreport, pty.fork problem Date: Thu, 07 Feb 2002 07:30:08 -0800 From: Ronald Oussoren To: mal@lemburg.com

Sorry about the e-mail, but the bugtracker on SF doesn't accept my bugreport (I don't have a SF account). The following script never returns:

----------------- start of script -------------
import pty
import os
import sys

def test():
    pid, fd = pty.fork()
    if pid == 0:
        print "1"
        print "2"
        print "3"
    else:
        fp = os.fdopen(fd, 'r')
        ln = fp.readline()
        while ln:
            print '-->', ln
            ln = fp.readline()
        print '-->', ln

test()
------------------ end of script -----------------

It prints '-->1' to '-->3' and then blocks. I've tested this with python 2.1 on Solaris 8. On Solaris pty.open seems to use 'openpty' instead of 'os.openpty'. A 2-line change fixed the problem for me, but not for this demo-script: Close 'slave_fd' when pid != CHILD.

----------------------------------------------------------------------
>Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:58
Message: Logged In: YES user_id=6380

Assigning to Fred, who appears to have hacked this module before. I cannot reproduce the problem, but I see a different problem that may be hinting at the same issue: Under Python 2.1 or before, on Red Hat Linux 7.2, the test program for me prints this:

--> 1
--> 2
--> 3
-->

and exits. But with Python 2.2, it prints:

--> 1
--> 2
--> 3
Traceback (most recent call last):
  File "/tmp/tpty.py", line 20, in ?
    test()
  File "/tmp/tpty.py", line 17, in test
    ln = fp.readline()
IOError: [Errno 5] Input/output error

How can the difference be explained? I like the pre-2.2 behavior better!

----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514345&group_id=5470

From noreply@sourceforge.net Fri Feb 8 22:05:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 14:05:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) >Assigned to: Tim Peters (tim_one) Summary: isdir behavior getting odder on UNC path

Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed.
Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import os
>>> os.path.isdir('\\trainer\island')
0
>>> os.path.isdir('\\trainer\island\')
0
>>> os.path.isdir('\\trainer\island\\')
1
>>>

In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir.

----------------------------------------------------------------------
>Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05
Message: Logged In: YES user_id=6380

Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( )

----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470

From noreply@sourceforge.net Fri Feb 8 22:23:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 14:23:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-512871 ] Installation instructions are wrong Message-ID: Bugs item #512871, was opened at 2002-02-04 10:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512871&group_id=5470 Category: Installation Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jon Ribbens (jribbens) >Assigned to: Guido van Rossum (gvanrossum) Summary: Installation instructions are wrong

Initial Comment: The README file's installation instructions in Python 2.2 are wrong. The Modules/Setup file has changed considerably in purpose between Python 2.0 and Python 2.2, but the instructions are identical. There needs to be some wording to the effect that Modules/Setup is only for configuring where to look for libraries, etc, and actually everything that's commented out in Modules/Setup will be included anyway by some magic means.

----------------------------------------------------------------------
>Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:23
Message: Logged In: YES user_id=6380

I've edited the instructions. Hopefully the CVS version is to your liking.

----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=512871&group_id=5470

From noreply@sourceforge.net Fri Feb 8 22:27:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 14:27:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc

Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc

System is SuSE Linux 7.0

> uname -a
Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown
> gcc -v
Reading specs from /usr/lib/gcc-lib/i486-suse-linux/2.95.2/specs
gcc version 2.95.2 19991024 (release)

Attached is the complete build log.
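The comment history that follows eventually traces the crash to an undersized buffer in the UTF-8 encoder, and the arithmetic behind it is easy to check from Python: on a --enable-unicode=ucs4 build a character outside the BMP occupies one Py_UNICODE but four UTF-8 bytes, so a buffer sized at 3*len(u) can be overrun. A rough illustration (on a UCS-2 build the same character is stored as a surrogate pair and the per-unit ratio works out smaller):

    # One non-BMP character -> four UTF-8 bytes, so 3*len(u) is not enough.
    u = u'\U00010000' * 10
    s = u.encode('utf-8')
    print len(u), len(s)        # 10 40 on a UCS-4 build: 4 bytes per character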
---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. 
(Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. 
I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. 
But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. 
This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Fri Feb 8 23:17:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 15:17:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) >Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir(): def isdir(path): . """Test whether a path is a directory""" . try: . st = os.stat(path) . except os.error: . return 0 . return stat.S_ISDIR(st[stat.ST_MODE]) That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. 
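Since isdir() just reflects what the platform stat() accepts, a tiny probe run on the affected Win2K box would narrow this down quickly. In the sketch below, \\server\share is a placeholder for a share reachable from the test machine, and the raw string sidesteps the backslash-escaping ambiguity in the report's transcript:

    import os

    def probe(path):
        # Try the path as given plus one and two trailing backslashes,
        # mirroring the interactive session in the report.
        for candidate in (path, path + '\\', path + '\\\\'):
            print repr(candidate), os.path.isdir(candidate), os.path.isfile(candidate)

    probe(r'\\server\share')   # substitute a reachable UNC share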
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Fri Feb 8 23:33:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 15:33:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:33 Message: Logged In: YES user_id=31435 BTW, it occurs to me that this *may* be a consequence of whatever was done in 2.2 to encode/decode filename strings for system calls on Windows. I didn't follow that, and Mark may be the only one who fully understands the details. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir(): def isdir(path): . """Test whether a path is a directory""" . try: . st = os.stat(path) . except os.error: . return 0 . return stat.S_ISDIR(st[stat.ST_MODE]) That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. 
:-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Sat Feb 9 00:50:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 16:50:05 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515073 ] subtypable weak references Message-ID: Feature Requests item #515073, was opened at 2002-02-08 16:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: subtypable weak references Initial Comment: I want to be able to create a subtype of weakref. Motivation: I use a trick to non-intrusively keep one Python object (ward) alive as long as another one (custodian) is: I build a weak reference to the custodian whose kill function object holds a reference to the ward. I "leak" the weakref, but the function decrements its refcount so it will eventually die. This scheme costs an extra allocation for the function object, and because there is a function object at all, there's no opportunity to re-use the weakref (please document this part of the re-use behavior, BTW!) I also want the re-use algorithm to check for object and type equality so that I can avoid creating multiple such references. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 From noreply@sourceforge.net Sat Feb 9 00:51:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 16:51:53 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515074 ] Extended storage in new-style classes Message-ID: Feature Requests item #515074, was opened at 2002-02-08 16:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: Extended storage in new-style classes Initial Comment: I want to be able to reserve some storage in my own new-style class objects. Ideally the storage would fall before the variable-length section so I didn't have to worry about alignment issues. -Dave ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 From noreply@sourceforge.net Sat Feb 9 06:44:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 22:44:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-515137 ] metaclasses and 2.2 highlights Message-ID: Bugs item #515137, was opened at 2002-02-08 22:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515137&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: metaclasses and 2.2 highlights Initial Comment: The 2.2 highlights list at python.org doesn't mention metaclasses. 
Maybe they're considered part of the type/class unification but I didn't notice them til reading some old c.l.py articles about class design. I think they're an important enough addition that they should be mentioned in the highlights. --phr ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515137&group_id=5470 From noreply@sourceforge.net Sat Feb 9 07:33:29 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 08 Feb 2002 23:33:29 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core >Group: None >Status: Open >Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) >Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a= c: . print `a`, `b`, `c` ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! 
I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sat Feb 9 15:42:08 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 07:42:08 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. 
Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a= c: . print `a`, `b`, `c` ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). 
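For concreteness, here is one way to construct such a triple, assuming IEEE-754 doubles (53 significand bits) and the long-to-float coercion described above; the session is a sketch, not taken from the report:

>>> a = 2L**53
>>> b = 2.0**53
>>> c = 2L**53 + 1
>>> a == b, b == c, a == c    # c loses its low bit when coerced to float
(1, 1, 0)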
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sat Feb 9 21:40:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 13:40:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-515336 ] Method assignment inconsistency Message-ID: Bugs item #515336, was opened at 2002-02-09 13:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515336&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Oren Tirosh (oren-sf) Assigned to: Nobody/Anonymous (nobody) Summary: Method assignment inconsistency Initial Comment: Python code and builtin functions don't have a consistent view of new style objects when method attributes are assigned: class A(object): def __repr__(self): return 'abc' a = A() a.__repr__ = lambda:'123' Result: repr(a) != a.__repr__() The repr() function sees the original __repr__ method but reading the __repr__ attribute returns the assigned function. With classic objects both cases use the assigned method. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515336&group_id=5470 From noreply@sourceforge.net Sun Feb 10 02:54:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 18:54:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-514627 ] pydoc fails to generate html doc Message-ID: Bugs item #514627, was opened at 2002-02-07 18:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Raj Kunjithapadam (mmaster25) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc fails to generate html doc Initial Comment: pydoc on the python 2.2 distribution fails to generate html doc(when option -w is given) Traceback follows Traceback (most recent call last): File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2101, in ? if __name__ == '__main__': cli() File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2070, in cli writedoc(arg) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1341, in writedoc object = locate(key, forceload) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1293, in locate parts = split(path, '.') File "/opt/dump/Python-2.2/Lib/string.py", line 117, in split return s.split(sep, maxsplit) AttributeError: 'module' object has no attribute 'split' On further investigation I was able to fix it. Attached is the fix. ---------------------------------------------------------------------- >Comment By: Raj Kunjithapadam (mmaster25) Date: 2002-02-09 18:54 Message: Logged In: YES user_id=452533 It happened to me on Redhat Linux 7.1 when I ran pydoc using the -w option to generate html output. -w invokes the writedoc method and it expects an arg(filename) instead of the module object returned by imp_load. I have also submitted a fix for this. Thanks for following up on this quickly. --Raj ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:48 Message: Logged In: YES user_id=6380 I want to believe you, but I cannot reproduce the traceback. 
Can you tell me which command line you used to cause the traceback, and on which operating system? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 From noreply@sourceforge.net Sun Feb 10 03:03:47 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 19:03:47 -0800 Subject: [Python-bugs-list] [ python-Bugs-514627 ] pydoc fails to generate html doc Message-ID: Bugs item #514627, was opened at 2002-02-07 18:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Raj Kunjithapadam (mmaster25) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc fails to generate html doc Initial Comment: pydoc on the python 2.2 distribution fails to generate html doc(when option -w is given) Traceback follows Traceback (most recent call last): File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2101, in ? if __name__ == '__main__': cli() File "/opt/dump/Python-2.2/Lib/pydoc.py", line 2070, in cli writedoc(arg) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1341, in writedoc object = locate(key, forceload) File "/opt/dump/Python-2.2/Lib/pydoc.py", line 1293, in locate parts = split(path, '.') File "/opt/dump/Python-2.2/Lib/string.py", line 117, in split return s.split(sep, maxsplit) AttributeError: 'module' object has no attribute 'split' On further investigation I was able to fix it. Attached is the fix. ---------------------------------------------------------------------- >Comment By: Raj Kunjithapadam (mmaster25) Date: 2002-02-09 19:03 Message: Logged In: YES user_id=452533 the commandline was $pydoc -w I cannot give you the exact commandline and traceback as I am unable to get thru my VPN now. But I am pretty sure that was the commandline. Refer to [ #514628 ] bug in pydoc on python 2.2 release in Python - Patches. --Raj ---------------------------------------------------------------------- Comment By: Raj Kunjithapadam (mmaster25) Date: 2002-02-09 18:54 Message: Logged In: YES user_id=452533 It happened to me on Redhat Linux 7.1 when I ran pydoc using the -w option to generate html output. -w invokes the writedoc method and it expects an arg(filename) instead of the module object returned by imp_load. I have also submitted a fix for this. Thanks for following up on this quickly. --Raj ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:48 Message: Logged In: YES user_id=6380 I want to believe you, but I cannot reproduce the traceback. Can you tell me which command line you used to cause the traceback, and on which operating system? 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514627&group_id=5470 From noreply@sourceforge.net Sun Feb 10 04:54:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 20:54:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-515434 ] Very slow performance Message-ID: Bugs item #515434, was opened at 2002-02-09 20:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 Category: Regular Expressions Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Andy Miller (ajmiller) Assigned to: Fredrik Lundh (effbot) Summary: Very slow performance Initial Comment: While performance testing the RE module came across a case where it runs very slow (processing 4 or 5 lines of text per second on a P700 !!) See the attached program for details ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 From noreply@sourceforge.net Sun Feb 10 06:49:39 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 09 Feb 2002 22:49:39 -0800 Subject: [Python-bugs-list] [ python-Bugs-515434 ] Very slow performance Message-ID: Bugs item #515434, was opened at 2002-02-09 20:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 Category: Regular Expressions Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Andy Miller (ajmiller) Assigned to: Fredrik Lundh (effbot) Summary: Very slow performance Initial Comment: While performance testing the RE module came across a case where it runs very slow (processing 4 or 5 lines of text per second on a P700 !!) See the attached program for details ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-09 22:49 Message: Logged In: YES user_id=31435 Patterns with highly ambiguous subpatterns (like your \w+.+\d+) may run extremely slowly in Python, or Perl, or any other language with a backtracking regexp engine. See Friedl's "Mastering Regular Expressions" (O'Reilly) for an explanation. You can learn how to write zippy regexps faster than fundamental consequences of the matching algorithm can be wished away . ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 From noreply@sourceforge.net Sun Feb 10 18:57:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 10:57:05 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. 
Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-10 10:57 Message: Logged In: YES user_id=31435 Gary, exactly what do you mean by "older versions of Python"? That is, specifically which versions? The Microsoft stat() function is extremely picky about trailing (back)slashes. For example, if you have a directory c:/python, and pass "c:/python/" to the MS stat (), it claims no such thing exists. This isn't documented by MS, but that's how it works: a trailing (back)slash is required if and only if the path passed in "is a root". So MS stat() doesn't understand "/python/", and doesn't understand "d:" either. The former doesn't tolerate a (back)slash, while the latter requires one. This is impossible for people to keep straight, so after 1.5.2 Python started removing (back)slashes on its own to make MS stat() happy. The code currently leaves a trailing (back)slash alone if and only if one exists, and in addition of these obtains: 1) The (back)slash is the only character in the path. or 2) The path has 3 characters, and the middle one is a colon. UNC roots don't fit either of those, so do get one (back) slash chopped off. However, just as for any other roots, the MS stat() refuses to recognize them as valid unless they do have a trailing (back)slash. Indeed, the last time I applied a contributed patch to this code, I added a /* XXX UNC root drives should also be exempted? */ comment there. However, this explanation doesn't make sense unless by "older versions of Python" you mean nothing more recent than 1.5.2. If I'm understanding the source of the problem, it should exist in all Pythons after 1.5.2. So if you don't see the same problem in 1.6, 2.0 or 2.1, I'm on the wrong track. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:33 Message: Logged In: YES user_id=31435 BTW, it occurs to me that this *may* be a consequence of whatever was done in 2.2 to encode/decode filename strings for system calls on Windows. I didn't follow that, and Mark may be the only one who fully understands the details. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir(): def isdir(path): . """Test whether a path is a directory""" . try: . st = os.stat(path) . except os.error: . return 0 . return stat.S_ISDIR(st[stat.ST_MODE]) That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. 
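The trailing-(back)slash heuristic described in the 2002-02-10 comment above, rendered as a Python sketch (the real logic lives in the C wrapper around stat(), and the function name here is made up):

def chops_one_trailing_slash(path):
    # Sketch of the rule described above: a trailing (back)slash is kept
    # only when it is the whole path ("\" or "/") or completes a drive
    # root such as "c:\"; anything else, including a UNC root like
    # "\\server\share\", has one trailing (back)slash chopped off before
    # the MS stat() call.
    if not path or path[-1] not in '\\/':
        return 0                      # no trailing (back)slash at all
    is_exempt = len(path) == 1 or (len(path) == 3 and path[1] == ':')
    return not is_exempt

# e.g. chops_one_trailing_slash('c:\\') == 0, but
#      chops_one_trailing_slash('\\\\trainer\\island\\') == 1  (hence the report)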
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Sun Feb 10 18:59:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 10:59:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-500508 ] problems printing multipart MIME msg Message-ID: Bugs item #500508, was opened at 2002-01-07 10:46 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=500508&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Barry Warsaw (bwarsaw) Summary: problems printing multipart MIME msg Initial Comment: Got this from python-help. A user there is trying to use the email module to display multipart MIME messages generated by mutt. Attached is a specific mail message. Using this simple script >>> import email >>> f = open("spam1") >>> msg=email.message_from_file(f) >>> print msg generates this traceback Traceback (most recent call last): File "", line 1, in ? File "/usr/local/lib/python2.2/email/Message.py", line 49, in __str__ return self.as_string(unixfrom=1) File "/usr/local/lib/python2.2/email/Message.py", line 59, in as_string g(self, unixfrom=unixfrom) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 243, in _handle_multipart g(part, unixfrom=0) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 310, in _handle_message g(msg.get_payload(), unixfrom=0) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 240, in _handle_multipart for part in msg.get_payload(): File "/usr/local/lib/python2.2/email/Message.py", line 151, in __getitem__ return self.get(name) File "/usr/local/lib/python2.2/email/Message.py", line 214, in get name = name.lower() AttributeError: 'int' object has no attribute 'lower' ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-10 10:59 Message: Logged In: NO I'm also seeing this problem, with a newly generated multipart mime message built up using email.MIMEBase / email.MIMEText. For me it only occurs intermittently, but I'm using external data to build my messages and haven't tried a fixed set of input yet. 
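For reference, a minimal construction of the sort the anonymous commenter describes, using the email.MIMEBase / email.MIMEText classes named above; the module layout and the attach()/__str__ behaviour are assumed to be those of the 2.2-era email package, and it is not certain that this exact message reproduces the intermittent failure:

from email.MIMEBase import MIMEBase
from email.MIMEText import MIMEText

outer = MIMEBase('multipart', 'mixed', boundary='BOUNDARY')   # boundary chosen arbitrarily
outer.attach(MIMEText('Attached Content.\n'))
print outer    # goes through __str__ -> as_string(unixfrom=1), the path shown in the traceback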
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=500508&group_id=5470 From noreply@sourceforge.net Sun Feb 10 20:13:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 12:13:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-515434 ] Very slow performance Message-ID: Bugs item #515434, was opened at 2002-02-09 20:54 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 Category: Regular Expressions Group: Not a Bug Status: Open Resolution: None Priority: 5 Submitted By: Andy Miller (ajmiller) Assigned to: Fredrik Lundh (effbot) Summary: Very slow performance Initial Comment: While performance testing the RE module came across a case where it runs very slow (processing 4 or 5 lines of text per second on a P700 !!) See the attached program for details ---------------------------------------------------------------------- >Comment By: Andy Miller (ajmiller) Date: 2002-02-10 12:13 Message: Logged In: YES user_id=447946 I certainly agree that regular expressions can be fine tuned and some run faster than others - unfortunately the case in point runs very fast in Perl (the problem was originally found when replacing some Perl functionality with Python !) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-09 22:49 Message: Logged In: YES user_id=31435 Patterns with highly ambiguous subpatterns (like your \w+.+\d+) may run extremely slowly in Python, or Perl, or any other language with a backtracking regexp engine. See Friedl's "Mastering Regular Expressions" (O'Reilly) for an explanation. You can learn how to write zippy regexps faster than fundamental consequences of the matching algorithm can be wished away . ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515434&group_id=5470 From noreply@sourceforge.net Sun Feb 10 21:33:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 13:33:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-500508 ] problems printing multipart MIME msg Message-ID: Bugs item #500508, was opened at 2002-01-07 10:46 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=500508&group_id=5470 Category: Python Library >Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Skip Montanaro (montanaro) Assigned to: Barry Warsaw (bwarsaw) Summary: problems printing multipart MIME msg Initial Comment: Got this from python-help. A user there is trying to use the email module to display multipart MIME messages generated by mutt. Attached is a specific mail message. Using this simple script >>> import email >>> f = open("spam1") >>> msg=email.message_from_file(f) >>> print msg generates this traceback Traceback (most recent call last): File "", line 1, in ? 
File "/usr/local/lib/python2.2/email/Message.py", line 49, in __str__ return self.as_string(unixfrom=1) File "/usr/local/lib/python2.2/email/Message.py", line 59, in as_string g(self, unixfrom=unixfrom) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 243, in _handle_multipart g(part, unixfrom=0) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 310, in _handle_message g(msg.get_payload(), unixfrom=0) File "/usr/local/lib/python2.2/email/Generator.py", line 83, in __call__ self._write(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 104, in _write self._dispatch(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 134, in _dispatch meth(msg) File "/usr/local/lib/python2.2/email/Generator.py", line 240, in _handle_multipart for part in msg.get_payload(): File "/usr/local/lib/python2.2/email/Message.py", line 151, in __getitem__ return self.get(name) File "/usr/local/lib/python2.2/email/Message.py", line 214, in get name = name.lower() AttributeError: 'int' object has no attribute 'lower' ---------------------------------------------------------------------- >Comment By: Barry Warsaw (bwarsaw) Date: 2002-02-10 13:33 Message: Logged In: YES user_id=12800 Known bug related to declared multipart/*'s with just a single part. Fixed in CVS (candidate for 2.2.1); also fixed in the email/mimelib distro. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-10 10:59 Message: Logged In: NO I'm also seeing this problem, with a newly generated multipart mime message built up using email.MIMEBase / email.MIMEText. For me it only occurs intermittently, but I'm using external data to build my messages and haven't tried a fixed set of input yet. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=500508&group_id=5470 From noreply@sourceforge.net Mon Feb 11 04:20:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 20:20:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Guido van Rossum (gvanrossum) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. 
When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n','\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. ---------------------------------------------------------------------- >Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:20 Message: Logged In: YES user_id=198518 The problem is in _readline(). Since it changes self.level and self.last, they apply to the next line, not the current one. I'll upload a patch that seems to work. The test program and test mail aren't mine. They belong to the person who reported the bug to python-help. I'm sure that he'd be glad to have them used as part of the test suite but I'll mail him to make absolutely certain. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:43 Message: Logged In: YES user_id=6380 You're absolutely right -- this is a bug. Can you suggest a fix? We also need a test suite! Your test program is a beginning for that... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 From noreply@sourceforge.net Mon Feb 11 04:47:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 20:47:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Guido van Rossum (gvanrossum) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n','\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. ---------------------------------------------------------------------- >Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:47 Message: Logged In: YES user_id=198518 Sorry, I think my analysis is right but the patch is flawed and I've deleted it. I'll try to have another look at it tomorrow. 
---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:20 Message: Logged In: YES user_id=198518 The problem is in _readline(). Since it changes self.level and self.last, they apply to the next line, not the current one. I'll upload a patch that seems to work. The test program and test mail aren't mine. They belong to the person who reported the bug to python-help. I'm sure that he'd be glad to have them used as part of the test suite but I'll mail him to make absolutely certain. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:43 Message: Logged In: YES user_id=6380 You're absolutely right -- this is a bug. Can you suggest a fix? We also need a test suite! Your test program is a beginning for that... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 From noreply@sourceforge.net Mon Feb 11 04:49:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 20:49:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Guido van Rossum (gvanrossum) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n','\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. ---------------------------------------------------------------------- >Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:49 Message: Logged In: YES user_id=198518 It seems that SourceForge won't let me delete the patch. Please ignore it. ---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:47 Message: Logged In: YES user_id=198518 Sorry, I think my analysis is right but the patch is flawed and I've deleted it. I'll try to have another look at it tomorrow. ---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:20 Message: Logged In: YES user_id=198518 The problem is in _readline(). Since it changes self.level and self.last, they apply to the next line, not the current one. I'll upload a patch that seems to work. The test program and test mail aren't mine. They belong to the person who reported the bug to python-help. I'm sure that he'd be glad to have them used as part of the test suite but I'll mail him to make absolutely certain. 
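The reporter's test program is not reproduced here, but a rough sketch of the kind of multifile/mimetools walker that prints the TYPE/BOUNDARY/LINES output quoted above may help show where level and last come into play; names such as walk() and 'testmail' are placeholders:

import mimetools, multifile

def walk(fp):
    # fp is the open message file, or a multifile.MultiFile positioned
    # at the start of a nested part.
    msg = mimetools.Message(fp)
    print 'TYPE:', msg.gettype()
    if msg.getmaintype() != 'multipart':
        print 'LINES:', fp.readlines()
        return
    boundary = msg.getparam('boundary')
    print 'BOUNDARY:', boundary
    if isinstance(fp, multifile.MultiFile):
        mf = fp
    else:
        mf = multifile.MultiFile(fp)
    mf.push(boundary)
    while mf.next():     # next() relies on _readline(), which is where
        walk(mf)         # self.level and self.last get updated
    mf.pop()

walk(open('testmail'))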
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:43 Message: Logged In: YES user_id=6380 You're absolutely right -- this is a bug. Can you suggest a fix? We also need a test suite! Your test program is a beginning for that... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 From noreply@sourceforge.net Mon Feb 11 05:38:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 21:38:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-515745 ] Missing docs for module knee Message-ID: Bugs item #515745, was opened at 2002-02-10 21:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Missing docs for module knee Initial Comment: 3.21.1 in the lib manual sez: "A more complete example that implements hierarchical module names and includes a reload() function can be found in the standard module knee (which is intended as an example only -- don't rely on any part of it being a standard interface)." ...but knee is not in the module list, though it appears to be in the distribution. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 From noreply@sourceforge.net Mon Feb 11 05:55:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 10 Feb 2002 21:55:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-515751 ] Missing docs for module imputil Message-ID: Bugs item #515751, was opened at 2002-02-10 21:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515751&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Missing docs for module imputil Initial Comment: The summary says it all. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515751&group_id=5470 From noreply@sourceforge.net Mon Feb 11 08:03:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 00:03:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Gary Herron (herron) Date: 2002-02-11 00:03 Message: Logged In: YES user_id=395736 Sorry, but I don't have much of an idea which versions I was refering to. I picked up the idea of an extra backslashes in a faq from a web site, the search for which I can't seem to reproduce. It claimed one backslash was enough, but did not specify a python version. It *might* have been old enough to be pre 1.5.2. The two versions I can test are 1.5.1 (where one backslash is enough) and 2.2 (where two are required). This seems to me to support (or at least not contradict) Tim's hypothesis. Gary ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-10 10:57 Message: Logged In: YES user_id=31435 Gary, exactly what do you mean by "older versions of Python"? That is, specifically which versions? The Microsoft stat() function is extremely picky about trailing (back)slashes. For example, if you have a directory c:/python, and pass "c:/python/" to the MS stat (), it claims no such thing exists. This isn't documented by MS, but that's how it works: a trailing (back)slash is required if and only if the path passed in "is a root". So MS stat() doesn't understand "/python/", and doesn't understand "d:" either. The former doesn't tolerate a (back)slash, while the latter requires one. This is impossible for people to keep straight, so after 1.5.2 Python started removing (back)slashes on its own to make MS stat() happy. The code currently leaves a trailing (back)slash alone if and only if one exists, and in addition of these obtains: 1) The (back)slash is the only character in the path. or 2) The path has 3 characters, and the middle one is a colon. UNC roots don't fit either of those, so do get one (back) slash chopped off. However, just as for any other roots, the MS stat() refuses to recognize them as valid unless they do have a trailing (back)slash. Indeed, the last time I applied a contributed patch to this code, I added a /* XXX UNC root drives should also be exempted? */ comment there. However, this explanation doesn't make sense unless by "older versions of Python" you mean nothing more recent than 1.5.2. If I'm understanding the source of the problem, it should exist in all Pythons after 1.5.2. So if you don't see the same problem in 1.6, 2.0 or 2.1, I'm on the wrong track. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:33 Message: Logged In: YES user_id=31435 BTW, it occurs to me that this *may* be a consequence of whatever was done in 2.2 to encode/decode filename strings for system calls on Windows. I didn't follow that, and Mark may be the only one who fully understands the details. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir(): def isdir(path): . """Test whether a path is a directory""" . try: . st = os.stat(path) . 
except os.error: . return 0 . return stat.S_ISDIR(st[stat.ST_MODE]) That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Mon Feb 11 08:28:03 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 00:28:03 -0800 Subject: [Python-bugs-list] [ python-Bugs-513572 ] isdir behavior getting odder on UNC path Message-ID: Bugs item #513572, was opened at 2002-02-05 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Gary Herron (herron) Assigned to: Mark Hammond (mhammond) Summary: isdir behavior getting odder on UNC path Initial Comment: It's been documented in earlier version of Python on windows that os.path.isdir returns true on a UNC directory only if there was an extra backslash at the end of the argument. In Python2.2 (at least on windows 2000) it appears that *TWO* extra backslashes are needed. Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> >>> import os >>> os.path.isdir('\\trainer\island') 0 >>> os.path.isdir('\\trainer\island\') 0 >>> os.path.isdir('\\trainer\island\\') 1 >>> In a perfect world, the first call should return 1, but never has. In older versions of python, the second returned 1, but no longer. In limited tests, appending 2 or more backslashes to the end of any pathname returns the correct answer in both isfile and isdir. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-11 00:28 Message: Logged In: YES user_id=31435 Mark, what do you think about a different approach here? 1. Leave the string alone and *try* stat. If it succeeds, great, we're done. 2. Else if the string doesn't have a trailing (back)slash, append one and try again. Win or lose, that's the end. 3. Else the string does have a trailing (back)slash. If the string has more than one character, strip a trailing (back)slash and try again. Win or lose, that's the end. 4. Else the string is a single (back)slash, yet stat() failed. This shouldn't be possible. It doubles the number of stats in cases where the file path doesn't correspond to anything that exists. OTOH, MS's (back)slash rules are undocumented and incomprehensible (read their implementation of stat() for the whole truth -- we're not out-thinking lots of it now, and the gimmick added after 1.5.2 to out-think part of it is at least breaking Gary's thoroughly sensible use). 
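Spelled out in Python, the proposed control flow looks like the sketch below (the real change would have to go into the C-level stat wrapper, and the function name is made up):

import os

def stat_with_retry(path):
    try:
        return os.stat(path)            # 1. try the string exactly as given
    except os.error:
        pass
    if path[-1:] not in '\\/':
        return os.stat(path + '\\')     # 2. no trailing (back)slash: append one, last try
    if len(path) > 1:
        return os.stat(path[:-1])       # 3. trailing (back)slash: strip it, last try
    # 4. a lone (back)slash for which stat() failed; "shouldn't be possible"
    raise os.error, 'stat failed for %r' % path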
---------------------------------------------------------------------- Comment By: Gary Herron (herron) Date: 2002-02-11 00:03 Message: Logged In: YES user_id=395736 Sorry, but I don't have much of an idea which versions I was refering to. I picked up the idea of an extra backslashes in a faq from a web site, the search for which I can't seem to reproduce. It claimed one backslash was enough, but did not specify a python version. It *might* have been old enough to be pre 1.5.2. The two versions I can test are 1.5.1 (where one backslash is enough) and 2.2 (where two are required). This seems to me to support (or at least not contradict) Tim's hypothesis. Gary ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-10 10:57 Message: Logged In: YES user_id=31435 Gary, exactly what do you mean by "older versions of Python"? That is, specifically which versions? The Microsoft stat() function is extremely picky about trailing (back)slashes. For example, if you have a directory c:/python, and pass "c:/python/" to the MS stat (), it claims no such thing exists. This isn't documented by MS, but that's how it works: a trailing (back)slash is required if and only if the path passed in "is a root". So MS stat() doesn't understand "/python/", and doesn't understand "d:" either. The former doesn't tolerate a (back)slash, while the latter requires one. This is impossible for people to keep straight, so after 1.5.2 Python started removing (back)slashes on its own to make MS stat() happy. The code currently leaves a trailing (back)slash alone if and only if one exists, and in addition of these obtains: 1) The (back)slash is the only character in the path. or 2) The path has 3 characters, and the middle one is a colon. UNC roots don't fit either of those, so do get one (back) slash chopped off. However, just as for any other roots, the MS stat() refuses to recognize them as valid unless they do have a trailing (back)slash. Indeed, the last time I applied a contributed patch to this code, I added a /* XXX UNC root drives should also be exempted? */ comment there. However, this explanation doesn't make sense unless by "older versions of Python" you mean nothing more recent than 1.5.2. If I'm understanding the source of the problem, it should exist in all Pythons after 1.5.2. So if you don't see the same problem in 1.6, 2.0 or 2.1, I'm on the wrong track. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:33 Message: Logged In: YES user_id=31435 BTW, it occurs to me that this *may* be a consequence of whatever was done in 2.2 to encode/decode filename strings for system calls on Windows. I didn't follow that, and Mark may be the only one who fully understands the details. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 15:17 Message: Logged In: YES user_id=31435 Here's the implementation of Windows isdir(): def isdir(path): . """Test whether a path is a directory""" . try: . st = os.stat(path) . except os.error: . return 0 . return stat.S_ISDIR(st[stat.ST_MODE]) That is, we return whatever Microsoft's stat() tells us, and our code is the same in 2.2 as in 2.1. I don't have Win2K here, and my Win98 box isn't on a Windows network so I can't even try real UNC paths here. Reassigning to MarkH in case he can do better on either count. 
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 14:05 Message: Logged In: YES user_id=6380 Tim, I hate to do this to you, but you're the only person I trust with researching this. (My laptop is currently off the net again. :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513572&group_id=5470 From noreply@sourceforge.net Mon Feb 11 09:54:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 01:54:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-227361 ] httplib problem with '100 Continue' Message-ID: Bugs item #227361, was opened at 2001-01-02 18:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=227361&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Doug Fort (dougfort) Assigned to: Greg Stein (gstein) Summary: httplib problem with '100 Continue' Initial Comment: I believe there is a bug in httplib IIS 4 and 5 are subject to send an unsolicited result code of '100 Continue' with a couple of headers and a blank line before sending '302 Object Moved'. The 100 response is totally worthless and should be ignored. Unfortunately, httplib.HTTPConnection is unwilling to go back and read more headers when it already has a response object. I was able to get past this with the following kludge: while 1: response = self._client.getresponse() if response.status != 100: break # 2000-12-30 djf -- drop bogus 100 response # by kludging httplib self._client._HTTPConnection__state = httplib._CS_REQ_SENT self._client._HTTPConnection__response = None ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-11 01:54 Message: Logged In: NO This may not be a problem at all, depending on how the authors of httplib intended to process this header. I haven't read their spec so I don't know what they intended. Here is the section on response code 100 from RFC 2616 (HTTP 1.1) so you can make up your own mind: 10.1.1 100 Continue ####################################################### The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. See section 8.2.3 for detailed discussion of the use and handling of this status code. ####################################################### If you take a look at section 8.2.3, you will find some very good reasons why this header responds as it does. You might also be able to solve your problem just by reading these to section of the spec. ---------------------------------------------------------------------- Comment By: Doug Fort (dougfort) Date: 2001-01-06 12:14 Message: I'm not sure httplib should know anything about the actual status. Right now it is elegantly decoupled from the content it handles. Perhaps just a 'discardresponse()' function. BTW, I've had very good results with the HTTP 1.1 functionality in general. This is a small nit. 
----------------------------------------------------------------------
Comment By: Greg Stein (gstein) Date: 2001-01-06 12:04 Message: Agreed -- this is a problem in httplib. I was hoping to get the "chewing up" of 100 (Continue) responses into httplib before the 2.0 release. It should be possible to do this in HTTPResponse.begin()
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=227361&group_id=5470

From noreply@sourceforge.net Mon Feb 11 10:27:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 02:27:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-515830 ] macostools.mkalias doesnt work for folde Message-ID: Bugs item #515830, was opened at 2002-02-11 02:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515830&group_id=5470 Category: Macintosh Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Jack Jansen (jackjansen) Summary: macostools.mkalias doesnt work for folde Initial Comment: macostools.mkalias() fails if the source is a folder instead of a file.
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515830&group_id=5470

From noreply@sourceforge.net Mon Feb 11 15:16:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 07:16:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-515943 ] searching for data with \0 in mmapf Message-ID: Bugs item #515943, was opened at 2002-02-11 07:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515943&group_id=5470 Category: Extension Modules Group: None Status: Open Resolution: None Priority: 5 Submitted By: Grzegorz Makarewicz (makaron) Assigned to: Nobody/Anonymous (nobody) Summary: searching for data with \0 in mmapf Initial Comment: Searching for values with embedded nulls returns incorrect results, e.g.:

import mmap, os
fp = open('foo', 'w+')
fp.write('foo')
fp.write('\1'*(mmap.PAGESIZE-3))
data = 'foo\0data'
fp.write(data)
fp.write('\1'*(mmap.PAGESIZE-len(data)))
m = mmap.mmap(fp.fileno(), 2*mmap.PAGESIZE)
fp.close()
print 'data found at:', m.find(data)
m.close()
os.unlink('foo')

Here m.find() returns 0 where mmap.PAGESIZE is required.
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515943&group_id=5470

From noreply@sourceforge.net Mon Feb 11 15:58:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 07:58:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-227361 ] httplib problem with '100 Continue' Message-ID: Bugs item #227361, was opened at 2001-01-02 18:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=227361&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Doug Fort (dougfort) Assigned to: Greg Stein (gstein) Summary: httplib problem with '100 Continue' Initial Comment: I believe there is a bug in httplib. IIS 4 and 5 are subject to send an unsolicited result code of '100 Continue' with a couple of headers and a blank line before sending '302 Object Moved'. The 100 response is totally worthless and should be ignored.
Unfortunately, httplib.HTTPConnection is unwilling to go back and read more headers when it already has a response object. I was able to get past this with the following kludge: while 1: response = self._client.getresponse() if response.status != 100: break # 2000-12-30 djf -- drop bogus 100 response # by kludging httplib self._client._HTTPConnection__state = httplib._CS_REQ_SENT self._client._HTTPConnection__response = None ---------------------------------------------------------------------- Comment By: Jens B. Jorgensen (jensbjorgensen) Date: 2002-02-11 07:58 Message: Logged In: YES user_id=67930 Whether or not the library transparently consumes 100 responses (which I believe it should) or not there is no question the current behavior represents a bug. When a 100 response is received the connection get's "stuck" because it believes it has already read a response and refuses to read another until another request is sent. Once you hit this with the HTTPConnection you cannot do anything with it unless you modify its internal state data. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-11 01:54 Message: Logged In: NO This may not be a problem at all, depending on how the authors of httplib intended to process this header. I haven't read their spec so I don't know what they intended. Here is the section on response code 100 from RFC 2616 (HTTP 1.1) so you can make up your own mind: 10.1.1 100 Continue ####################################################### The client SHOULD continue with its request. This interim response is used to inform the client that the initial part of the request has been received and has not yet been rejected by the server. The client SHOULD continue by sending the remainder of the request or, if the request has already been completed, ignore this response. The server MUST send a final response after the request has been completed. See section 8.2.3 for detailed discussion of the use and handling of this status code. ####################################################### If you take a look at section 8.2.3, you will find some very good reasons why this header responds as it does. You might also be able to solve your problem just by reading these to section of the spec. ---------------------------------------------------------------------- Comment By: Doug Fort (dougfort) Date: 2001-01-06 12:14 Message: I'm not sure httplib should know anything about the actual status. Right now it is elegantly decoupled from the content it handles. Perhaps just a 'discardresponse()' function. BTW, I've had very good results with the HTTP 1.1 functionality in general. This is a small nit. ---------------------------------------------------------------------- Comment By: Greg Stein (gstein) Date: 2001-01-06 12:04 Message: Agreed -- this is a problem in httplib. I was hoping to get the "chewing up" of 100 (Continue) responses into httplib before the 2.0 release. 
It should be possible to do this in HTTPResponse.begin()
----------------------------------------------------------------------
You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=227361&group_id=5470

From noreply@sourceforge.net Mon Feb 11 17:06:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 09:06:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse-linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log.
----------------------------------------------------------------------
>Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and to add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible; it is more than compensated for by the reduction of calls to the memory management system, and even yields a significant gain.
----------------------------------------------------------------------
Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two-pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report.
----------------------------------------------------------------------
Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations.
----------------------------------------------------------------------
Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion.
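The dispute in this thread is between overallocating the output buffer and counting the exact size upfront. The actual patches are C changes to Objects/unicodeobject.c and are not shown here; as a rough illustration, the "count first" strategy amounts to a pass like the following before a single exact-size allocation:

def utf8_size(codepoints):
    # Count the bytes the UTF-8 encoding will need: 1, 2, 3 or 4 bytes per
    # code point.  The 4-byte case only arises for characters beyond the
    # BMP (UCS-4 builds, or after combining a surrogate pair).
    n = 0
    for cp in codepoints:
        if cp < 0x80:
            n += 1
        elif cp < 0x800:
            n += 2
        elif cp < 0x10000:
            n += 3
        else:
            n += 4
    return n

assert utf8_size([0x41, 0x20AC, 0x10000]) == 1 + 3 + 4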
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). 
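On the surrogate point raised above: on a UCS-2 build, a character outside the BMP arrives as a high/low surrogate pair, and a correct encoder has to combine the pair into one code point (which then needs four UTF-8 bytes) rather than emitting each half on its own. A small illustration of the arithmetic, not the codec itself (combine_surrogates is a made-up helper):

def combine_surrogates(hi, lo):
    # Combine a UTF-16 high/low surrogate pair into a single code point.
    assert 0xD800 <= hi <= 0xDBFF and 0xDC00 <= lo <= 0xDFFF
    return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)

assert combine_surrogates(0xD800, 0xDC00) == 0x10000
assert combine_surrogates(0xD834, 0xDD1E) == 0x1D11E  # MUSICAL SYMBOL G CLEF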
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. 
What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? 
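A toy model of the failure mode Tim describes above, just to make it concrete. This is plain Python, not obmalloc.c; the Pool class, the 40-byte chunk size and the "link stored in the first word of each free chunk" layout are the simplified picture from the comment, nothing more:

import struct

CHUNK = 40
FREELIST_END = 0xFFFFFFFF

class Pool:
    def __init__(self, nchunks):
        self.mem = bytearray(CHUNK * nchunks)
        self.freeblock = None                # offset of the first free chunk

    def free(self, off):
        # The free list is threaded through the chunks themselves: the old
        # head is stored in the first 4 bytes of the newly freed chunk.
        head = FREELIST_END if self.freeblock is None else self.freeblock
        struct.pack_into('<I', self.mem, off, head)
        self.freeblock = off

    def malloc(self):
        off = self.freeblock
        nxt = struct.unpack_from('<I', self.mem, off)[0]
        self.freeblock = None if nxt == FREELIST_END else nxt   # may be garbage
        return off

pool = Pool(4)
pool.free(CHUNK)                             # chunk at offset 40 is on the free list
pool.mem[0:CHUNK + 1] = b'x' * (CHUNK + 1)   # off-by-one store past the end of chunk 0
pool.malloc()                                # works, but follows the clobbered link...
print(pool.freeblock)                        # ...bogus head; a later malloc blows up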
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 17:49:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 09:49:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and too add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and to offer significant, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. 
This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. 
This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. 
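The alternative argued for above is the opposite trade: allocate the 4*size worst case once, drop the per-character capacity checks, assert at the end, and trim a single time. A rough Python model of that shape (the real code is the C encoder; encode_utf8_overallocate is illustrative only):

def encode_utf8_overallocate(codepoints):
    buf = bytearray(4 * len(codepoints))     # worst case: 4 bytes per code point
    n = 0
    for cp in codepoints:
        if cp < 0x80:
            buf[n] = cp
            n += 1
        elif cp < 0x800:
            buf[n] = 0xC0 | (cp >> 6)
            buf[n + 1] = 0x80 | (cp & 0x3F)
            n += 2
        elif cp < 0x10000:
            buf[n] = 0xE0 | (cp >> 12)
            buf[n + 1] = 0x80 | ((cp >> 6) & 0x3F)
            buf[n + 2] = 0x80 | (cp & 0x3F)
            n += 3
        else:
            buf[n] = 0xF0 | (cp >> 18)
            buf[n + 1] = 0x80 | ((cp >> 12) & 0x3F)
            buf[n + 2] = 0x80 | ((cp >> 6) & 0x3F)
            buf[n + 3] = 0x80 | (cp & 0x3F)
            n += 4
    assert n <= len(buf)                     # the requested assert, in spirit
    return bytes(buf[:n])                    # trim the one overallocation

assert encode_utf8_overallocate([0x41, 0x20AC]) == u'A\u20ac'.encode('utf-8')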
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". 
If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 18:04:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 10:04:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and too add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and to offer significant, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. 
The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. 
I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. 
More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? 
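On the timing disagreement in this thread: the numbers depend heavily on how they are measured, which is why the comments argue over time.clock() versus time.time() and over loop counts. A bare-bones harness in that spirit; time_utf8.py itself is not reproduced here, bench is a made-up helper, and the input file name is a placeholder:

import time

def bench(u, loops=100000):
    # Encode the same unicode string `loops` times and report both wall-clock
    # time (time.time) and CPU time (time.clock, the call used in the thread).
    t0, c0 = time.time(), time.clock()
    for _ in range(loops):
        u.encode('utf-8')
    return time.time() - t0, time.clock() - c0

if __name__ == '__main__':
    u = open('ACKS.utf8').read().decode('utf-8')   # placeholder input file
    wall, cpu = bench(u)
    print('wall %.3fs  cpu %.3fs' % (wall, cpu))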
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 18:42:50 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 10:42:50 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 10:42 Message: Logged In: YES user_id=38388 Ok, with 100000 loops and time.clock() I get: 4.690 - 4.710 without your patch, 9.560 - 9.570 with your patch (again, without pymalloc and the same compiler/machine on SuSE 7.1). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. 
I've revised it to fit the current implementation, and to add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two-pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. 
If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is a common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it by default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (at least if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be that test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc -- on subsequent mallocs -- finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup -- but not a steady-state -- condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 0x02 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. 
Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). 
At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. 
Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 20:06:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 12:06:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-11 12:06 Message: Logged In: YES user_id=31435 time.time() sucks for benchmarking on Windows (updates at about 18Hz). Running the test as-is, MSVC6 and Win98SE, it's 1.3 seconds with current CVS, and 1.7 with unicode3.diff. The quantization error in Windows time.time() is > 0.05 seconds, so no point pretending there are 3 significant digits there; luckily(?), it's apparent there's a major difference with just 2 digits. MAL, are you still using an AMD box? In a decade, nobody else has ever reproduced the timing results you see . 
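Since the attached time_utf8.py is not visible in this archive, the sketch below only shows the general shape of such a benchmark, with a made-up payload. On Unix time.clock() measures CPU time (which is what the numbers above quote), while on Windows it is a high-resolution wall clock and time.time() is the coarse ~18Hz timer Tim mentions:

    import time

    def bench(func, loops=100000):
        # Time 'loops' calls of func using the CPU/high-resolution clock.
        start = time.clock()
        for i in xrange(loops):
            func()
        return time.clock() - start

    data = u"\u20ac" * 100          # hypothetical payload; the real test uses Misc/ACKS
    print "%.3f seconds" % bench(lambda: data.encode("utf-8"))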
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 10:42 Message: Logged In: YES user_id=38388 Ok, with 100000 loops and time.clock() I get: 4.690 - 4.710 without your patch, 9.560 - 9.570 with your patch (again, without pymalloc and the same compiler/machine on SuSE 7.1). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and too add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and to offer significant, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. 
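A rough Python-level model of the two sizing strategies being argued about here (this is not Martin's C patch, which exists only as an attachment; it merely restates the idea, ignoring surrogate pairing):

    def utf8_size(u):
        # Two-pass strategy: a first pass counts exactly how many bytes the
        # UTF-8 result needs, so the buffer can be allocated exactly once.
        n = 0
        for ch in u:
            cp = ord(ch)
            if cp < 0x80:
                n += 1
            elif cp < 0x800:
                n += 2
            elif cp < 0x10000:
                n += 3
            else:
                n += 4
        return n

    def utf8_worst_case(u):
        # Single-pass strategy: overallocate for the worst case (4 bytes per
        # code point on a UCS-4 build) and shrink the result afterwards.
        return 4 * len(u)

    s = u"abc\u20ac"
    print utf8_size(s), utf8_worst_case(s), len(s.encode("utf-8"))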
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). 
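To see why 3*size overallocation is not enough on an --enable-unicode=ucs4 build (the problem unicode.diff addresses), a quick Python-level check suffices; the numbers assume a wide (UCS-4) build, where a code point above U+FFFF is a single Py_UNICODE but needs four UTF-8 bytes:

    u = u"\U0010FFFF" * 10
    print len(u)                   # 10 code units on a UCS-4 build (20 on a narrow build)
    print len(u.encode("utf-8"))   # 40 bytes: 4 bytes per code unit, i.e. more than 3*size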
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. 
What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . 
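Tim's little-endian point can be checked directly with the struct module: a freshly carved-out free block whose "next" pointer is still NULL, with just its first byte clobbered by a stray 0x02 store, reads back as exactly the bogus pointer value 0x2 seen in the backtrace.

    import struct

    word = b"\x02" + b"\x00" * 3                 # NULL pointer with its first byte overwritten
    print hex(struct.unpack("<I", word)[0])      # -> 0x2 on a little-endian 32-bit layout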
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? 
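On the core-file question: the usual reason no core is written is a zero RLIMIT_CORE. Raising it in the shell that runs make ("ulimit -c unlimited") is the simplest route; the equivalent from Python, as a sketch, is:

    import resource

    # Raise the soft core-file size limit as far as the hard limit allows;
    # child processes started afterwards (e.g. the crashing interpreter
    # under "make altinstall") inherit the new limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    print resource.getrlimit(resource.RLIMIT_CORE)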
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 20:32:09 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 12:32:09 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-516076 ] Assign boolean value to a weak reference Message-ID: Feature Requests item #516076, was opened at 2002-02-11 12:32 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=516076&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Stefan Franke (sfranke) Assigned to: Nobody/Anonymous (nobody) Summary: Assign boolean value to a weak reference Initial Comment: To test if a weak reference r is still alive, you type if r() is not None: print "Alive" Wouldn't be if r: print "Alive" more pythonic, since all values of any datatype that are not empty evaluate to "true"? Same if you think about r as a pointer. principle-of-least-surprise-ly yr's Stefan ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=516076&group_id=5470 From noreply@sourceforge.net Mon Feb 11 20:47:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 12:47:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-511786 ] urllib2.py loses headers on redirect Message-ID: Bugs item #511786, was opened at 2002-02-01 08:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511786&group_id=5470 Category: Python Library Group: Python 2.2.1 candidate >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2.py loses headers on redirect Initial Comment: Using urllib2 for an HTTP request that involves a redirect, any custom-supplied headers are lost on the second (redirected) request. Example: >>> from urllib2 import * >>> req = Request("http://www.python.org/doc", ... headers={"cookie": "foo=bar"}) >>> result = urlopen(req) This results in two HTTP requests being sent to www.python.org. The first one includes my cookie header: GET /doc HTTP/1.0 Host: www.python.org User-agent: Python-urllib/2.0a1 cookie: foo=bar but the second one (after the fix-trailing-slash redirect) does not: GET /doc/ HTTP/1.0 Host: www.python.org User-agent: Python-urllib/2.0a1 Luckily, a one-line patch (attached) seems to fix the bug. ---------------------------------------------------------------------- >Comment By: Greg Ward (gward) Date: 2002-02-11 12:47 Message: Logged In: YES user_id=14422 Fixed in rev 1.25 of urllib2.py. 
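The actual fix is whatever went into rev 1.25 of urllib2.py; the sketch below only illustrates the idea behind Greg's one-liner, namely carrying the original request's headers into the Request built for the redirected URL. The handler name is made up and redirect-loop protection is omitted:

    import urllib2

    class KeepHeadersRedirectHandler(urllib2.HTTPRedirectHandler):
        # Hypothetical handler, not the stdlib patch: rebuild the redirected
        # request with the caller's original headers instead of dropping them.
        def http_error_302(self, req, fp, code, msg, headers):
            newurl = headers.getheader('location') or headers.getheader('uri')
            if newurl is None:
                return None
            new = urllib2.Request(newurl, headers=req.headers)
            return self.parent.open(new)
        http_error_301 = http_error_303 = http_error_307 = http_error_302

    opener = urllib2.build_opener(KeepHeadersRedirectHandler())
    result = opener.open(urllib2.Request("http://www.python.org/doc",
                                         headers={"cookie": "foo=bar"}))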
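Back on the weak-reference request above (516076): a weakref.ref object itself is always true, so the "r() is not None" test is currently required. A tiny wrapper (LiveRef is hypothetical, not part of the weakref module) shows what the requested truth-value behaviour would look like via Python 2's __nonzero__ hook:

    import weakref

    class LiveRef(object):
        def __init__(self, obj):
            self._ref = weakref.ref(obj)
        def __call__(self):
            return self._ref()
        def __nonzero__(self):
            # True exactly while the referent is still alive.
            return self._ref() is not None

    class Thing(object):
        pass

    t = Thing()
    r = LiveRef(t)
    print bool(r)        # True: referent alive
    del t                # CPython collects it immediately (refcounting)
    print bool(r)        # False: referent gone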
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=511786&group_id=5470 From noreply@sourceforge.net Mon Feb 11 20:50:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 12:50:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 12:50 Message: Logged In: YES user_id=38388 Tim: Yes, I'm still all AMD based... it's Athlon 1200 I'm running. PGCC (the Pentium GCC groups version) has a special AMD optimization mode for Athlon which is what I'm using. Somebody has to hold up the flag against the Wintel camp ;-) Hmm, I could do the same tests on my notebook which runs on one of those Inteliums. Maybe tomorrow... ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 12:06 Message: Logged In: YES user_id=31435 time.time() sucks for benchmarking on Windows (updates at about 18Hz). Running the test as-is, MSVC6 and Win98SE, it's 1.3 seconds with current CVS, and 1.7 with unicode3.diff. The quantization error in Windows time.time() is > 0.05 seconds, so no point pretending there are 3 significant digits there; luckily(?), it's apparent there's a major difference with just 2 digits. MAL, are you still using an AMD box? In a decade, nobody else has ever reproduced the timing results you see . ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 10:42 Message: Logged In: YES user_id=38388 Ok, with 100000 loops and time.clock() I get: 4.690 - 4.710 without your patch, 9.560 - 9.570 with your patch (again, without pymalloc and the same compiler/machine on SuSE 7.1). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and too add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and to offer significant, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? 
On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. 
It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. 
Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. 
Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 21:06:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 13:06:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-11 13:06 Message: Logged In: YES user_id=31435 MAL, cool -- I saw a major slowdown using the patch too, but not nearly as dramatic as you saw, so was curious about what could account for that. 
Chip, compiler and OS can all have major effects. I assume Martin is using a Pentium box, so assuming your notebook is running Linux too, it would be good to get another LinTel datapoint. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 12:50 Message: Logged In: YES user_id=38388 Tim: Yes, I'm still all AMD based... it's an Athlon 1200 I'm running. PGCC (the Pentium GCC group's version) has a special AMD optimization mode for Athlon which is what I'm using. Somebody has to hold up the flag against the Wintel camp ;-) Hmm, I could do the same tests on my notebook which runs on one of those Inteliums. Maybe tomorrow... ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 12:06 Message: Logged In: YES user_id=31435 time.time() sucks for benchmarking on Windows (updates at about 18Hz). Running the test as-is with MSVC6 and Win98SE, it's 1.3 seconds with current CVS, and 1.7 with unicode3.diff. The quantization error in Windows time.time() is > 0.05 seconds, so no point pretending there are 3 significant digits there; luckily(?), it's apparent there's a major difference with just 2 digits. MAL, are you still using an AMD box? In a decade, nobody else has ever reproduced the timing results you see. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 10:42 Message: Logged In: YES user_id=38388 Ok, with 100000 loops and time.clock() I get: 4.690 - 4.710 without your patch, 9.560 - 9.570 with your patch (again, without pymalloc and the same compiler/machine on SuSE 7.1). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds, with your patch: 0.960 seconds. This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and to add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and is more than compensated for by the significant reduction in calls to the memory management system.
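For reference, the worst case driving the over-allocation discussion above (reserve 4*size bytes up front versus count the exact size first) is easy to see from the interpreter itself. This is only a minimal sketch, assuming a --enable-unicode=ucs4 build with a correct encoder:

    # One character outside the Basic Multilingual Plane occupies a single
    # Py_UNICODE unit on a wide (UCS-4) build, but needs four bytes of UTF-8,
    # so an allocation of 3 bytes per unit can be overrun.
    c = unichr(0x10000)
    print len(c)                  # 1 Py_UNICODE unit on a UCS-4 build
    print len(c.encode('utf-8'))  # 4 bytes, one more than 3*size allows for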
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. 
(Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was an UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. 
I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether a the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a read herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash. (4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. 
But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. 
This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Mon Feb 11 22:50:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 14:50:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-516232 ] Windows os.path.isdir bad if drive only Message-ID: Bugs item #516232, was opened at 2002-02-11 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 Category: Extension Modules Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Charles I. Fuller (cifuller) Assigned to: Nobody/Anonymous (nobody) Summary: Windows os.path.isdir bad if drive only Initial Comment: It seems that most os functions recognize the Windows drive letter without a directory as the current directory on the drive, but os.path.isdir still returns 0. If os.listdir('C:') returns data, os.path.isdir('C:') should return 1 for consistency. C:\folder_on_C>python Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system('dir C:') Volume in drive C has no label. Volume Serial Number is E4C9-AD16 Directory of C:\folder_on_C 02/11/2002 05:29p . 02/11/2002 05:29p .. 02/11/2002 05:29p subA 02/11/2002 05:29p subB 0 File(s) 0 bytes 4 Dir(s) 22,126,567,424 bytes free 0 >>> os.listdir('C:') ['subA', 'subB'] >>> os.path.abspath('C:') 'C:\folder_on_C' >>> os.path.isdir('C:') 0 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 From noreply@sourceforge.net Mon Feb 11 23:16:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 15:16:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-516232 ] Windows os.path.isdir bad if drive only Message-ID: Bugs item #516232, was opened at 2002-02-11 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 Category: Extension Modules Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Charles I. 
Fuller (cifuller) Assigned to: Nobody/Anonymous (nobody) Summary: Windows os.path.isdir bad if drive only Initial Comment: It seems that most os functions recognize the Windows drive letter without a directory as the current directory on the drive, but os.path.isdir still returns 0. If os.listdir('C:') returns data, os.path.isdir('C:') should return 1 for consistency. C:\folder_on_C>python Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system('dir C:') Volume in drive C has no label. Volume Serial Number is E4C9-AD16 Directory of C:\folder_on_C 02/11/2002 05:29p . 02/11/2002 05:29p .. 02/11/2002 05:29p subA 02/11/2002 05:29p subB 0 File(s) 0 bytes 4 Dir(s) 22,126,567,424 bytes free 0 >>> os.listdir('C:') ['subA', 'subB'] >>> os.path.abspath('C:') 'C:\folder_on_C' >>> os.path.isdir('C:') 0 ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-11 15:16 Message: Logged In: YES user_id=31435 Sorry, this is how Microsoft's implementation of the underlying stat() function works. "Root drive" paths must be given with a trailing slash or backslash, else MS stat() claims they don't exist. You'll see the same irritating behavior in C code. Attempts to worm around it in the past have introduced other bugs; see bug 513572 for a current example. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 From noreply@sourceforge.net Tue Feb 12 04:10:52 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 11 Feb 2002 20:10:52 -0800 Subject: [Python-bugs-list] [ python-Bugs-516299 ] urlparse can get fragments wrong Message-ID: Bugs item #516299, was opened at 2002-02-11 20:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Michael Hudson (mwh) Summary: urlparse can get fragments wrong Initial Comment: urlparse.urlparse() goes wrong on a URL such as 'http://amk.ca#foo', where there's a fragment identifier and the hostname isn't followed by a slash. It returns 'amk.ca#foo' as the hostname portion of the URL. While looking at that, I realized that test_urlparse() only tests urljoin(), not urlparse() or urlunparse(). The attached patch also adds a minimal test suite for urlparse(), but it should be still more comprehensive. Unfortunately the RFC doesn't include test cases, so I haven't done this yet. (Assigned to you at random, Michael; feel free to unassign it if you lack the time.) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 From noreply@sourceforge.net Tue Feb 12 10:30:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 02:30:41 -0800 Subject: [Python-bugs-list] [ python-Bugs-516372 ] test_thread: unhandled exc. 
in thread Message-ID: Bugs item #516372, was opened at 2002-02-12 02:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: test_thread: unhandled exc. in thread Initial Comment: test_thread.py occasionally dumps an "Unhandled exception in thread" traceback at the last thread line "mutex.release()" about NoneType not having a release attribute. The problem is confusing for users, who think that something went wrong with the test (although the regrtest suite doesn't detect such exceptions and reports that the test passed --- this could be another bug report BTW). The problem shows up with Psyco but could also appear on plain Python executions depending on the precise timing. It comes from the fact that the thread code ends with: ... done.release() mutex.release() where these two are mutexes. The main program ends with: ... done.acquire() print "All tasks done" so if 'done' is released, the main program may exit before the thread has a chance to release 'mutex', which happens to be a global variable that the Python module-unloading logic will replace with None. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 From noreply@sourceforge.net Tue Feb 12 13:01:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 05:01:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-516412 ] Python gettext doesn't support libglade Message-ID: Bugs item #516412, was opened at 2002-02-12 05:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Christian Reis (kiko_async) Assigned to: Nobody/Anonymous (nobody) Summary: Python gettext doesn't support libglade Initial Comment: Libglade is a library that parses XML and generates GTK-based UIs at runtime. It is written in C and supports a number of languages through bindings. James Henstridge has maintained a set of bindings for Python for some time now. These bindings work very well, _except for internationalization_. The reason now seems straightforward to me. Python's gettext.py is a pure Python implementation, and because of that, bindtextdomain/textdomain are never called at the C level. This means that any C module that uses gettext never has the support activated and therefore never picks up the translations. Using Martin's intl.so module things work great, but having to redistribute it with our application is a problem for us. Any other suggestions to fix this?
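Purely as an illustration of what calling bindtextdomain/textdomain from Python could look like, here is a sketch; it assumes a much later Python 2 that ships ctypes (which 2.1.2 does not) and a glibc-style C library, and the domain name and locale directory are made-up examples rather than anything taken from this report:

    import locale
    import ctypes, ctypes.util

    # Bind the text domain at the C level so that C code (e.g. libglade)
    # sees it; Python-side lookups can still go through gettext.py.
    locale.setlocale(locale.LC_ALL, '')
    libc = ctypes.CDLL(ctypes.util.find_library('c'))
    libc.bindtextdomain('myapp', '/usr/share/locale')
    libc.textdomain('myapp')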
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 From noreply@sourceforge.net Tue Feb 12 13:52:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 05:52:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-516412 ] Python gettext doesn't support libglade Message-ID: Bugs item #516412, was opened at 2002-02-12 05:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Christian Reis (kiko_async) Assigned to: Nobody/Anonymous (nobody) Summary: Python gettext doesn't support libglade Initial Comment: Libglade is a library that parses XML and generates GTK-based UIs in runtime. It is written in C and supports a number of languages through bindings. James Henstridge has maintained a set of bindings for Python for some time now. These bindings work very well, _except for internationalization_. The reason seems now straightforward to me. Python's gettext.py is a pure python implementation, and because of it, bindtextdomain/textdomain are never called. This causes any C module that uses gettext to not activate the support, and not use translation because of it. Using Martin's intl.so module things work great, but it is a problem for us having to redistribute it with our application. Any other suggestions to fix? ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 05:52 Message: Logged In: NO If what you want is a way to call bindtextdomain/textdomain from Python, feel free to supply a patch or ask martin to add intl.so to the distribution. --Guido (@#$% SF always logs me out :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 From noreply@sourceforge.net Tue Feb 12 15:21:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 07:21:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-504343 ] Unicode docstrings and new style classes Message-ID: Bugs item #504343, was opened at 2002-01-16 04:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Nobody/Anonymous (nobody) Summary: Unicode docstrings and new style classes Initial Comment: Unicode docstrings don't work with new style classes. With old style classes they work: ---- class foo: u"föö" class bar(object): u"bär" print repr(foo.__doc__) print repr(bar.__doc__) ---- This prints ---- u'f\xf6\xf6' None ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-02-12 07:21 Message: Logged In: YES user_id=146903 Just wondering if this bug has been forgotten or not. My patch came out a bit weird w.r.t. line wrapping, so you can get here instead: http://www.daa.com.au/~james/files/type-doc.patch I would have added it as an attachment if the SF bug tracker didn't prevent me from doing so (bugzilla is much nicer to use for things like this). 
---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 02:10 Message: Logged In: YES user_id=146903 Put together a patch that gets rid of the type.__doc__ property, and sets __doc__ in PyType_Ready() (if appropriate). Seems to work okay in my tests and as a bonus, "print type.__doc__" actually prints documentation on using the type() function :) SF doesn't seem to give me a way to attach a patch to this bug, so I will paste a copy of the patch here (if it is mangled, email me at james@daa.com.au for a copy): --- Python-2.2/Objects/typeobject.c.orig Tue Dec 18 01:14:22 2001 +++ Python-2.2/Objects/typeobject.c Sun Jan 27 17:56:37 2002 @@ -8,7 +8,6 @@ static PyMemberDef type_members[] = { {"__basicsize__", T_INT, offsetof(PyTypeObject,tp_basicsize),READONLY}, {"__itemsize__", T_INT, offsetof(PyTypeObject, tp_itemsize), READONLY}, {"__flags__", T_LONG, offsetof(PyTypeObject, tp_flags), READONLY}, - {"__doc__", T_STRING, offsetof(PyTypeObject, tp_doc), READONLY}, {"__weakrefoffset__", T_LONG, offsetof(PyTypeObject, tp_weaklistoffset), READONLY}, {"__base__", T_OBJECT, offsetof(PyTypeObject, tp_base), READONLY}, @@ -1044,9 +1043,9 @@ type_new(PyTypeObject *metatype, PyObjec } /* Set tp_doc to a copy of dict['__doc__'], if the latter is there - and is a string (tp_doc is a char* -- can't copy a general object - into it). - XXX What if it's a Unicode string? Don't know -- this ignores it. + and is a string. Note that the tp_doc slot will only be used + by C code -- python code will use the version in tp_dict, so + it isn't that important that non string __doc__'s are ignored. */ { PyObject *doc = PyDict_GetItemString(dict, "__doc__"); @@ -2024,6 +2023,19 @@ PyType_Ready(PyTypeObject *type) inherit_slots(type, (PyTypeObject *)b); } + /* if the type dictionary doesn't contain a __doc__, set it from + the tp_doc slot. + */ + if (PyDict_GetItemString(type->tp_dict, "__doc__") == NULL) { + if (type->tp_doc != NULL) { + PyObject *doc = PyString_FromString(type->tp_doc); + PyDict_SetItemString(type->tp_dict, "__doc__", doc); + Py_DECREF(doc); + } else { + PyDict_SetItemString(type->tp_dict, "__doc__", Py_None); + } + } + /* Some more special stuff */ base = type->tp_base; if (base != NULL) { ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 01:37 Message: Logged In: YES user_id=146903 I am posting some comments about this patch after my similar bug was closed as a duplicate: http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=507394 I just tested the typeobject.c patch, and it doesn't work when using a descriptor as the __doc__ for an object (the descriptor itself is returned for class.__doc__ rather than the result of the tp_descr_get function). With the patch applied, the output of the program attached to the above mentioned bug is: OldClass.__doc__ = 'object=None type=OldClass' OldClass().__doc__ = 'object=OldClass instance type=OldClass' NewClass.__doc__ = <__main__.DocDescr object at 0x811ce34> NewClass().__doc__ = 'object=NewClass instance type=NewClass' The suggestion I gave in the other bug is to get rid of the type.__doc__ property/getset all together, and make PyType_Ready() set __doc__ in tp_dict based on the value of tp_doc. Is there any reason why this wouldn't work? (it would seem to give behaviour more consistant with old style classes, which would be good). I will look at producing a patch to do this shortly. 
---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 08:14 Message: Logged In: YES user_id=89016 This sound much better. With my current patch all the docstrings for the builltin types are gone, because int etc. never goes through typeobject.c/type_new(). I updated the patch to use Guido's method. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-17 06:25 Message: Logged In: YES user_id=6380 Wouldn't it be easier to set the __doc__ attribute in tp_dict and be done with it? That's what classic classes do. The accessor should still be a bit special: it should be implemented as a property (in tp_getsets), and first look for __doc__ in tp_dict and fall back to tp_doc. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 06:19 Message: Logged In: YES user_id=89016 OK, I've attached the patch. Note that I had to change the return value of PyStructSequence_InitType from void to int. Introducing tp_docobject should provide backwards compatibility for C extensions that still want to use tp_doc as char *. If this is not relevant then we could switch to PyObject *tp_doc immediately, but this complicates initializing a static type structure. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-17 05:45 Message: Logged In: YES user_id=21627 Adding tp_docobject would work, although it may be somewhat hackish (why should we have this kind of redundancy). I'm not sure how you will convert that to the 8bit version, though: what encoding? If you use the default encoding, tp_doc will be sometimes set, sometimes it won't. In any case, I'd encourage you to produce a patch. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-16 05:03 Message: Logged In: YES user_id=89016 What we could do is add a new slot tp_docobject, that holds the doc object. Then type_members would include {"__doc__", T_OBJECT, offsetof(PyTypeObject, tp_docobject), READONLY}, tp_doc should be initialized with an 8bit version of tp_docobject (using the default encoding and error='ignore' if tp_docobject is unicode). Does this sound reasonably? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-16 04:18 Message: Logged In: YES user_id=21627 There is a good chance that is caused by the lines following XXX What if it's a Unicode string? Don't know -- this ignores it. in Objects/typeobject.c. :-) Would you like to investigate the options and propose a patch? 
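A rough Python-level sketch of the lookup order Guido describes above, for illustration only; the real accessor would be a getset implemented in C, and _tp_doc below is a made-up stand-in for the char* tp_doc slot, not a real attribute:

    _missing = object()

    def type_get_doc(cls):
        # Consult the type's own dict (tp_dict) first, as classic classes do...
        doc = cls.__dict__.get('__doc__', _missing)
        if doc is not _missing:
            return doc
        # ...and only then fall back to the slot-level docstring.
        return getattr(cls, '_tp_doc', None)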
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 From noreply@sourceforge.net Tue Feb 12 15:13:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 07:13:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-210637 ] ihooks on windows and pythoncom (PR#294) Message-ID: Bugs item #210637, was opened at 2000-07-31 14:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210637&group_id=5470 Category: Windows Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Mark Hammond (mhammond) Summary: ihooks on windows and pythoncom (PR#294) Initial Comment: Jitterbug-Id: 294 Submitted-By: mak@mikroplan.com.pl Date: Thu, 13 Apr 2000 04:09:35 -0400 (EDT) Version: cvs OS: windows Hi, Python module ihooks is not so compatible with builtin imp while importing modules whose name is stored in registry eg. pythoncom/pywintypes. import ihooks ihooks.install() import pythoncom This code will fail inside pythonwin ide too ! ==================================================================== Audit trail: Tue Jul 11 08:29:17 2000 guido moved from incoming to open ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 07:13 Message: Logged In: NO i try it first,ok ---------------------------------------------------------------------- Comment By: Grzegorz Makarewicz (mpmak) Date: 2001-03-02 04:27 Message: Logged In: YES user_id=141704 BasicModuleLoader.find_module_in_dir is searching for main modules only in frozen and builtin. The imp searches the registry, too. ModuleLoader.find_module_in_dir should call the functions from the inherited object. so this patch should help: --- V:\py21\Lib\ihooks.py Mon Feb 12 08:55:46 2001 +++ ihooks.py Sun Feb 18 04:39:39 2001 @@ -122,8 +122,13 @@ def find_module_in_dir(self, name, dir): if dir is None: - return self.find_builtin_module(name) - else: + result = self.find_builtin_module(name) + if result is not None: + return result + try: + return imp.find_module(name, None) + except: + return None try: return imp.find_module(name, [dir]) except ImportError: @@ -237,7 +242,7 @@ def find_module_in_dir(self, name, dir, allow_packages=1): if dir is None: - return self.find_builtin_module(name) + return BasicModuleLoader.find_module_in_dir (self,name,dir) if allow_packages: fullname = self.hooks.path_join(dir, name) if self.hooks.path_isdir(fullname): ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2000-08-30 23:23 Message: Leaving open, but moving down the priority and resolution lists.
A patch would help bump it back up :-) ---------------------------------------------------------------------- Comment By: Mark Hammond (mhammond) Date: 2000-08-13 23:42 Message: This needs a resolution. The "registered module" code in the code also needs to support HKEY_CURRENT_USER along with the HKEY_LOCAL_MACHINE it does now. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210637&group_id=5470 From noreply@sourceforge.net Tue Feb 12 16:07:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 08:07:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-504343 ] Unicode docstrings and new style classes Message-ID: Bugs item #504343, was opened at 2002-01-16 04:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Nobody/Anonymous (nobody) Summary: Unicode docstrings and new style classes Initial Comment: Unicode docstrings don't work with new style classes. With old style classes they work: ---- class foo: u"föö" class bar(object): u"bär" print repr(foo.__doc__) print repr(bar.__doc__) ---- This prints ---- u'f\xf6\xf6' None ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 08:07 Message: Logged In: NO Not forgotten, but I've been busy, and will continue to be so... ;-( --Guido ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-02-12 07:21 Message: Logged In: YES user_id=146903 Just wondering if this bug has been forgotten or not. My patch came out a bit weird w.r.t. line wrapping, so you can get here instead: http://www.daa.com.au/~james/files/type-doc.patch I would have added it as an attachment if the SF bug tracker didn't prevent me from doing so (bugzilla is much nicer to use for things like this). ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 02:10 Message: Logged In: YES user_id=146903 Put together a patch that gets rid of the type.__doc__ property, and sets __doc__ in PyType_Ready() (if appropriate). Seems to work okay in my tests and as a bonus, "print type.__doc__" actually prints documentation on using the type() function :) SF doesn't seem to give me a way to attach a patch to this bug, so I will paste a copy of the patch here (if it is mangled, email me at james@daa.com.au for a copy): --- Python-2.2/Objects/typeobject.c.orig Tue Dec 18 01:14:22 2001 +++ Python-2.2/Objects/typeobject.c Sun Jan 27 17:56:37 2002 @@ -8,7 +8,6 @@ static PyMemberDef type_members[] = { {"__basicsize__", T_INT, offsetof(PyTypeObject,tp_basicsize),READONLY}, {"__itemsize__", T_INT, offsetof(PyTypeObject, tp_itemsize), READONLY}, {"__flags__", T_LONG, offsetof(PyTypeObject, tp_flags), READONLY}, - {"__doc__", T_STRING, offsetof(PyTypeObject, tp_doc), READONLY}, {"__weakrefoffset__", T_LONG, offsetof(PyTypeObject, tp_weaklistoffset), READONLY}, {"__base__", T_OBJECT, offsetof(PyTypeObject, tp_base), READONLY}, @@ -1044,9 +1043,9 @@ type_new(PyTypeObject *metatype, PyObjec } /* Set tp_doc to a copy of dict['__doc__'], if the latter is there - and is a string (tp_doc is a char* -- can't copy a general object - into it). 
- XXX What if it's a Unicode string? Don't know -- this ignores it. + and is a string. Note that the tp_doc slot will only be used + by C code -- python code will use the version in tp_dict, so + it isn't that important that non string __doc__'s are ignored. */ { PyObject *doc = PyDict_GetItemString(dict, "__doc__"); @@ -2024,6 +2023,19 @@ PyType_Ready(PyTypeObject *type) inherit_slots(type, (PyTypeObject *)b); } + /* if the type dictionary doesn't contain a __doc__, set it from + the tp_doc slot. + */ + if (PyDict_GetItemString(type->tp_dict, "__doc__") == NULL) { + if (type->tp_doc != NULL) { + PyObject *doc = PyString_FromString(type->tp_doc); + PyDict_SetItemString(type->tp_dict, "__doc__", doc); + Py_DECREF(doc); + } else { + PyDict_SetItemString(type->tp_dict, "__doc__", Py_None); + } + } + /* Some more special stuff */ base = type->tp_base; if (base != NULL) { ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 01:37 Message: Logged In: YES user_id=146903 I am posting some comments about this patch after my similar bug was closed as a duplicate: http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=507394 I just tested the typeobject.c patch, and it doesn't work when using a descriptor as the __doc__ for an object (the descriptor itself is returned for class.__doc__ rather than the result of the tp_descr_get function). With the patch applied, the output of the program attached to the above mentioned bug is: OldClass.__doc__ = 'object=None type=OldClass' OldClass().__doc__ = 'object=OldClass instance type=OldClass' NewClass.__doc__ = <__main__.DocDescr object at 0x811ce34> NewClass().__doc__ = 'object=NewClass instance type=NewClass' The suggestion I gave in the other bug is to get rid of the type.__doc__ property/getset all together, and make PyType_Ready() set __doc__ in tp_dict based on the value of tp_doc. Is there any reason why this wouldn't work? (it would seem to give behaviour more consistant with old style classes, which would be good). I will look at producing a patch to do this shortly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 08:14 Message: Logged In: YES user_id=89016 This sound much better. With my current patch all the docstrings for the builltin types are gone, because int etc. never goes through typeobject.c/type_new(). I updated the patch to use Guido's method. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-17 06:25 Message: Logged In: YES user_id=6380 Wouldn't it be easier to set the __doc__ attribute in tp_dict and be done with it? That's what classic classes do. The accessor should still be a bit special: it should be implemented as a property (in tp_getsets), and first look for __doc__ in tp_dict and fall back to tp_doc. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 06:19 Message: Logged In: YES user_id=89016 OK, I've attached the patch. Note that I had to change the return value of PyStructSequence_InitType from void to int. Introducing tp_docobject should provide backwards compatibility for C extensions that still want to use tp_doc as char *. If this is not relevant then we could switch to PyObject *tp_doc immediately, but this complicates initializing a static type structure. 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-17 05:45 Message: Logged In: YES user_id=21627 Adding tp_docobject would work, although it may be somewhat hackish (why should we have this kind of redundancy). I'm not sure how you will convert that to the 8bit version, though: what encoding? If you use the default encoding, tp_doc will be sometimes set, sometimes it won't. In any case, I'd encourage you to produce a patch. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-16 05:03 Message: Logged In: YES user_id=89016 What we could do is add a new slot tp_docobject, that holds the doc object. Then type_members would include {"__doc__", T_OBJECT, offsetof(PyTypeObject, tp_docobject), READONLY}, tp_doc should be initialized with an 8bit version of tp_docobject (using the default encoding and error='ignore' if tp_docobject is unicode). Does this sound reasonably? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-16 04:18 Message: Logged In: YES user_id=21627 There is a good chance that is caused by the lines following XXX What if it's a Unicode string? Don't know -- this ignores it. in Objects/typeobject.c. :-) Would you like to investigate the options and propose a patch? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 From noreply@sourceforge.net Tue Feb 12 18:34:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 10:34:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-516532 ] cls.__module__ and metaclasses Message-ID: Bugs item #516532, was opened at 2002-02-12 10:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516532&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Nobody/Anonymous (nobody) Summary: cls.__module__ and metaclasses Initial Comment: In the following the __module__ attribute is incorrect: file foo.py: --------------- class foo(object): class __metaclass__(type): def __new__(cls, name, bases, dict): return type.__new__(cls, name, bases, dict) --------------- file bar.py: --------------- import foo class bar(foo.foo): pass --------------- With these two files the following test prints the wrong result: >>> import bar >>> bar.bar.__module__ 'foo' ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516532&group_id=5470 From noreply@sourceforge.net Tue Feb 12 19:22:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 11:22:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-514676 ] multifile different in 2.2 from 2.1.1 Message-ID: Bugs item #514676, was opened at 2002-02-07 22:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Guido van Rossum (gvanrossum) Summary: multifile different in 2.2 from 2.1.1 Initial Comment: Reported to python-help. 
When the test program I'll attach is run on the test mail I'll attach separately, it produces this under Python 2.1.1: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: multipart/alternative BOUNDARY: =====================_590453677==_.ALT TYPE: text/plain LINES: ['test A\n'] TYPE: text/html LINES: ['\n', 'test B\n', '\n'] TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', '\n'] But under Python 2.2, it produces: TYPE: multipart/mixed BOUNDARY: =====================_590453667==_ TYPE: text/plain LINES: ['Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n', 'Attached Content.\n'] The first output appears to me to be correct. ---------------------------------------------------------------------- >Comment By: Matthew Cowles (mdcowles) Date: 2002-02-12 11:22 Message: Logged In: YES user_id=198518 It turns out that the problem is more intractable than I thought at first. Here's what seems to happen: the readahead function can consume the separator before the user calls push() with it. Since the readahead function decides whether or not a line matches a separator, the push() comes too late and the line is returned as ordinary data. Of course Martijn Pieters is right about conformance to RFC 2046, but it's not obvious to me how to strip the last line-end before a separator, avoid consuming separators that shouldn't be consumed, and retain the public interface of the module. I think that the simplest thing to do would be to restore the functionality that was in revision 1.18 and tell people who need strict conformance to RFC 2046 that they should use the email module instead, since it does strip the last line-end before a separator. The person who posed the original question would be happy to have his files used as part of a test suite. ---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:49 Message: Logged In: YES user_id=198518 It seems that SourceForge won't let me delete the patch. Please ignore it. ---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:47 Message: Logged In: YES user_id=198518 Sorry, I think my analysis is right but the patch is flawed and I've deleted it. I'll try to have another look at it tomorrow. ---------------------------------------------------------------------- Comment By: Matthew Cowles (mdcowles) Date: 2002-02-10 20:20 Message: Logged In: YES user_id=198518 The problem is in _readline(). Since it changes self.level and self.last, they apply to the next line, not the current one. I'll upload a patch that seems to work. The test program and test mail aren't mine. They belong to the person who reported the bug to python-help. I'm sure that he'd be glad to have them used as part of the test suite, but I'll mail him to make absolutely certain. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-08 13:43 Message: Logged In: YES user_id=6380 You're absolutely right -- this is a bug. Can you suggest a fix? We also need a test suite! Your test program is a beginning for that...
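Since the submitter's attachments are not reproduced in this archive, the following is a hypothetical reconstruction of the kind of walker that would print the TYPE/BOUNDARY/LINES output quoted above, using the documented mimetools/multifile pattern; it is a sketch, not the actual test program:

    import sys
    import mimetools, multifile

    def walk(mf, msg):
        print 'TYPE:', msg.gettype()
        if msg.getmaintype() == 'multipart':
            boundary = msg.getparam('boundary')
            print 'BOUNDARY:', boundary
            mf.push(boundary)               # nested push/pop handles sub-multiparts
            while mf.next():                # advance to the next body part
                walk(mf, mimetools.Message(mf))
            mf.pop()
        else:
            print 'LINES:', mf.readlines()  # leaf part: dump its body lines

    if __name__ == '__main__':
        fp = open(sys.argv[1])
        msg = mimetools.Message(fp)         # top-level headers come straight off the file
        walk(multifile.MultiFile(fp), msg)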
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514676&group_id=5470 From noreply@sourceforge.net Tue Feb 12 20:08:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 12:08:23 -0800 Subject: [Python-bugs-list] [ python-Bugs-507442 ] Thread-Support don't work with HP-UX 11 Message-ID: Bugs item #507442, was opened at 2002-01-23 02:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 Category: Installation Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Stefan Walder (stefanwalder) Assigned to: Martin v. Löwis (loewis) Summary: Thread-Support don't work with HP-UX 11 Initial Comment: Hi, I've compiled Python 2.1.2 with the HP Ansi C-Compiler. I've used ./configure --with-threads and added -D_REENTRANT to the Makefile. But the test_thread.py don't work! [ek14] % ../../python test_thread.py creating task 1 Traceback (most recent call last): File "test_thread.py", line 46, in ? newtask() File "test_thread.py", line 41, in newtask thread.start_new_thread(task, (next_ident,)) thread.error: can't start new thread [ek14] % Any idea? More informations? Thanks Stefan Walder ---------------------------------------------------------------------- >Comment By: Stefan Walder (stefanwalder) Date: 2002-02-12 12:08 Message: Logged In: YES user_id=436029 Hi, I've thought threads now work! But I think they don't! I use python 2.1.2 with Zope. Now sometimes it works. But when i add a CMF-Object I get a core dump. So I've startetd gdb and here is the log: jojo 22: gdb /opt/zope/bin/python2.1 core HP gdb 2.0 Copyright 1986 - 1999 Free Software Foundation, Inc. Hewlett-Packard Wildebeest 2.0 (based on GDB 4.17-hpwdb-980821) Wildebeest is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for Wildebeest. Type "show warranty" for details. Wildebeest was built for PA-RISC 1.1 or 2.0 (narrow), HP-UX 11.00. .. Core was generated by `python2.1'. Program terminated with signal 10, Bus error. warning: The shared libraries were not privately mapped; setting a breakpoint in a shared library will not work until you rerun the program. #0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 (gdb) bt #0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 #1 0xc0123ed0 in __thread_mutex_lock () from /usr/lib/libc.2 #2 0xc00a0018 in _sigfillset () from /usr/lib/libc.2 #3 0xc009e22c in _memset () from /usr/lib/libc.2 #4 0xc00a37d8 in malloc () from /usr/lib/libc.2 #5 0x3bad0 in PyFrame_New (tstate=0x0, code=0x0, globals=0x0, locals=0x0) at Objects/frameobject.c:149 #6 0xc0123f94 in __thread_mutex_unlock () from /usr/lib/libc.2 #7 (gdb) I don't know if this is a python or zope Problem and I dont't know if this bug is at the right position. Please help. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:41 Message: Logged In: YES user_id=436029 Fileupload config.h ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:36 Message: Logged In: YES user_id=436029 Hi loewis, I've uploaded the wanted files. Next week I will test python 2.2. 
But I need python 2.1.2 because I want to use Zope. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-24 11:53 Message: Logged In: YES user_id=21627 I can't check, but in theory, configure should (already, atleast in 2.2): 1. detect to use pthreads on HP-UX 2. therefore, define _REENTRANT in pyconfig.h (config.h for 2.1) 3. automatically link with -lpthread Stefan, can you please attach the (original, unmodified) config.h, Makefile, and config.log to this report? In Python 2.1, the test for pthreads failed, since pthread_create is a macro, and the test failed to include the proper header. This was fixed in configure.in 1.266. So: Stefan, could you also try compiling Python 2.2 on your system, and report whether the thread test case passes there? This might be a duplicate of #416696, which would suggest that properly detection of pthreads on HP-UX really is the cure. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-01-24 06:34 Message: Logged In: NO Anthony, if you want an entry on a bugs page for 2.1.2, its no problem for me to create one. Please mail the exact text that you want to appear there to describe this bug (or any other bug in 2.1.2) to webmaster@python.org and I'll take care of it. --Guido (not logged in) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-24 01:38 Message: Logged In: YES user_id=31435 I'm afraid threading on HP-UX never really works, no matter how many times users contribute config patches. They get it to work on their box, we check it in, and the next release it starts all over again. This has been going on for years and years. If you think it suddenly started working in 2.2, wait a few months . Note that the advice that you *may* have to use - D_REENTRANT on HP-UX is recorded in Python's main README file; apparently this is necessary on some unknown proper subset of HP-UX boxes. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-24 00:57 Message: Logged In: YES user_id=29957 Hm. I'm not sure, either - but this could probably get an entry on the bugs page on creosote. Anyone? Is there a "known issues" page somewhere? ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-23 23:59 Message: Logged In: YES user_id=436029 Hi, I've found a solution. I've added a -D_REENTRANT to the CFLAGS and an -lpthread to the LIBS: OPT= -O -D_REENTRANT DEFS= -DHAVE_CONFIG_H CFLAGS= $(OPT) -I. -I$(srcdir)/Include $(DEFS) LIBS= -lnsl -ldld LIBM= -lm -lpthread LIBC= SYSLIBS= $(LIBM) $(LIBC) Now it works for me. But I don't have any idea to put this changes into the configure script. mfG Stefan Walder ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-23 07:22 Message: Logged In: YES user_id=29957 Unfortunately, I don't have access to a HP/UX system, and I couldn't find anyone during the process of doing 2.1.2 that was willing to spend the time figuring out how and why 2.2's threading finally started working on HP/UX. Without someone to do that, I'd say the chances of this ever being addressed are close to zero. Does it work on 2.2? 
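For anyone rebuilding by hand, a much smaller smoke test than test_thread.py (a sketch only; it merely checks that thread.start_new_thread can start a single thread) is:

import thread
import time

def task(lock):
    lock.release()                        # signal that the spawned thread really ran

lock = thread.allocate_lock()
lock.acquire()
thread.start_new_thread(task, (lock,))    # raises thread.error on the broken builds above
time.sleep(2)
print 'thread ran:', not lock.locked()    # 1 (true) once task() has released the lock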
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 From noreply@sourceforge.net Tue Feb 12 21:54:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 13:54:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-516232 ] Windows os.path.isdir bad if drive only Message-ID: Bugs item #516232, was opened at 2002-02-11 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 Category: Extension Modules Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Charles I. Fuller (cifuller) Assigned to: Nobody/Anonymous (nobody) Summary: Windows os.path.isdir bad if drive only Initial Comment: It seems that most os functions recognize the Windows drive letter without a directory as the current directory on the drive, but os.path.isdir still returns 0. If os.listdir('C:') returns data, os.path.isdir('C:') should return 1 for consistency. C:\folder_on_C>python Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system('dir C:') Volume in drive C has no label. Volume Serial Number is E4C9-AD16 Directory of C:\folder_on_C 02/11/2002 05:29p . 02/11/2002 05:29p .. 02/11/2002 05:29p subA 02/11/2002 05:29p subB 0 File(s) 0 bytes 4 Dir(s) 22,126,567,424 bytes free 0 >>> os.listdir('C:') ['subA', 'subB'] >>> os.path.abspath('C:') 'C:\folder_on_C' >>> os.path.isdir('C:') 0 ---------------------------------------------------------------------- >Comment By: Charles I. Fuller (cifuller) Date: 2002-02-12 13:54 Message: Logged In: YES user_id=211047 Responding to Tim's followup... In this case the 'C:' is not the root drive, it is the current dir on that drive. I noticed that os.path.abspath was updated between 2.0 and 2.2 to recognize the current dir. It's an inconsistency that tripped me up already. >>> os.path.isdir('C:') 0 >>> os.path.isdir(os.path.abspath('C:')) 1 The listdir has been working with drive specs (recognizing the current dir) for a while. The abspath code must be handling the drive-only input as a special case. The isdir function should do the same for consistency. There should at least be a warning in the docs... ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 15:16 Message: Logged In: YES user_id=31435 Sorry, this is how Microsoft's implementation of the underlying stat() function works. "Root drive" paths must be given with a trailing slash or backslash, else MS stat() claims they don't exist. You'll see the same irritating behavior in C code. Attempts to worm around it in the past have introduced other bugs; see bug 513572 for a current example. 
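Until the underlying stat() quirk is papered over or documented, a small caller-side workaround is possible; a sketch only, with isdir_drive_aware being a made-up helper name rather than a proposed library change:

import os

def isdir_drive_aware(path):
    # A bare drive spec like 'C:' means "current directory on that drive" to
    # listdir() and abspath(), but MS stat() -- and hence os.path.isdir() --
    # rejects it. Resolve it to an absolute path first so all three agree.
    if len(path) == 2 and path[1] == ':':
        path = os.path.abspath(path)
    return os.path.isdir(path)

print isdir_drive_aware('C:')    # 1 on a Windows box with a C: drive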
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 From noreply@sourceforge.net Tue Feb 12 22:33:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 14:33:54 -0800 Subject: [Python-bugs-list] [ python-Bugs-516232 ] Windows os.path.isdir bad if drive only Message-ID: Bugs item #516232, was opened at 2002-02-11 14:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 Category: Extension Modules Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Charles I. Fuller (cifuller) Assigned to: Nobody/Anonymous (nobody) Summary: Windows os.path.isdir bad if drive only Initial Comment: It seems that most os functions recognize the Windows drive letter without a directory as the current directory on the drive, but os.path.isdir still returns 0. If os.listdir('C:') returns data, os.path.isdir('C:') should return 1 for consistency. C:\folder_on_C>python Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> os.system('dir C:') Volume in drive C has no label. Volume Serial Number is E4C9-AD16 Directory of C:\folder_on_C 02/11/2002 05:29p . 02/11/2002 05:29p .. 02/11/2002 05:29p subA 02/11/2002 05:29p subB 0 File(s) 0 bytes 4 Dir(s) 22,126,567,424 bytes free 0 >>> os.listdir('C:') ['subA', 'subB'] >>> os.path.abspath('C:') 'C:\folder_on_C' >>> os.path.isdir('C:') 0 ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-12 14:33 Message: Logged In: YES user_id=31435 Well, the underlying Microsoft routines are themselves inconsistent, and in undocumented ways. We make a mild effort to hide their warts, but it's historical truth that doing so introduces other bugs. The sad fact is that MS stat() insists "C:" does not exist, but the MS FindFirstFile happily accepts "C:". If you think *you* can straigten this inherited mess out, happy to accept a patch . ---------------------------------------------------------------------- Comment By: Charles I. Fuller (cifuller) Date: 2002-02-12 13:54 Message: Logged In: YES user_id=211047 Responding to Tim's followup... In this case the 'C:' is not the root drive, it is the current dir on that drive. I noticed that os.path.abspath was updated between 2.0 and 2.2 to recognize the current dir. It's an inconsistency that tripped me up already. >>> os.path.isdir('C:') 0 >>> os.path.isdir(os.path.abspath('C:')) 1 The listdir has been working with drive specs (recognizing the current dir) for a while. The abspath code must be handling the drive-only input as a special case. The isdir function should do the same for consistency. There should at least be a warning in the docs... ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 15:16 Message: Logged In: YES user_id=31435 Sorry, this is how Microsoft's implementation of the underlying stat() function works. "Root drive" paths must be given with a trailing slash or backslash, else MS stat() claims they don't exist. You'll see the same irritating behavior in C code. Attempts to worm around it in the past have introduced other bugs; see bug 513572 for a current example. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516232&group_id=5470 From noreply@sourceforge.net Tue Feb 12 23:21:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 15:21:07 -0800 Subject: [Python-bugs-list] [ python-Bugs-516703 ] Tix:NoteBook add/delete/add page problem Message-ID: Bugs item #516703, was opened at 2002-02-12 15:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516703&group_id=5470 Category: Tkinter Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Christoph Monzel (chris_mo) Assigned to: Nobody/Anonymous (nobody) Summary: Tix:NoteBook add/delete/add page problem Initial Comment: Problem: NoteBook add/delete/add page with the same name does not work. python2.2/Tix Example Python Script for reproducing the Bug: import Tix import rlcompleter root=Tix.Tk() notebook=Tix.NoteBook(root, ipadx=3, ipady=3) notebook.add('general', label="General", underline=0) notebook.add('displaymode', label="Display mode", underline=0) notebook.pack() notebook.delete('general') notebook.add('general', label="General", underline=0) la=Tix.Label(notebook.general,text="hallo") Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.2/lib-tk/Tkinter.py", line 2261, in __init__ Widget.__init__(self, master, 'label', cnf, kw) File "/usr/lib/python2.2/lib-tk/Tkinter.py", line 1756, in __init__ self.tk.call( TclError: bad window path name ".135915860.nbframe.general" Tix seems to know nothing about the new page: >>> notebook.tk.call(notebook._w,'pages') 'displaymode' Analysis: in NoteBook.add() the new, same-named widget is successfully created in Tk, but it is immediately removed when the TixSubWidget is constructed. Solution: In the NoteBook class, mark the subwidget with "destroy_physically=1". Also, for clarity, delete the entry from the subwidget_list dict. 
I dont't know if this is a fine or correct solution but it works (for me) Patch: derrick:chris$ diff -u /usr/lib/python2.2/lib-tk/Tix.py Tix.py --- /usr/lib/python2.2/lib-tk/Tix.py Sun Nov 4 01:45:36 2001 +++ Tix.py Tue Feb 12 23:41:50 2002 @@ -828,12 +828,13 @@ def add(self, name, cnf={}, **kw): apply(self.tk.call, (self._w, 'add', name) + self._options(cnf, kw)) - self.subwidget_list[name] = TixSubWidget(self, name) + self.subwidget_list[name] = TixSubWidget(self, name, destroy_physically return self.subwidget_list[name] def delete(self, name): + del self.subwidget_list[name] self.tk.call(self._w, 'delete', name) - + def page(self, name): return self.subwidget(name) Tix.py Version # $Id: Tix.py,v 1.4 2001/10/09 11:50:55 loewis Exp $ Tix Version tix-8.1.3 Tcl/Tk-version tcl8.3-8.3.3 tk8.3_8.3.3 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516703&group_id=5470 From noreply@sourceforge.net Tue Feb 12 23:31:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 15:31:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-516703 ] Tix:NoteBook add/delete/add page problem Message-ID: Bugs item #516703, was opened at 2002-02-12 15:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516703&group_id=5470 Category: Tkinter Group: Python 2.2 Status: Open >Resolution: Works For Me Priority: 5 Submitted By: Christoph Monzel (chris_mo) Assigned to: Nobody/Anonymous (nobody) Summary: Tix:NoteBook add/delete/add page problem Initial Comment: Problem: NoteBook add/delete/add page with the same name does not work. python2.2/Tix Example Python Script for reproducing the Bug: import Tix import rlcompleter root=Tix.Tk() notebook=Tix.NoteBook(root, ipadx=3, ipady=3) notebook.add('general', label="General", underline=0) notebook.add('displaymode', label="Display mode", underline=0) notebook.pack() notebook.delete('general') notebook.add('general', label="General", underline=0) la=Tix.Label(notebook.general,text="hallo") Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.2/lib-tk/Tkinter.py", line 2261, in __init__ Widget.__init__(self, master, 'label', cnf, kw) File "/usr/lib/python2.2/lib-tk/Tkinter.py", line 1756, in __init__ self.tk.call( TclError: bad window path name ".135915860.nbframe.general" Tix seems nothing to know about the new page >>> notebook.tk.call(notebook._w,'pages') 'displaymode' Analysis: in NoteBook.add() the new "same named" widget will succesfully created in tk. But it will be immediatly removed, if the TixSubWidget is constructed Solution: In the Notebook class: Do mark subwidget "destroy_physically=1". Also for clearness delete entry from subwidget_list dict. 
I dont't know if this is a fine or correct solution but it works (for me) Patch: derrick:chris$ diff -u /usr/lib/python2.2/lib-tk/Tix.py Tix.py --- /usr/lib/python2.2/lib-tk/Tix.py Sun Nov 4 01:45:36 2001 +++ Tix.py Tue Feb 12 23:41:50 2002 @@ -828,12 +828,13 @@ def add(self, name, cnf={}, **kw): apply(self.tk.call, (self._w, 'add', name) + self._options(cnf, kw)) - self.subwidget_list[name] = TixSubWidget(self, name) + self.subwidget_list[name] = TixSubWidget(self, name, destroy_physically return self.subwidget_list[name] def delete(self, name): + del self.subwidget_list[name] self.tk.call(self._w, 'delete', name) - + def page(self, name): return self.subwidget(name) Tix.py Version # $Id: Tix.py,v 1.4 2001/10/09 11:50:55 loewis Exp $ Tix Version tix-8.1.3 Tcl/Tk-version tcl8.3-8.3.3 tk8.3_8.3.3 ---------------------------------------------------------------------- >Comment By: Christoph Monzel (chris_mo) Date: 2002-02-12 15:31 Message: Logged In: YES user_id=456854 Okay, this is my first bug report, and seems not a good idea to paste patches into the text window :( ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516703&group_id=5470 From noreply@sourceforge.net Tue Feb 12 23:37:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 15:37:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-516712 ] SyntaxError tracebacks omit filename Message-ID: Bugs item #516712, was opened at 2002-02-12 15:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516712&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Nobody/Anonymous (nobody) Summary: SyntaxError tracebacks omit filename Initial Comment: In Python 2.2, SyntaxError tracebacks no longer show the last filename, ie. the file where the crash actually occurred. This is really annoying. Here's an example: $ cat foo.py foo = $ python2.1 foo.py File "foo.py", line 1 foo = ^ SyntaxError: invalid syntax $ python2.2 foo.py File "", line 1 foo = ^ SyntaxError: invalid syntax ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516712&group_id=5470 From noreply@sourceforge.net Wed Feb 13 00:13:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 16:13:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-516727 ] MyInt(2)+"3" -> NotImplemented Message-ID: Bugs item #516727, was opened at 2002-02-12 16:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516727&group_id=5470 Category: Type/class unification Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Kirill Simonov (kirill_simonov) Assigned to: Nobody/Anonymous (nobody) Summary: MyInt(2)+"3" -> NotImplemented Initial Comment: class MyInt(int): pass print MyInt(2)+"3" This code printed "NotImplemented" while I was expecting "TypeError". Not sure that this is a bug though. 
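For reference, a short sketch of the coercion protocol at issue: int.__add__ is supposed to return NotImplemented for an operand it cannot handle, and the operator machinery is then supposed to try the reflected operation and finally raise TypeError instead of letting NotImplemented escape to the caller (which is what the report above observes for the subclass):

class MyInt(int):
    pass

r = MyInt(2).__add__("3")
print r is NotImplemented      # prints 1 (true): the slot merely signals "can't handle str"
try:
    2 + "3"                    # with a plain int the machinery turns that signal...
except TypeError, e:
    print 'TypeError:', e      # ...into a TypeError, as MyInt(2) + "3" should as well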
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516727&group_id=5470 From noreply@sourceforge.net Wed Feb 13 00:27:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 16:27:41 -0800 Subject: [Python-bugs-list] [ python-Bugs-516712 ] SyntaxError tracebacks omit filename Message-ID: Bugs item #516712, was opened at 2002-02-12 15:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516712&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate >Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: Greg Ward (gward) Assigned to: Nobody/Anonymous (nobody) Summary: SyntaxError tracebacks omit filename Initial Comment: In Python 2.2, SyntaxError tracebacks no longer show the last filename, ie. the file where the crash actually occurred. This is really annoying. Here's an example: $ cat foo.py foo = $ python2.1 foo.py File "foo.py", line 1 foo = ^ SyntaxError: invalid syntax $ python2.2 foo.py File "", line 1 foo = ^ SyntaxError: invalid syntax ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-12 16:27 Message: Logged In: YES user_id=21627 Duplicate of #498828 . ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516712&group_id=5470 From noreply@sourceforge.net Wed Feb 13 01:17:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 17:17:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-516412 ] Python gettext doesn't support libglade Message-ID: Bugs item #516412, was opened at 2002-02-12 05:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Christian Reis (kiko_async) Assigned to: Nobody/Anonymous (nobody) Summary: Python gettext doesn't support libglade Initial Comment: Libglade is a library that parses XML and generates GTK-based UIs in runtime. It is written in C and supports a number of languages through bindings. James Henstridge has maintained a set of bindings for Python for some time now. These bindings work very well, _except for internationalization_. The reason seems now straightforward to me. Python's gettext.py is a pure python implementation, and because of it, bindtextdomain/textdomain are never called. This causes any C module that uses gettext to not activate the support, and not use translation because of it. Using Martin's intl.so module things work great, but it is a problem for us having to redistribute it with our application. Any other suggestions to fix? ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-12 17:17 Message: Logged In: YES user_id=21627 How does gtk invoke gettext? It sounds buggy in the respect that it expects the textdomain to be set globally; a library should not do that. Instead, the right thing (IMO) would be if gtk called dgettext, using an application-supplied domain name. It would be then the matter of the Python gtk wrapper to expose the GTK APIs for setting the text domain. 
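To make the requested behaviour concrete, a hypothetical usage sketch (these gettext functions exist today but only affect Python-level lookups; the report is asking for them to also call the C library's equivalents so that libglade's C-side gettext() sees the same settings -- 'myapp' and the locale path are just example values):

import gettext

gettext.bindtextdomain('myapp', '/usr/share/locale')
gettext.textdomain('myapp')
_ = gettext.gettext

print _('Quit')    # translated for Python code; the request is that libglade's
                   # own lookups against the same domain work too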
---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 05:52 Message: Logged In: NO If what you want is a way to call bindtextdomain/textdomain from Python, feel free to supply a patch or ask martin to add intl.so to the distribution. --Guido (@#$% SF always logs me out :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 From noreply@sourceforge.net Wed Feb 13 01:45:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 17:45:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-507442 ] Thread-Support don't work with HP-UX 11 Message-ID: Bugs item #507442, was opened at 2002-01-23 02:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 Category: Installation Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Stefan Walder (stefanwalder) Assigned to: Martin v. Löwis (loewis) Summary: Thread-Support don't work with HP-UX 11 Initial Comment: Hi, I've compiled Python 2.1.2 with the HP Ansi C-Compiler. I've used ./configure --with-threads and added -D_REENTRANT to the Makefile. But the test_thread.py don't work! [ek14] % ../../python test_thread.py creating task 1 Traceback (most recent call last): File "test_thread.py", line 46, in ? newtask() File "test_thread.py", line 41, in newtask thread.start_new_thread(task, (next_ident,)) thread.error: can't start new thread [ek14] % Any idea? More informations? Thanks Stefan Walder ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-12 17:45 Message: Logged In: YES user_id=21627 This problem looks very much like a HP-UX bug. It crashes inside the malloc implementation, and not only that: it also crashes inside the thread mutex used by malloc. I would guess there is nothing we can do about this; please ask HP for advise (or just don't use threads if they don't work) ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-02-12 12:08 Message: Logged In: YES user_id=436029 Hi, I've thought threads now work! But I think they don't! I use python 2.1.2 with Zope. Now sometimes it works. But when i add a CMF-Object I get a core dump. So I've startetd gdb and here is the log: jojo 22: gdb /opt/zope/bin/python2.1 core HP gdb 2.0 Copyright 1986 - 1999 Free Software Foundation, Inc. Hewlett-Packard Wildebeest 2.0 (based on GDB 4.17-hpwdb-980821) Wildebeest is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for Wildebeest. Type "show warranty" for details. Wildebeest was built for PA-RISC 1.1 or 2.0 (narrow), HP-UX 11.00. .. Core was generated by `python2.1'. Program terminated with signal 10, Bus error. warning: The shared libraries were not privately mapped; setting a breakpoint in a shared library will not work until you rerun the program. 
#0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 (gdb) bt #0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 #1 0xc0123ed0 in __thread_mutex_lock () from /usr/lib/libc.2 #2 0xc00a0018 in _sigfillset () from /usr/lib/libc.2 #3 0xc009e22c in _memset () from /usr/lib/libc.2 #4 0xc00a37d8 in malloc () from /usr/lib/libc.2 #5 0x3bad0 in PyFrame_New (tstate=0x0, code=0x0, globals=0x0, locals=0x0) at Objects/frameobject.c:149 #6 0xc0123f94 in __thread_mutex_unlock () from /usr/lib/libc.2 #7 (gdb) I don't know if this is a python or zope Problem and I dont't know if this bug is at the right position. Please help. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:41 Message: Logged In: YES user_id=436029 Fileupload config.h ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:36 Message: Logged In: YES user_id=436029 Hi loewis, I've uploaded the wanted files. Next week I will test python 2.2. But I need python 2.1.2 because I want to use Zope. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-24 11:53 Message: Logged In: YES user_id=21627 I can't check, but in theory, configure should (already, atleast in 2.2): 1. detect to use pthreads on HP-UX 2. therefore, define _REENTRANT in pyconfig.h (config.h for 2.1) 3. automatically link with -lpthread Stefan, can you please attach the (original, unmodified) config.h, Makefile, and config.log to this report? In Python 2.1, the test for pthreads failed, since pthread_create is a macro, and the test failed to include the proper header. This was fixed in configure.in 1.266. So: Stefan, could you also try compiling Python 2.2 on your system, and report whether the thread test case passes there? This might be a duplicate of #416696, which would suggest that properly detection of pthreads on HP-UX really is the cure. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-01-24 06:34 Message: Logged In: NO Anthony, if you want an entry on a bugs page for 2.1.2, its no problem for me to create one. Please mail the exact text that you want to appear there to describe this bug (or any other bug in 2.1.2) to webmaster@python.org and I'll take care of it. --Guido (not logged in) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-24 01:38 Message: Logged In: YES user_id=31435 I'm afraid threading on HP-UX never really works, no matter how many times users contribute config patches. They get it to work on their box, we check it in, and the next release it starts all over again. This has been going on for years and years. If you think it suddenly started working in 2.2, wait a few months . Note that the advice that you *may* have to use - D_REENTRANT on HP-UX is recorded in Python's main README file; apparently this is necessary on some unknown proper subset of HP-UX boxes. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-24 00:57 Message: Logged In: YES user_id=29957 Hm. I'm not sure, either - but this could probably get an entry on the bugs page on creosote. Anyone? Is there a "known issues" page somewhere? 
---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-23 23:59 Message: Logged In: YES user_id=436029 Hi, I've found a solution. I've added a -D_REENTRANT to the CFLAGS and an -lpthread to the LIBS: OPT= -O -D_REENTRANT DEFS= -DHAVE_CONFIG_H CFLAGS= $(OPT) -I. -I$(srcdir)/Include $(DEFS) LIBS= -lnsl -ldld LIBM= -lm -lpthread LIBC= SYSLIBS= $(LIBM) $(LIBC) Now it works for me. But I don't have any idea to put this changes into the configure script. mfG Stefan Walder ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-23 07:22 Message: Logged In: YES user_id=29957 Unfortunately, I don't have access to a HP/UX system, and I couldn't find anyone during the process of doing 2.1.2 that was willing to spend the time figuring out how and why 2.2's threading finally started working on HP/UX. Without someone to do that, I'd say the chances of this ever being addressed are close to zero. Does it work on 2.2? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 From noreply@sourceforge.net Wed Feb 13 02:15:22 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 18:15:22 -0800 Subject: [Python-bugs-list] [ python-Bugs-516412 ] Python gettext doesn't support libglade Message-ID: Bugs item #516412, was opened at 2002-02-12 05:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 Category: Python Library Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Christian Reis (kiko_async) Assigned to: Nobody/Anonymous (nobody) Summary: Python gettext doesn't support libglade Initial Comment: Libglade is a library that parses XML and generates GTK-based UIs in runtime. It is written in C and supports a number of languages through bindings. James Henstridge has maintained a set of bindings for Python for some time now. These bindings work very well, _except for internationalization_. The reason seems now straightforward to me. Python's gettext.py is a pure python implementation, and because of it, bindtextdomain/textdomain are never called. This causes any C module that uses gettext to not activate the support, and not use translation because of it. Using Martin's intl.so module things work great, but it is a problem for us having to redistribute it with our application. Any other suggestions to fix? ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-02-12 18:15 Message: Logged In: YES user_id=146903 Some libraries (libglade in this case) translate some messages on behalf of the application (libglade translates messages in the input file using the default translation domain, or some other domain specified by the programmer). This is a case of wanting python's gettext module to cooperate with the C level gettext library. For libglade, this could be achieved by making the gettext.bindtextdomain() and gettext.textdomain() calls to call the equivalent C function in addition to what they do now. For most messages in gtk+ itself, it will use dgettext() for most messages already, so isn't a problem. The exception to this is places where it allows other libraries (or the app) to register new stock items, which get translated with a programmer specified domain. 
As of gettext 0.10.40, there should be no license problems, as the license for the libintl library was changed from GPL to LGPL. It should be a fairly simple to implement this; just needs a patch :) ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-12 17:17 Message: Logged In: YES user_id=21627 How does gtk invoke gettext? It sounds buggy in the respect that it expects the textdomain to be set globally; a library should not do that. Instead, the right thing (IMO) would be if gtk called dgettext, using an application-supplied domain name. It would be then the matter of the Python gtk wrapper to expose the GTK APIs for setting the text domain. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 05:52 Message: Logged In: NO If what you want is a way to call bindtextdomain/textdomain from Python, feel free to supply a patch or ask martin to add intl.so to the distribution. --Guido (@#$% SF always logs me out :-( ) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516412&group_id=5470 From noreply@sourceforge.net Wed Feb 13 03:22:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 12 Feb 2002 19:22:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-516762 ] have a way to search backwards for re Message-ID: Bugs item #516762, was opened at 2002-02-12 19:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516762&group_id=5470 Category: Regular Expressions Group: Feature Request Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Fredrik Lundh (effbot) Summary: have a way to search backwards for re Initial Comment: There doesn't seem to be any reasonable way to search a string backwards for a regular expression, starting from a given character position. I notice that the underlying C regular expression implemention supports a direction flag. I propose adding a direction flag to the search function on match objects: r = re.compile(...) m = re.search(str, startpos=5000, endpos=-1, dir=-1) would search in str for r, starting at location 5000 and searching backwards through location 0 (the beginning of the string). This is useful in (for example) text editors where you want to be able to search forwards or backwards, or if you're parsing an html file and see a and want to find the matching , etc. phr ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516762&group_id=5470 From noreply@sourceforge.net Wed Feb 13 10:45:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 02:45:41 -0800 Subject: [Python-bugs-list] [ python-Bugs-516299 ] urlparse can get fragments wrong Message-ID: Bugs item #516299, was opened at 2002-02-11 20:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) >Assigned to: Nobody/Anonymous (nobody) Summary: urlparse can get fragments wrong Initial Comment: urlparse.urlparse() goes wrong on a URL such as 'http://amk.ca#foo', where there's a fragment identifier and the hostname isn't followed by a slash. 
It returns 'amk.ca#foo' as the hostname portion of the URL. While looking at that, I realized that test_urlparse() only tests urljoin(), not urlparse() or urlunparse(). The attached patch also adds a minimal test suite for urlparse(), but it should be still more comprehensive. Unfortunately the RFC doesn't include test cases, so I haven't done this yet. (Assigned to you at random, Michael; feel free to unassign it if you lack the time.) ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-13 02:45 Message: Logged In: YES user_id=6656 Sorry, don't know *anything* about URLs and don't really have the time to learn now... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 From noreply@sourceforge.net Wed Feb 13 12:12:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 04:12:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-504343 ] Unicode docstrings and new style classes Message-ID: Bugs item #504343, was opened at 2002-01-16 04:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) >Assigned to: Martin v. Löwis (loewis) Summary: Unicode docstrings and new style classes Initial Comment: Unicode docstrings don't work with new style classes. With old style classes they work: ---- class foo: u"föö" class bar(object): u"bär" print repr(foo.__doc__) print repr(bar.__doc__) ---- This prints ---- u'f\xf6\xf6' None ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 08:07 Message: Logged In: NO Not forgotten, but I've been busy, and will continue to be so... ;-( --Guido ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-02-12 07:21 Message: Logged In: YES user_id=146903 Just wondering if this bug has been forgotten or not. My patch came out a bit weird w.r.t. line wrapping, so you can get here instead: http://www.daa.com.au/~james/files/type-doc.patch I would have added it as an attachment if the SF bug tracker didn't prevent me from doing so (bugzilla is much nicer to use for things like this). ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 02:10 Message: Logged In: YES user_id=146903 Put together a patch that gets rid of the type.__doc__ property, and sets __doc__ in PyType_Ready() (if appropriate). 
Seems to work okay in my tests and as a bonus, "print type.__doc__" actually prints documentation on using the type() function :) SF doesn't seem to give me a way to attach a patch to this bug, so I will paste a copy of the patch here (if it is mangled, email me at james@daa.com.au for a copy): --- Python-2.2/Objects/typeobject.c.orig Tue Dec 18 01:14:22 2001 +++ Python-2.2/Objects/typeobject.c Sun Jan 27 17:56:37 2002 @@ -8,7 +8,6 @@ static PyMemberDef type_members[] = { {"__basicsize__", T_INT, offsetof(PyTypeObject,tp_basicsize),READONLY}, {"__itemsize__", T_INT, offsetof(PyTypeObject, tp_itemsize), READONLY}, {"__flags__", T_LONG, offsetof(PyTypeObject, tp_flags), READONLY}, - {"__doc__", T_STRING, offsetof(PyTypeObject, tp_doc), READONLY}, {"__weakrefoffset__", T_LONG, offsetof(PyTypeObject, tp_weaklistoffset), READONLY}, {"__base__", T_OBJECT, offsetof(PyTypeObject, tp_base), READONLY}, @@ -1044,9 +1043,9 @@ type_new(PyTypeObject *metatype, PyObjec } /* Set tp_doc to a copy of dict['__doc__'], if the latter is there - and is a string (tp_doc is a char* -- can't copy a general object - into it). - XXX What if it's a Unicode string? Don't know -- this ignores it. + and is a string. Note that the tp_doc slot will only be used + by C code -- python code will use the version in tp_dict, so + it isn't that important that non string __doc__'s are ignored. */ { PyObject *doc = PyDict_GetItemString(dict, "__doc__"); @@ -2024,6 +2023,19 @@ PyType_Ready(PyTypeObject *type) inherit_slots(type, (PyTypeObject *)b); } + /* if the type dictionary doesn't contain a __doc__, set it from + the tp_doc slot. + */ + if (PyDict_GetItemString(type->tp_dict, "__doc__") == NULL) { + if (type->tp_doc != NULL) { + PyObject *doc = PyString_FromString(type->tp_doc); + PyDict_SetItemString(type->tp_dict, "__doc__", doc); + Py_DECREF(doc); + } else { + PyDict_SetItemString(type->tp_dict, "__doc__", Py_None); + } + } + /* Some more special stuff */ base = type->tp_base; if (base != NULL) { ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 01:37 Message: Logged In: YES user_id=146903 I am posting some comments about this patch after my similar bug was closed as a duplicate: http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=507394 I just tested the typeobject.c patch, and it doesn't work when using a descriptor as the __doc__ for an object (the descriptor itself is returned for class.__doc__ rather than the result of the tp_descr_get function). With the patch applied, the output of the program attached to the above mentioned bug is: OldClass.__doc__ = 'object=None type=OldClass' OldClass().__doc__ = 'object=OldClass instance type=OldClass' NewClass.__doc__ = <__main__.DocDescr object at 0x811ce34> NewClass().__doc__ = 'object=NewClass instance type=NewClass' The suggestion I gave in the other bug is to get rid of the type.__doc__ property/getset all together, and make PyType_Ready() set __doc__ in tp_dict based on the value of tp_doc. Is there any reason why this wouldn't work? (it would seem to give behaviour more consistant with old style classes, which would be good). I will look at producing a patch to do this shortly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 08:14 Message: Logged In: YES user_id=89016 This sound much better. With my current patch all the docstrings for the builltin types are gone, because int etc. 
never goes through typeobject.c/type_new(). I updated the patch to use Guido's method. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-17 06:25 Message: Logged In: YES user_id=6380 Wouldn't it be easier to set the __doc__ attribute in tp_dict and be done with it? That's what classic classes do. The accessor should still be a bit special: it should be implemented as a property (in tp_getsets), and first look for __doc__ in tp_dict and fall back to tp_doc. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 06:19 Message: Logged In: YES user_id=89016 OK, I've attached the patch. Note that I had to change the return value of PyStructSequence_InitType from void to int. Introducing tp_docobject should provide backwards compatibility for C extensions that still want to use tp_doc as char *. If this is not relevant then we could switch to PyObject *tp_doc immediately, but this complicates initializing a static type structure. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-17 05:45 Message: Logged In: YES user_id=21627 Adding tp_docobject would work, although it may be somewhat hackish (why should we have this kind of redundancy). I'm not sure how you will convert that to the 8bit version, though: what encoding? If you use the default encoding, tp_doc will be sometimes set, sometimes it won't. In any case, I'd encourage you to produce a patch. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-16 05:03 Message: Logged In: YES user_id=89016 What we could do is add a new slot tp_docobject, that holds the doc object. Then type_members would include {"__doc__", T_OBJECT, offsetof(PyTypeObject, tp_docobject), READONLY}, tp_doc should be initialized with an 8bit version of tp_docobject (using the default encoding and error='ignore' if tp_docobject is unicode). Does this sound reasonably? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-16 04:18 Message: Logged In: YES user_id=21627 There is a good chance that is caused by the lines following XXX What if it's a Unicode string? Don't know -- this ignores it. in Objects/typeobject.c. :-) Would you like to investigate the options and propose a patch? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 From noreply@sourceforge.net Wed Feb 13 12:39:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 04:39:01 -0800 Subject: [Python-bugs-list] [ python-Bugs-495401 ] Build troubles: --with-pymalloc Message-ID: Bugs item #495401, was opened at 2001-12-20 05:24 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 Category: Build Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. 
Löwis (loewis) Summary: Build troubles: --with-pymalloc Initial Comment: The build process segfaults with the current CVS version when using --with-pymalloc System is SuSE Linux 7.0 > uname -a Linux amazonas 2.2.16-SMP #1 SMP Wed Aug 2 20:01:21 GMT 2000 i686 unknown > gcc -v Reading specs from /usr/lib/gcc-lib/i486-suse- linux/2.95.2/specs gcc version 2.95.2 19991024 (release) Attached is the complete build log. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-13 04:39 Message: Logged In: YES user_id=38388 Tim, I ran the test on my notebook and guess what: when compiling Python with -mcpu=pentium I get 58 vs. 59.8 (with / without patch) when compiling Python with -mcpu=i686, it's 65.8 vs. 60.17 (with / without patch) No idea what chip is used in the notebook. It's pretty old, though. I used gcc 2.95.2 and some old SuSE Linux version (glibc 2). Would be interesting to check this on a modern pentium 4 machine and maybe a more recent sun sparc machine. Oh yes, and your Cray, of coure ;-) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 13:06 Message: Logged In: YES user_id=31435 MAL, cool -- I saw a major slowdown using the patch too, but not nearly as dramatic as you saw, so was curious about what could account for that. Chip, compiler and OS can all have major effects. I assume Martin is using a Pentium box, so assuming your notebook is running Linux too, it would be good to get another LinTel datapoint. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 12:50 Message: Logged In: YES user_id=38388 Tim: Yes, I'm still all AMD based... it's Athlon 1200 I'm running. PGCC (the Pentium GCC groups version) has a special AMD optimization mode for Athlon which is what I'm using. Somebody has to hold up the flag against the Wintel camp ;-) Hmm, I could do the same tests on my notebook which runs on one of those Inteliums. Maybe tomorrow... ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-11 12:06 Message: Logged In: YES user_id=31435 time.time() sucks for benchmarking on Windows (updates at about 18Hz). Running the test as-is, MSVC6 and Win98SE, it's 1.3 seconds with current CVS, and 1.7 with unicode3.diff. The quantization error in Windows time.time() is > 0.05 seconds, so no point pretending there are 3 significant digits there; luckily(?), it's apparent there's a major difference with just 2 digits. MAL, are you still using an AMD box? In a decade, nobody else has ever reproduced the timing results you see . ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 10:42 Message: Logged In: YES user_id=38388 Ok, with 100000 loops and time.clock() I get: 4.690 - 4.710 without your patch, 9.560 - 9.570 with your patch (again, without pymalloc and the same compiler/machine on SuSE 7.1). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 10:04 Message: Logged In: YES user_id=21627 time.clock vs. time.time does not make a big difference on an unloaded machine (except time.time has a higher resolution). Can you please run the test 10x more often? I then get 12.520 clocks with CVS python, glibc 2.2.4, gcc 2.95, and 10.890 with my patch. 
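The time_utf8.py attachment mentioned in this thread is not part of the archive; a rough stand-in for the kind of harness being timed (hypothetical; bench() is a made-up name, and time.clock() is used as recommended above) might be:

import time

def bench(u, loops=100000):
    # Time repeated UTF-8 encoding of a unicode string using the CPU clock.
    start = time.clock()
    for i in xrange(loops):
        u.encode('utf-8')
    return time.clock() - start

text = u'f\xf6\xf6 b\xe4r baz ' * 20
print '%.3f seconds' % bench(text)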
---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-11 09:49 Message: Logged In: YES user_id=38388 I get different timings (note that you have to use time.clock() for benchmarks, not time.time()): without your patch: 0.470 seconds with your patch: 0.960 seconds This is on Linux with pgcc 2.95.2, glibc 2.2, without pymalloc (which is the normal configuration). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-11 09:06 Message: Logged In: YES user_id=21627 Marc: Please have a look at pymalloc; it cannot be "fixed". It is in the nature of a pool allocator that you have to copy when you want to move between pools - or you have to waste the extra space. I agree that UTF-8 coding needs to be fast; that's why I wrote this patch. I've revised it to fit the current implementation, and too add the assert that Tim has requested (unicode3.diff). For the test case time_utf8.zip (which is a UTF-8 converted Misc/ACKS), the version that first counts the size is about 10% faster on my system (Linux glibc 2.2.4) (see timings inside time_utf8.py; #592 is the patched version). So the price for counting the size turns out to be negligible, and to offer significant, and is more than compensated for by the reduction of calls to the memory management system. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-08 14:27 Message: Logged In: YES user_id=38388 Good news, Walter. Martin: As I explained in an earlier comment, pymalloc needs to be fixed to better address overallocation. The two pass logic would avoid overallocation, but at a high price. Copying memory (if at all needed) is likely to be *much* faster. The UTF-8 codec has to be as fast as possible since it is one of the most used codecs in Python's Unicode implementation. Also note that I have reduced overallocation to 2*size in the codec. I suggest to close the bug report. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-07 14:59 Message: Logged In: YES user_id=21627 I still think the current algorithm has serious problems as it is based on overallocation, and that it should be replaced with an algorithm that counts the memory requirements upfront. This will be particularly important for pymalloc, but will also avoid unnecessary copies for many other malloc implementations. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-07 01:58 Message: Logged In: YES user_id=89016 I tried the current CVS and make altinstall runs to completion. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-06 10:12 Message: Logged In: YES user_id=38388 I've checked in a patch which fixes the memory allocation problem. Please give it a try and tell me whether this fixes your problem, Walter. If so, I'd suggest to close the bug. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-31 11:36 Message: Logged In: YES user_id=31435 Martin, I like your second patch fine, but would like it a lot better with assert(p - PyString_AS_STRING(v) == cbAllocated); at the end. Else a piddly change in either loop can cause more miserably hard-to-track-down problems. 
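A Python-level sketch of the "count first, then allocate exactly" idea discussed in this thread (the real change is C code inside the UTF-8 encoder; utf8_size() is only an illustration, and on a UCS-2 build it counts each surrogate half as 3 bytes, i.e. 6 per pair instead of the 4 the encoder actually emits -- the same surrogate wrinkle mentioned in the comments):

def utf8_size(u):
    # Worst case is 4 bytes per character; an exact count removes the
    # 3*size-versus-4*size overallocation question entirely.
    n = 0
    for ch in u:
        c = ord(ch)
        if c < 0x80:
            n += 1
        elif c < 0x800:
            n += 2
        elif c < 0x10000:
            n += 3      # includes surrogate halves on a UCS-2 build
        else:
            n += 4      # only reachable on a UCS-4 build
    return n

for s in (u'abc', u'f\xf6\xf6', u'\u20ac'):
    print repr(s), utf8_size(s), len(s.encode('utf-8'))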
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-31 05:46 Message: Logged In: YES user_id=21627 MAL, I'm 100% positive that the crash on my system was caused by the UTF-8 encoding; I've seen it in the debugger overwrite memory that it doesn't own. As for unicode.diff: Tim has proposed that this should not be done, but that 4*size should be allocated in advance. What do you think? On unicode2.diff: If pymalloc was changed to shrink the memory, it would have to copy the original string, since it would likely be in a different size class. This is less efficient than the approach taken in unicode2.diff. What specifically is it that you dislike about first counting the memory requirements? It actually simplifies the code. Notice that the current code is still buggy with regard to surrogates. If there is a high surrogate, but not a low one, it will write bogus UTF-8 (with no lead byte). This is fixed in unicode2.diff as well. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-12-31 04:48 Message: Logged In: YES user_id=38388 I like the unicode.diff, but not the unicode2.diff. Instead of fixing the UTF-8 codec here we should fix the pymalloc implementation, since overallocation is common thing to do and not only used in codecs. (Note that all Python extensions will start to use pymalloc too once we enable it per default.) I don't know whether it's relevant, but I found that core dumps can easily be triggered by mixing the various memory allocation APIs in Python and the C lib. The bug may not only be related to the UTF-8 codec but may also linger in some other extension modules. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-30 02:26 Message: Logged In: YES user_id=21627 I disabled threading, which fortunately gave me memory watchpoints back. Then I noticed that the final *p=0 corrupted a non-NULL freeblock pointer, slightly decreasing it. Then following the freeblock pointer, freeblock was changed to a bogus block, which had its next pointer as garbage. I had to trace this from the back, of course. As for overallocation, I wonder whether the UTF-8 encoding should overallocate at all. unicode2.diff is a modification where it runs over the string twice, counting the number of needed bytes the first time. This is likely slower (atleast if no reallocations occur), but doesn't waste that much memory (I notice that pymalloc will never copy objects when they shrink, to return the extra space). ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 19:54 Message: Logged In: YES user_id=31435 Good eye, Martin! It's clearly possible for the unpatched code to write beyond the memory allocated. The one thing that doesn't jibe is that you said bp is 0x2, which means 3 of its 4 bytes are 0x00, but UTF-8 doesn't produce 0 bytes except for one per \u0000 input character. Right? So, if this routine is the cause, where are the 0 bytes coming from? (It could be test_unicode sets up a UTF-8 encoding case with several \u0000 characters, but if so I didn't stumble into it.) Plausible: when a new pymalloc "page" is allocated, the 40-byte chunks in it are *not* linked together at the start. 
Instead a NULL pointer is stored at just the start of "the first" 40-byte chunk, and pymalloc-- on subsequent mallocs --finds that NULL and incrementally carves out additional 40-byte chunks. So as a startup-- but not a steady-state --condition, the "next free block" pointers will very often be NULLs, and then if this is a little-endian machine, writing a single 2 byte at the start of a free block would lead to a bogus pointer value of 0x2. About a fix, I'm in favor of junking all the cleverness here, by allocating size*4 bytes from the start. It's overallocating in all normal cases already, so we're going to incur the expense of cutting the result string back anyway; how *much* we overallocate doesn't matter to speed, except that if we don't have to keep checking inside the loop, the code gets simpler and quicker and more robust. The loop should instead merely assert that cbWritten <= cbAllocated at the end of each trip. Had this been done from the start, a debug build would have assert-failed a few nanoseconds after the wild store. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 17:44 Message: Logged In: YES user_id=21627 I've found one source of troubles, see the attached unicode.diff. Guido's intuition was right; it was a UCS-4 problem: EncodeUTF8 would over-allocate 3*size bytes, but can actually write 4*size in the worst case, which occurs in test_unicode. I'll leave the patch for review and experiments; it fixes the problem for me. The existing adjustment for surrogates is pointless, IMO: for the surrogate pair, it will allocate 6 bytes UTF-8 in advance, which is more than actually needed. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 15:13 Message: Logged In: YES user_id=21627 It's a Heisenbug. I found that even the slightest modifications to the Python source make it come and go, or appear at different places. On my system, the crashes normally occur in the first run (.pyc). So I don't think the order of make invocations is the source of the problem. It's likely as Tim says: somebody overwrites memory somewhere. Unfortunately, even though I can reproduce crashes for the same pool, for some reason, my gdb memory watches don't trigger. Tim's approach of checking whether the value came from following the free list did not help, either: the bug disappeared under the change. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 14:01 Message: Logged In: YES user_id=6656 Hmm. I now think that the stuff about extension modules is almost certainly a red herring. What I said about "make && make altinstall" vs "make altinstall" still seems to be true, though. If you compile with --with-pydebug, you crash right at the end of the second (-O) run of compileall.py -- I suspect this is something else, but it might not be. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2001-12-29 13:29 Message: Logged In: YES user_id=6656 I don't know if these are helpful observations or not, but anyway: (1) it doesn't core without the --enable-unicode=ucs4 option (2) if you just run "make altinstall" the library files are installed *and compiled* before the dynamically linked modules are built. Then we don't crash. (3) if you run "make altinstall" again, we crash. If you initially ran "make && make install", we crash.
(4) when we crash, it's not long after the unicode tests are compiled. Are these real clues or just red herrings? I'm afraid I can't tell :( ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-29 10:43 Message: Logged In: YES user_id=31435 Ouch. Boosted priority back to 5, since Martin can reproduce it. Alas, where pymalloc got called *from* is almost certainly irrelevant -- we're seeing the end result of earlier corruption. Note that pymalloc is unusually sensitive to off-by-1 stores, since the chunks it hands out are contiguous (there's no hidden bookkeeping padding between them). Plausible: an earlier bogus store went beyond the end of its allocated chunk, overwriting the "next free block" pointer at the start of a previously free()'ed chunk of the same size (rounded up to a multiple of 8; 40 bytes in this case). At the time this blows up, bp is supposed to point to a previously free()'ed chunk of size 40 bytes (if there were none free()'ed and available, the earlier "pool != pool- >nextpool" guard should have failed). The first 4 bytes (let's simplify by assuming this is a 32-bit box) of the free chunks link the free chunks together, most recently free()'ed at the start of the (singly linked) list. So the code at this point is intent on returning bp, and "pool- >freeblock = *(block **)bp" is setting the 40-byte-chunk list header's idea of the *next* available 40-byte chunk. But bp is bogus. The value of bp is gotten out of the free list headers, the static array usedpools. This mechanism is horridly obscure, an array of pointer pairs that, in effect, capture just the first two members of the pool_header struct, once for each chunk size. It's possible that someone is overwriting usedpools[4 + 4]- >freeblock directly with 2, but that seems unlikely. More likely is that a free() operation linked a 40-byte chunk into the list headed at usedpools[4+4]->freeblock correctly, and a later bad store overwrote the first 4 bytes of the free()'ed block with 2. Then the "pool- >freeblock = *(block **)bp)" near the start of an unexceptional pymalloc would copy the 2 into the list header's freeblock without complaint. The error wouldn't show up until a subsequent malloc tried to use it. So that's one idea to get closer to the cause: add code to dereference pool->freeblock, before the "return (void *) bp". If that blows up earlier, then the first four bytes of bp were corrupted, and that gives you a useful data breakpoint address for the next run. If it doesn't blow up earlier, the corruption will be harder to find, but let's count on being lucky at first . ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 08:00 Message: Logged In: YES user_id=21627 Ok, I can reproduce it now; I did not 'make install' before. 
Here is a gdb back trace #0 _PyCore_ObjectMalloc (nbytes=33) at Objects/obmalloc.c:417 #1 0x805885c in PyString_FromString (str=0x816c6e8 "checkJoin") at Objects/stringobject.c:136 #2 0x805d772 in PyString_InternFromString (cp=0x816c6e8 "checkJoin") at Objects/stringobject.c:3640 #3 0x807c9c6 in com_addop_varname (c=0xbfffe87c, kind=0, name=0x816c6e8 "checkJoin") at Python/compile.c:939 #4 0x807de24 in com_atom (c=0xbfffe87c, n=0x816c258) at Python/compile.c:1478 #5 0x807f01c in com_power (c=0xbfffe87c, n=0x816c8b8) at Python/compile.c:1846 #6 0x807f545 in com_factor (c=0xbfffe87c, n=0x816c898) at Python/compile.c:1975 #7 0x807f56c in com_term (c=0xbfffe87c, n=0x816c878) at Python/compile.c:1985 #8 0x807f6bc in com_arith_expr (c=0xbfffe87c, n=0x816c858) at Python/compile.c:2020 #9 0x807f7dc in com_shift_expr (c=0xbfffe87c, n=0x816c838) at Python/compile.c:2046 #10 0x807f8fc in com_and_expr (c=0xbfffe87c, n=0x816c818) at Python/compile.c:2072 #11 0x807fa0c in com_xor_expr (c=0xbfffe87c, n=0x816c7f8) at Python/compile.c:2094 ... The access that crashes is *(block **)bp, since bp is 0x2. Not sure how that happens; I'll investigate (but would appreciate a clue). It seems that the pool chain got corrupted. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-29 06:52 Message: Logged In: YES user_id=6380 Aha! The --enable-unicode=ucs4 is more suspicious than the --with-pymalloc. I had missed that info when this was first reported. Not that I'm any closer to solving it... :-( ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-29 02:53 Message: Logged In: YES user_id=89016 OK, I did a "make distclean" which removed .o files and the build directory and redid a "./configure --enable- unicode=ucs4 --with-pymalloc && make && make altinstall". The build process still crashes in the same spot: Compiling /usr/local/lib/python2.2/test/test_urlparse.py ... make: *** [libinstall] Segmentation fault I also retried with a fresh untarred Python-2.2.tgz. This shows the same behaviour. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-29 01:23 Message: Logged In: YES user_id=21627 Atleast I cannot reproduce it, on SuSE 7.3. Can you retry this, building from a clean source tree (no .o files, no build directory)? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-28 14:30 Message: Logged In: YES user_id=6380 My prediction: this is irreproducible. Lowering the priority accordingly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-12-23 04:24 Message: Logged In: YES user_id=89016 Unfortunately no core file was generated. Can I somehow force core file generation? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:42 Message: Logged In: YES user_id=21627 Did that produce a core file? If so, can you attach a gdb backtrace as well? 
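As an aside on Walter's question above about forcing core file generation: a minimal sketch, assuming a Unix system where the standard resource module is available (from the shell, "ulimit -c unlimited" before running make has the same effect):

    import resource

    # Raise the soft core-file size limit to the hard limit for this process
    # (child processes inherit it); a crash afterwards should leave a core file.
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    print "core file limits (soft, hard):", resource.getrlimit(resource.RLIMIT_CORE)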
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495401&group_id=5470 From noreply@sourceforge.net Wed Feb 13 13:39:17 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 05:39:17 -0800 Subject: [Python-bugs-list] [ python-Bugs-516965 ] __del__ is not called correctly Message-ID: Bugs item #516965, was opened at 2002-02-13 05:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Christoph Wiedemann (wiedeman) Assigned to: Nobody/Anonymous (nobody) Summary: __del__ is not called correctly Initial Comment: Hello, I found some strange behaviour in classes which provide their own destructor. The python version is 2.2, compiled with gcc 3.0.1 running on linux (kernel 2.2.18, glibc 2.2). Following an interactive python session: >>> class A: ... def __init__(self): ... print "const" ... def __del__(self): ... print "dest" ... >>> a = A() const >>> a dest <__main__.A instance at 0x81c9894> >>> del a >>> As you can see, the destructor is called, before the object is deleted. On a Python 2.1 version, the behaviour is correct (the destructor is called _after_ 'del a') Bye, Christoph Wiedemann ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 From noreply@sourceforge.net Wed Feb 13 13:45:34 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 05:45:34 -0800 Subject: [Python-bugs-list] [ python-Bugs-516965 ] __del__ is not called correctly Message-ID: Bugs item #516965, was opened at 2002-02-13 05:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Christoph Wiedemann (wiedeman) Assigned to: Nobody/Anonymous (nobody) Summary: __del__ is not called correctly Initial Comment: Hello, I found some strange behaviour in classes which provide their own destructor. The python version is 2.2, compiled with gcc 3.0.1 running on linux (kernel 2.2.18, glibc 2.2). Following an interactive python session: >>> class A: ... def __init__(self): ... print "const" ... def __del__(self): ... print "dest" ... >>> a = A() const >>> a dest <__main__.A instance at 0x81c9894> >>> del a >>> As you can see, the destructor is called, before the object is deleted. On a Python 2.1 version, the behaviour is correct (the destructor is called _after_ 'del a') Bye, Christoph Wiedemann ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-13 05:45 Message: Logged In: YES user_id=6380 I can't reproduce this. Can you try this in a clean session? I bet you can't, either. Most likely, the "dest" output you see is from a *previous* object that was saved in the "last output value register", the built-in "_". 
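A minimal sketch (not taken from the report) of the effect Guido describes: as long as some other reference to the object survives (in the interactive interpreter that extra reference is the builtin "_" holding the last displayed value), "del a" only removes the name and __del__ is not called yet:

    class A:
        def __init__(self):
            print "const"
        def __del__(self):
            print "dest"

    a = A()          # prints "const"
    extra = a        # plays the role of the interactive "_" binding
    del a            # no "dest" yet: "extra" still refers to the object
    print "name 'a' deleted"
    del extra        # last reference dropped, so "dest" is printed here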
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 From noreply@sourceforge.net Wed Feb 13 17:08:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 09:08:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-516965 ] __del__ is not called correctly Message-ID: Bugs item #516965, was opened at 2002-02-13 05:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Christoph Wiedemann (wiedeman) Assigned to: Nobody/Anonymous (nobody) Summary: __del__ is not called correctly Initial Comment: Hello, I found some strange behaviour in classes which provide their own destructor. The python version is 2.2, compiled with gcc 3.0.1 running on linux (kernel 2.2.18, glibc 2.2). Following an interactive python session: >>> class A: ... def __init__(self): ... print "const" ... def __del__(self): ... print "dest" ... >>> a = A() const >>> a dest <__main__.A instance at 0x81c9894> >>> del a >>> As you can see, the destructor is called, before the object is deleted. On a Python 2.1 version, the behaviour is correct (the destructor is called _after_ 'del a') Bye, Christoph Wiedemann ---------------------------------------------------------------------- Comment By: Christoph Wiedemann (wiedeman) Date: 2002-02-13 09:08 Message: Logged In: YES user_id=457359 You are right. Sorry for bothering you. Christoph ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-13 05:45 Message: Logged In: YES user_id=6380 I can't reproduce this. Can you try this in a clean session? I bet you can't, either. Most likely, the "dest" output you see is from a *previous* object that was saved in the "last output value register", the built-in "_". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 From noreply@sourceforge.net Wed Feb 13 17:17:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 09:17:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-516965 ] __del__ is not called correctly Message-ID: Bugs item #516965, was opened at 2002-02-13 05:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Christoph Wiedemann (wiedeman) >Assigned to: Guido van Rossum (gvanrossum) Summary: __del__ is not called correctly Initial Comment: Hello, I found some strange behaviour in classes which provide their own destructor. The python version is 2.2, compiled with gcc 3.0.1 running on linux (kernel 2.2.18, glibc 2.2). Following an interactive python session: >>> class A: ... def __init__(self): ... print "const" ... def __del__(self): ... print "dest" ... >>> a = A() const >>> a dest <__main__.A instance at 0x81c9894> >>> del a >>> As you can see, the destructor is called, before the object is deleted. 
On a Python 2.1 version, the behaviour is correct (the destructor is called _after_ 'del a') Bye, Christoph Wiedemann ---------------------------------------------------------------------- Comment By: Christoph Wiedemann (wiedeman) Date: 2002-02-13 09:08 Message: Logged In: YES user_id=457359 You are right. Sorry for bothering you. Christoph ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-13 05:45 Message: Logged In: YES user_id=6380 I can't reproduce this. Can you try this in a clean session? I bet you can't, either. Most likely, the "dest" output you see is from a *previous* object that was saved in the "last output value register", the built-in "_". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516965&group_id=5470 From noreply@sourceforge.net Wed Feb 13 21:51:35 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 13:51:35 -0800 Subject: [Python-bugs-list] [ python-Bugs-516299 ] urlparse can get fragments wrong Message-ID: Bugs item #516299, was opened at 2002-02-11 20:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urlparse can get fragments wrong Initial Comment: urlparse.urlparse() goes wrong on a URL such as 'http://amk.ca#foo', where there's a fragment identifier and the hostname isn't followed by a slash. It returns 'amk.ca#foo' as the hostname portion of the URL. While looking at that, I realized that test_urlparse() only tests urljoin(), not urlparse() or urlunparse(). The attached patch also adds a minimal test suite for urlparse(), but it should be still more comprehensive. Unfortunately the RFC doesn't include test cases, so I haven't done this yet. (Assigned to you at random, Michael; feel free to unassign it if you lack the time.) ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-13 02:45 Message: Logged In: YES user_id=6656 Sorry, don't know *anything* about URLs and don't really have the time to learn now... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 From noreply@sourceforge.net Wed Feb 13 21:57:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 13:57:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-517214 ] expat.h not found when building in subdi Message-ID: Bugs item #517214, was opened at 2002-02-13 13:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517214&group_id=5470 Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None Priority: 6 Submitted By: Jeremy Hylton (jhylton) Assigned to: Martin v. Löwis (loewis) Summary: expat.h not found when building in subdi Initial Comment: I build Python in a subdirectory of the source directory. The source is in python/dist/src; I build in python/dist/src/build. The recent changes to include expat in setup.py fails because it adds "Modules/expat" to include_dirs, but "Modules/expat" is being interpreted relative to the build directory not the source directory. 
I was able to get a successful build by changing the include_dir to "../Modules/expat" But obviously that is not a real solution. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517214&group_id=5470 From noreply@sourceforge.net Thu Feb 14 00:04:24 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 16:04:24 -0800 Subject: [Python-bugs-list] [ python-Bugs-507442 ] Thread-Support don't work with HP-UX 11 Message-ID: Bugs item #507442, was opened at 2002-01-23 02:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 Category: Installation Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Stefan Walder (stefanwalder) Assigned to: Martin v. Löwis (loewis) Summary: Thread-Support don't work with HP-UX 11 Initial Comment: Hi, I've compiled Python 2.1.2 with the HP Ansi C-Compiler. I've used ./configure --with-threads and added -D_REENTRANT to the Makefile. But the test_thread.py don't work! [ek14] % ../../python test_thread.py creating task 1 Traceback (most recent call last): File "test_thread.py", line 46, in ? newtask() File "test_thread.py", line 41, in newtask thread.start_new_thread(task, (next_ident,)) thread.error: can't start new thread [ek14] % Any idea? More informations? Thanks Stefan Walder ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2002-02-13 16:04 Message: Logged In: YES user_id=29957 Unless someone with a) fairly deep knowledge of HP/UX, b) access to a HP/UX machine and c) the spare time and effort to debug this steps forward, the chances of this being fixed are zero. (I still think adding a resolution of 'HP/UX' to the bug tracker would allow us to close a whooole lotta bugs) ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-12 17:45 Message: Logged In: YES user_id=21627 This problem looks very much like a HP-UX bug. It crashes inside the malloc implementation, and not only that: it also crashes inside the thread mutex used by malloc. I would guess there is nothing we can do about this; please ask HP for advise (or just don't use threads if they don't work) ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-02-12 12:08 Message: Logged In: YES user_id=436029 Hi, I've thought threads now work! But I think they don't! I use python 2.1.2 with Zope. Now sometimes it works. But when i add a CMF-Object I get a core dump. So I've startetd gdb and here is the log: jojo 22: gdb /opt/zope/bin/python2.1 core HP gdb 2.0 Copyright 1986 - 1999 Free Software Foundation, Inc. Hewlett-Packard Wildebeest 2.0 (based on GDB 4.17-hpwdb-980821) Wildebeest is free software, covered by the GNU General Public License, and you are welcome to change it and/or distribute copies of it under certain conditions. Type "show copying" to see the conditions. There is absolutely no warranty for Wildebeest. Type "show warranty" for details. Wildebeest was built for PA-RISC 1.1 or 2.0 (narrow), HP-UX 11.00. .. Core was generated by `python2.1'. Program terminated with signal 10, Bus error. warning: The shared libraries were not privately mapped; setting a breakpoint in a shared library will not work until you rerun the program. 
#0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 (gdb) bt #0 0xc2331920 in pthread_mutex_lock () from /usr/lib/libpthread.1 #1 0xc0123ed0 in __thread_mutex_lock () from /usr/lib/libc.2 #2 0xc00a0018 in _sigfillset () from /usr/lib/libc.2 #3 0xc009e22c in _memset () from /usr/lib/libc.2 #4 0xc00a37d8 in malloc () from /usr/lib/libc.2 #5 0x3bad0 in PyFrame_New (tstate=0x0, code=0x0, globals=0x0, locals=0x0) at Objects/frameobject.c:149 #6 0xc0123f94 in __thread_mutex_unlock () from /usr/lib/libc.2 #7 (gdb) I don't know if this is a python or zope Problem and I dont't know if this bug is at the right position. Please help. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:41 Message: Logged In: YES user_id=436029 Fileupload config.h ---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-25 02:36 Message: Logged In: YES user_id=436029 Hi loewis, I've uploaded the wanted files. Next week I will test python 2.2. But I need python 2.1.2 because I want to use Zope. Thanks Stefan Walder ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-24 11:53 Message: Logged In: YES user_id=21627 I can't check, but in theory, configure should (already, atleast in 2.2): 1. detect to use pthreads on HP-UX 2. therefore, define _REENTRANT in pyconfig.h (config.h for 2.1) 3. automatically link with -lpthread Stefan, can you please attach the (original, unmodified) config.h, Makefile, and config.log to this report? In Python 2.1, the test for pthreads failed, since pthread_create is a macro, and the test failed to include the proper header. This was fixed in configure.in 1.266. So: Stefan, could you also try compiling Python 2.2 on your system, and report whether the thread test case passes there? This might be a duplicate of #416696, which would suggest that properly detection of pthreads on HP-UX really is the cure. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-01-24 06:34 Message: Logged In: NO Anthony, if you want an entry on a bugs page for 2.1.2, its no problem for me to create one. Please mail the exact text that you want to appear there to describe this bug (or any other bug in 2.1.2) to webmaster@python.org and I'll take care of it. --Guido (not logged in) ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-24 01:38 Message: Logged In: YES user_id=31435 I'm afraid threading on HP-UX never really works, no matter how many times users contribute config patches. They get it to work on their box, we check it in, and the next release it starts all over again. This has been going on for years and years. If you think it suddenly started working in 2.2, wait a few months . Note that the advice that you *may* have to use - D_REENTRANT on HP-UX is recorded in Python's main README file; apparently this is necessary on some unknown proper subset of HP-UX boxes. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-24 00:57 Message: Logged In: YES user_id=29957 Hm. I'm not sure, either - but this could probably get an entry on the bugs page on creosote. Anyone? Is there a "known issues" page somewhere? 
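For reference, a minimal sketch (an illustration only, not the actual test_thread.py) of the kind of check that fails on the affected HP-UX builds with "thread.error: can't start new thread":

    import thread, time

    def task(ident):
        print "task", ident, "running"

    try:
        thread.start_new_thread(task, (1,))
        time.sleep(1)    # crude: give the thread a chance to run before exiting
        print "thread started OK"
    except thread.error, msg:
        print "can't start new thread:", msg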
---------------------------------------------------------------------- Comment By: Stefan Walder (stefanwalder) Date: 2002-01-23 23:59 Message: Logged In: YES user_id=436029 Hi, I've found a solution. I've added a -D_REENTRANT to the CFLAGS and an -lpthread to the LIBS: OPT= -O -D_REENTRANT DEFS= -DHAVE_CONFIG_H CFLAGS= $(OPT) -I. -I$(srcdir)/Include $(DEFS) LIBS= -lnsl -ldld LIBM= -lm -lpthread LIBC= SYSLIBS= $(LIBM) $(LIBC) Now it works for me. But I don't have any idea to put this changes into the configure script. mfG Stefan Walder ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2002-01-23 07:22 Message: Logged In: YES user_id=29957 Unfortunately, I don't have access to a HP/UX system, and I couldn't find anyone during the process of doing 2.1.2 that was willing to spend the time figuring out how and why 2.2's threading finally started working on HP/UX. Without someone to do that, I'd say the chances of this ever being addressed are close to zero. Does it work on 2.2? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507442&group_id=5470 From noreply@sourceforge.net Thu Feb 14 01:26:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 13 Feb 2002 17:26:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-517214 ] expat.h not found when building in subdi Message-ID: Bugs item #517214, was opened at 2002-02-13 13:57 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517214&group_id=5470 Category: Extension Modules Group: Python 2.3 >Status: Closed >Resolution: Fixed Priority: 6 Submitted By: Jeremy Hylton (jhylton) Assigned to: Martin v. Löwis (loewis) Summary: expat.h not found when building in subdi Initial Comment: I build Python in a subdirectory of the source directory. The source is in python/dist/src; I build in python/dist/src/build. The recent changes to include expat in setup.py fails because it adds "Modules/expat" to include_dirs, but "Modules/expat" is being interpreted relative to the build directory not the source directory. I was able to get a successful build by changing the include_dir to "../Modules/expat" But obviously that is not a real solution. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-13 17:26 Message: Logged In: YES user_id=21627 Fixed in setup.py 1.81. 
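For illustration only (this is not necessarily what the setup.py 1.81 change does), one way to anchor such a header directory to the source tree rather than to the build directory is to resolve it relative to the setup script itself:

    import os, sys

    # Directory containing setup.py (the source tree), even when the build is
    # run from a subdirectory such as python/dist/src/build.
    srcdir = os.path.dirname(os.path.abspath(sys.argv[0]))
    expat_inc = os.path.join(srcdir, 'Modules', 'expat')
    print "would add include dir:", expat_inc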
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517214&group_id=5470 From noreply@sourceforge.net Thu Feb 14 08:21:17 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 00:21:17 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-517371 ] Add .count() method to tuples Message-ID: Feature Requests item #517371, was opened at 2002-02-14 00:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517371&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Raymond Hettinger (rhettinger) Assigned to: Nobody/Anonymous (nobody) Summary: Add .count() method to tuples Initial Comment: Tuples have every method afforded to lists except for those which mutate the list; however, there is one exception: .count() appears to have been left out eventhough it can be well-defined for tuples as well as lists. >>> s = 'the trump' >>> s.count('t') 2 >>> list(s).count('t') 2 >>> tuple(s).count('t') Traceback (most recent call last): File "", line 1, in ? tuple(s).count('t') AttributeError: 'tuple' object has no attribute 'count' ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517371&group_id=5470 From noreply@sourceforge.net Thu Feb 14 11:17:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 03:17:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-517447 ] Syntax error in tixwidgets.py Message-ID: Bugs item #517447, was opened at 2002-02-14 03:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517447&group_id=5470 Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Submitted By: Detlef Lannert (lannert) Assigned to: Nobody/Anonymous (nobody) Summary: Syntax error in tixwidgets.py Initial Comment: tixwidgets.py reports a syntax error; the following patch helps: *** .../Demo/tix/tixwidgets.py.orig Sun Nov 25 15:50:55 2001 --- .../Demo/tix/tixwidgets.py Thu Feb 14 11:59:47 2002 *************** *** 135,142 **** import tkMessageBox, traceback while self.exit < 0: try: ! while self.exit < 0: ! self.root.tk.dooneevent(TCL_ALL_EVENTS) except SystemExit: #print 'Exit' self.exit = 1 --- 135,141 ---- import tkMessageBox, traceback while self.exit < 0: try: ! self.root.tk.dooneevent(TCL_ALL_EVENTS) except SystemExit: #print 'Exit' self.exit = 1 (I.e., delete the extra while and indent the dooneevent call.) This is for Python 2.2, but the version from CVS looks just the same. BTW, when I select the Directory Listing pane, the application freezes (with a "busy" cursor); I don't know why (yet). 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517447&group_id=5470 From noreply@sourceforge.net Thu Feb 14 11:33:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 03:33:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-517451 ] Option processing in setup.cfg Message-ID: Bugs item #517451, was opened at 2002-02-14 03:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517451&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: Konrad Hinsen (hinsen) Assigned to: Nobody/Anonymous (nobody) Summary: Option processing in setup.cfg Initial Comment: When building RPM files with distutils, I noticed that adding "use_rpm_opt_flags=0" to the file setup.cfg had no effect, although the equivalent command-line option --no-rpm-opt-flags works as advertised. Some debugging showed the reason: in the first case, the variable self.use_rpm_opt_flags in commands/bdist_rpm.py has the value '0' (string), whereas in the second case it is 0 (integer). The test "if self.use_rpm_opt_flags" does not work as expected if the variable is a string, of course. I suppose that individual commands should not have to worry about the data type of binary options, so I suspect this is a general bug in distutils option processing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517451&group_id=5470 From noreply@sourceforge.net Thu Feb 14 15:32:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 07:32:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-517554 ] asyncore fails when EINTR happens in pol Message-ID: Bugs item #517554, was opened at 2002-02-14 07:32 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517554&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Cesar Eduardo Barros (cesarb) Assigned to: Nobody/Anonymous (nobody) Summary: asyncore fails when EINTR happens in pol Initial Comment: (submitting again -- this damn thing refused to accept my anonymous submission a few days ago) When a signal happens during the select call in asyncore.poll, the select fails with EINTR, which the code catches. However, the code fails to clear the r/w/e arrays (like poll3 does), which means it acts as if every descriptor had received all possible events. Patch attached, tested with the python2.2 package in Debian testing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517554&group_id=5470 From noreply@sourceforge.net Thu Feb 14 16:49:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 08:49:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-505150 ] mac module documentation inaccuracy. Message-ID: Bugs item #505150, was opened at 2002-01-17 15:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Martin Miller (mrmiller) >Assigned to: Jack Jansen (jackjansen) Summary: mac module documentation inaccuracy. 
Initial Comment: The documentation at for the MacPython 2.2 mac module says, in part: > ==snip== >> One additional function is available: >> >> xstat(path) >> This function returns the same information as stat(), >> but with three additional values appended: the size of the >> resource fork of the file and its >> 4-character creator and type. > ==snip== The xstat() function is available only under PPC MacPython but not under Carbon MacPython. The documentation should be updated, assuming the ommision was intentional. Ideally, it would suggest alternatives for the Carbon version. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 08:49 Message: Logged In: YES user_id=3066 Jack -- the FSSpec object as documented allows access to the creator and type information, but not the size of the resource fork. How should the caller get that? Thanks. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 14:22 Message: Logged In: YES user_id=45365 Here is a patch for libmac.tex. I'll leave it to you to replace the \code{} sections with one of the gazillion macros I can never remember, hope you don't mind:-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 From noreply@sourceforge.net Thu Feb 14 17:04:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 09:04:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-515745 ] Missing docs for module knee Message-ID: Bugs item #515745, was opened at 2002-02-10 21:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 >Category: Demos and Tools Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) >Assigned to: Tim Peters (tim_one) Summary: Missing docs for module knee Initial Comment: 3.21.1 in the lib manual sez: "A more complete example that implements hierarchical module names and includes a reload() function can be found in the standard module knee (which is intended as an example only -- don't rely on any part of it being a standard interface)." ...but knee is not in the module list, though it appears to be in the distribution. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 09:04 Message: Logged In: YES user_id=3066 Like it says, the knee module is supposed to be an example only. I don't think it should be included in the library at all; it should be somewhere in Demo/. I think Guido has resisted moving it before, but I don't recall clearly. I'll assign this to Tim since Guido's not available now. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 From noreply@sourceforge.net Thu Feb 14 17:24:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 09:24:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-515745 ] Missing docs for module knee Message-ID: Bugs item #515745, was opened at 2002-02-10 21:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 Category: Demos and Tools Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Tim Peters (tim_one) Summary: Missing docs for module knee Initial Comment: 3.21.1 in the lib manual sez: "A more complete example that implements hierarchical module names and includes a reload() function can be found in the standard module knee (which is intended as an example only -- don't rely on any part of it being a standard interface)." ...but knee is not in the module list, though it appears to be in the distribution. ---------------------------------------------------------------------- >Comment By: David Abrahams (david_abrahams) Date: 2002-02-14 09:24 Message: Logged In: YES user_id=52572 If you move it, please change the docs so that it no longer says it's a standard module. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 09:04 Message: Logged In: YES user_id=3066 Like it says, the knee module is supposed to be an example only. I don't think it should be included in the library at all; it should be somewhere in Demo/. I think Guido has resisted moving it before, but I don't recall clearly. I'll assign this to Tim since Guido's not available now. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 From noreply@sourceforge.net Thu Feb 14 18:23:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 10:23:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-210633 ] urlparse (PR#286) Message-ID: Bugs item #210633, was opened at 2000-07-31 14:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210633&group_id=5470 Category: Python Library Group: Not a Bug Status: Closed Resolution: Invalid Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urlparse (PR#286) Initial Comment: Jitterbug-Id: 286 Submitted-By: alex@shop.com Date: Mon, 10 Apr 2000 16:40:57 -0400 (EDT) Version: >=1.5 OS: win32 linux urlparse requires that the url contain a "/" so that urlparse("http://foo.com?q=a#blah") results in ("http","foo.com?q=a#blah",....) urlparse should not require slashes in urls that have fragments or query strings. 
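A short illustration of the behaviour being complained about, using the 2.x urlparse module (the exact tuple returned may differ slightly between versions):

    import urlparse

    print urlparse.urlparse("http://foo.com?q=a#blah")
    # Affected versions print something like
    #   ('http', 'foo.com?q=a#blah', '', '', '', '')
    # i.e. the query and fragment are left inside the network-location part
    # instead of being split off.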
==================================================================== Audit trail: Tue Jul 11 08:29:15 2000 guido moved from incoming to open ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-02-14 10:23 Message: Logged In: YES user_id=89016 RFC2396 Section 3.2 states that: """The authority component is preceded by a double slash "//" and is terminated by the next slash "/", question-mark "?", or by the end of the URI.""" So IMHO this would mean that "http://foo.com?q=a#blah" should be parsed by urlsplit as ('http', 'foo.com', '', 'q=a', 'blah') (or maybe ('http', 'foo.com', '/', 'q=a', 'blah')) ---------------------------------------------------------------------- Comment By: Aaron Swartz (aaronsw) Date: 2001-11-26 16:44 Message: Logged In: YES user_id=122141 RFC2396, not RFC1738 is the latest RFC for URI/URL definitions. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-08-24 08:07 Message: RFC 1738, section 3.3, discusses the syntax for HTTP URLs. It implies that the "/" between the host and the path is required if either the path or searchpart of the URL is provided, but is not completely clear. I don't see anything relevant in RFC 1945 (HTTP 1.0), but RFC 2616 (HTTP 1.1), section 3.2.2 clearly indicates that the search part should only exist as a part of the path component, which is required to include the leading "/". There is some confusion as to how this should relate to parsing of relative URLs (RFC 1808). This bug can be re-opened if there's evidence urlparse is actually wrong or inconsistent with other URL parsers. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2000-08-16 18:54 Message: Assigned to me so I can deal with urlparse all at once. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=210633&group_id=5470 From noreply@sourceforge.net Thu Feb 14 19:33:41 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 11:33:41 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-517371 ] Add .count() method to tuples Message-ID: Feature Requests item #517371, was opened at 2002-02-14 00:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517371&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Raymond Hettinger (rhettinger) Assigned to: Nobody/Anonymous (nobody) Summary: Add .count() method to tuples Initial Comment: Tuples have every method afforded to lists except for those which mutate the list; however, there is one exception: .count() appears to have been left out even though it can be well-defined for tuples as well as lists. >>> s = 'the trump' >>> s.count('t') 2 >>> list(s).count('t') 2 >>> tuple(s).count('t') Traceback (most recent call last): File "", line 1, in ? tuple(s).count('t') AttributeError: 'tuple' object has no attribute 'count' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-14 11:33 Message: Logged In: YES user_id=31435 Guido has rejected this idea before, so don't hold your breath. tuples and lists are intended to be used in different ways, and it's "a feature" that their differing methods push you toward using them as intended.
Note that tuples don't support .index() either, and that's also intentional. Note that you can use the operator.countOf() function on tuples (and, in 2.2, on any iterable object). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517371&group_id=5470 From noreply@sourceforge.net Thu Feb 14 19:37:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 11:37:01 -0800 Subject: [Python-bugs-list] [ python-Bugs-516372 ] test_thread: unhandled exc. in thread Message-ID: Bugs item #516372, was opened at 2002-02-12 02:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: test_thread: unhandled exc. in thread Initial Comment: test_thread.py occasionally dumps a "Unhandled exception in thread" traceback at the last thread line "mutex.release()" about NoneType not having a release attribute. The problem is confusing for users thinking that something went wrong with the test (althought the regrtest suite doesn't detect such exceptions and tells that the test passed --- this could be another bug report BTW). The problem shows up with Psyco but could also appear on plain Python executions depending on the precise timing. It comes from the fact that the thread code ends with: ... done.release() mutex.release() where these two are mutexes. The main program ends with: ... done.acquire() print "All tasks done" so if 'done' is released, the main program may exit before the thread has a chance to release 'mutex', which happens to be a global variable that the Python module-unloading logic will replace with None. ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2002-02-14 11:37 Message: Logged In: YES user_id=4771 The problem is not specific to Psyco, as it actually showed up once with Python only in test_threaded_import, which exhibits a similar behavior. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 From noreply@sourceforge.net Thu Feb 14 20:17:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 12:17:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-517684 ] warnings.warn() misdocumented Message-ID: Bugs item #517684, was opened at 2002-02-14 12:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517684&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Barry Warsaw (bwarsaw) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: warnings.warn() misdocumented Initial Comment: warnings.warn() on http://www.python.org/doc/current/lib/warning-functions.html contains a sample function call. The argument `level' is not a valid keyword argument to warnings.warn(). 
The example should probably just be: def deprecation(message): warnings.warn(message, DeprecationWarning, 2) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517684&group_id=5470 From noreply@sourceforge.net Thu Feb 14 20:50:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 12:50:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-517704 ] Installing Python 2.2 on Solaris 2.x Message-ID: Bugs item #517704, was opened at 2002-02-14 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517704&group_id=5470 Category: Installation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jeff Bauer (jeffbauer) Assigned to: Nobody/Anonymous (nobody) Summary: Installing Python 2.2 on Solaris 2.x Initial Comment: I'm having problems installing Python 2.2 onto my Solaris 2.6 workstation. I am doing the boilerplate ... ./configure make make install I checked for prior related bug reports and posted on c.l.py. Chris Wysocki reported similar problems and Barry Warsaw mentioned on python-dev how setup.py agressively deletes .so files when it gets an import error after building the file. Note: No problems building Python 2.1 (2.1.2) on this platform. Log files attached. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517704&group_id=5470 From noreply@sourceforge.net Fri Feb 15 03:13:22 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 19:13:22 -0800 Subject: [Python-bugs-list] [ python-Bugs-517811 ] Extraneous \ escapes in code example Message-ID: Bugs item #517811, was opened at 2002-02-14 19:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517811&group_id=5470 Category: Documentation Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Robert Kern (kern) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Extraneous \ escapes in code example Initial Comment: The examples on http://python.sourceforge.net/maint-docs/lib/node389.html (email module examples) use % codes for string interpolation. The \% LaTeX escapes appear in the HTML version at least. The environment used for code examples appears not to need the % characters to be escaped. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517811&group_id=5470 From noreply@sourceforge.net Fri Feb 15 04:22:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 20:22:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-517811 ] Extraneous \ escapes in code example Message-ID: Bugs item #517811, was opened at 2002-02-14 19:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517811&group_id=5470 Category: Documentation Group: Python 2.2.1 candidate >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Robert Kern (kern) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Extraneous \ escapes in code example Initial Comment: The examples on http://python.sourceforge.net/maint-docs/lib/node389.html (email module examples) use % codes for string interpolation. The \% LaTeX escapes appear in the HTML version at least. 
The environment used for code examples appears not to need the % characters to be escaped. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 20:22 Message: Logged In: YES user_id=3066 Fixed in Doc/lib/email.tex revisions 1.10 and 1.9.6.1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=517811&group_id=5470 From noreply@sourceforge.net Fri Feb 15 06:09:32 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 22:09:32 -0800 Subject: [Python-bugs-list] [ python-Bugs-507713 ] mem leak in imaplib Message-ID: Bugs item #507713, was opened at 2002-01-23 13:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Scott Blomquist (scottdb) >Assigned to: Piers Lauder (pierslauder) Summary: mem leak in imaplib Initial Comment: When run in a multithreaded environment, the imaplib will leak memory if not run with the -O option. Long running, multithreaded programs that we have which use the imaplib will run fine for a undefined period of time, then suddenly start to grow in size until they take as much mem as the system will give to them. Once they start to grow, they continue to grow at a pretty consistent rate. Specifically: If the -O option is not used, in the _log method starting on line 1024 in the imaplib class, the imaplib keeps the last 10 commands that are sent. def _log(line): # Keep log of last `_cmd_log_len' interactions for debugging. if len(_cmd_log) == _cmd_log_len: del _cmd_log[0] _cmd_log.append((time.time(), line)) Unfortunately, in a multithreaded environment, eventually the len of the list will become larger than the _cmd_log_len, and since the test is for equality, rather than greater-than-equal-to, once the len of the _cmd_log gets larger than _cmd_log_len, nothing will ever be removed from the _cmd_log, and the list will grow without bound. We added the following to test this hypothesis, we created a basic test which creates 40 threads. These threads sit in while 1 loops and create an imaplib and then issue the logout command. We also added the following debug to the method above: if len(_cmd_log) > 10: print 'command log len is:', len(_cmd_log) We started the test, which ran fine, without leaking, for about 10 minutes, and without printing anything out. Somewhere around ten minutes, the process started to grow in size rapidly, and at the same time, the debug started printing out, and the size of the _cmd_log list did indeed grow very large, very fast. We repeated the test and the same symptoms occured, this time after only 5 minutes. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:09 Message: Logged In: YES user_id=3066 It looks like the problem still exists in Python 2.1.2, 2.2, and CVS. I've attached a patch that I think solves this problem, but this isn't easy for me to test. Please check this. Assigning to Piers Lauder since he knows more about this module than I do. 
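A hedged sketch of the kind of fix being discussed (not necessarily the attached patch): trimming with a while loop and a ">=" test means the log can never grow without bound, even if several threads manage to append between checks:

    import time

    _cmd_log = []
    _cmd_log_len = 10

    def _log(line):
        # Keep log of last `_cmd_log_len' interactions for debugging.
        while len(_cmd_log) >= _cmd_log_len:
            del _cmd_log[0]
        _cmd_log.append((time.time(), line))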
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 From noreply@sourceforge.net Fri Feb 15 06:13:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 14 Feb 2002 22:13:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-505747 ] markupbase handling of HTML declarations Message-ID: Bugs item #505747, was opened at 2002-01-19 06:37 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505747&group_id=5470 >Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Greg Chapman (glchapman) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: markupbase handling of HTML declarations Initial Comment: Using Python 2.2., I tried to use websucker.py on this page: http://magix.fri.uni-lj.si/orange/start/ This resulted in an exception in ParserBase._scan_name because _declname_match failed. Examining the source for the page above, I see there are several tags that look like: "" where the first character after "Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:13 Message: Logged In: YES user_id=3066 Ugh! I don't think that's legal HTML at all. I'll have to think about the right way to deal with it. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505747&group_id=5470 From noreply@sourceforge.net Fri Feb 15 10:08:17 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 02:08:17 -0800 Subject: [Python-bugs-list] [ python-Bugs-505150 ] mac module documentation inaccuracy. Message-ID: Bugs item #505150, was opened at 2002-01-17 15:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Martin Miller (mrmiller) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: mac module documentation inaccuracy. Initial Comment: The documentation at for the MacPython 2.2 mac module says, in part: > ==snip== >> One additional function is available: >> >> xstat(path) >> This function returns the same information as stat(), >> but with three additional values appended: the size of the >> resource fork of the file and its >> 4-character creator and type. > ==snip== The xstat() function is available only under PPC MacPython but not under Carbon MacPython. The documentation should be updated, assuming the ommision was intentional. Ideally, it would suggest alternatives for the Carbon version. ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-15 02:08 Message: Logged In: YES user_id=45365 You can't. After some discussion on the SIG this was deemed to not be important enough to stop us getting rid of xstat(), nobody on the list ever used the resource size. But you're right, it probably needs a note in the docs. Maybe add a line "This does not give you the resource fork size, but that information is of limited interest anyway". ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 08:49 Message: Logged In: YES user_id=3066 Jack -- the FSSpec object as documented allows access to the creator and type information, but not the size of the resource fork. 
How should the caller get that? Thanks. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-05 14:22 Message: Logged In: YES user_id=45365 Here is a patch for libmac.tex. I'll leave it to you to replace the \code{} sections with one of the gazillion macros I can never remember, hope you don't mind:-) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=505150&group_id=5470 From noreply@sourceforge.net Fri Feb 15 10:42:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 02:42:01 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-517920 ] = (assignment) as expression Message-ID: Feature Requests item #517920, was opened at 2002-02-15 02:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517920&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Frank Sonnenburg (sonnenburg) Assigned to: Nobody/Anonymous (nobody) Summary: = (assignment) as expression Initial Comment: Hi Python-Developers I am new to python and maybe this was considered before. I think it would be VERY helpful, if one could use assignments as expressions as it is in C, e.g. in while-loops: import mailbox file = open('mailfile') mbox = mailbox.PortableUnixMailbox(file) # the following line produces # SyntaxError: invalid syntax while mail = mbox.next(): # do something with this mail ... Best regards Frank Sonnenburg ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517920&group_id=5470 From noreply@sourceforge.net Fri Feb 15 14:19:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 06:19:46 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-501831 ] Bit support in "array" module Message-ID: Feature Requests item #501831, was opened at 2002-01-10 06:49 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=501831&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jesús Cea Avión (jcea) >Assigned to: Guido van Rossum (gvanrossum) >Summary: Bit support in "array" module Initial Comment: I think the standard "array" module should support single bit arrays. In fact, would be very nice a supplementary addition to support arbitrary bit size arrays (for example, 5 bit elements). Of course, single bit should be optimiced, since bitmaps management is a frequent task. 
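The array module has no such typecode today; as a rough illustration of the requested behaviour (and a workaround in the meantime), single bits can be packed into an array('B'). The class below is invented for this sketch only:

    from array import array

    class BitArray:
        # Packed single-bit array emulated on top of array('B').
        def __init__(self, nbits):
            self.nbits = nbits
            self.data = array('B', [0] * ((nbits + 7) / 8))

        def __getitem__(self, i):
            return (self.data[i >> 3] >> (i & 7)) & 1

        def __setitem__(self, i, bit):
            if bit:
                self.data[i >> 3] |= 1 << (i & 7)
            else:
                self.data[i >> 3] &= ~(1 << (i & 7)) & 0xFF

    bits = BitArray(100)
    bits[42] = 1
    print bits[42], bits[43]     # 1 0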
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=501831&group_id=5470 From noreply@sourceforge.net Fri Feb 15 14:32:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 06:32:04 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-480967 ] SHA 256/384/512 in Python 2.2 Message-ID: Feature Requests item #480967, was opened at 2001-11-12 08:56 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=480967&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Jesús Cea Avión (jcea) >Assigned to: Guido van Rossum (gvanrossum) Summary: SHA 256/384/512 in Python 2.2 Initial Comment: I know that Python 2.2. window is closing, but this request seems to be very simple and low cost. What about updating the SHA module to the new SHA-256/384/512 standards?. See, for example: http://www.aarongifford.com/computers/sha.html ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-11-13 00:31 Message: Logged In: YES user_id=21627 Unless somebody is forthcoming RSN with a patch to Python, I don't think the sha module can be extended before 2.2. Notice that the last beta of 2.2 is scheduled for this week. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=480967&group_id=5470 From noreply@sourceforge.net Fri Feb 15 18:31:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 10:31:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-518076 ] Error in tutorial chapter 4 Message-ID: Bugs item #518076, was opened at 2002-02-15 10:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518076&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matt Behrens (mattbehrens) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Error in tutorial chapter 4 Initial Comment: Tutorial, 4.7.4, paragraph 1. Paragraph claims lambda function adds a+b when it is actually an incrementor. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518076&group_id=5470 From noreply@sourceforge.net Sat Feb 16 01:26:50 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 17:26:50 -0800 Subject: [Python-bugs-list] [ python-Bugs-502503 ] pickle interns strings Message-ID: Bugs item #502503, was opened at 2002-01-11 13:21 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=502503&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Brian Kelley (wc2so1) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: pickle interns strings Initial Comment: Pickle (and cPickle) use eval to reconstruct string variables from the stored format. Eval is used because it correctly reconstructs the repr of a string back into the original string object by translating all the appropriately escape characters like "\m" and "\n" There is an side effect in that eval interns string variables for faster lookup. 
This causes the following sample code to unexpectedly grow in memory consumption: import pickle import random import string def genstring(length=100): s = [random.choice(string.letters) for x in range(length)] return "".join(s) def test(): while 1: s = genstring() dump = pickle.dumps(s) s2 = pickle.loads(dump) assert s == s2 test() Note that all strings are not interned, just ones that, as Tim Peters once said, "look like", variable names. The above example is contrived to generate a lot of different names that "look like" variables names but since this has happened in practice it probably should documented. Interestingly, by inserting s.append(" ") before return "".join(s) The memory consumption is not seen because the names no longer "look like" variable names. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-15 17:26 Message: Logged In: YES user_id=72053 I agree about eval being dangerous. Also, the memory leak is itself a security concern: if an attacker can stuff enough strings into the unpickler to exhaust memory, that's a denial of service attack. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-01-11 13:36 Message: Logged In: YES user_id=31435 Noting that Security Geeks are uncomfortable with using eval () for this purpose regardless. Would be good if Python got refactored so that pickle and cPickle and the front end all called a new routine that simply parsed the escape sequences in a character buffer, returning a Python string object. Don't ask me about Unicode . ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=502503&group_id=5470 From noreply@sourceforge.net Sat Feb 16 02:22:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 18:22:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-518283 ] Menus and winfo_children() KeyError Message-ID: Bugs item #518283, was opened at 2002-02-15 18:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518283&group_id=5470 Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Nobody/Anonymous (nobody) Summary: Menus and winfo_children() KeyError Initial Comment: Sent to python-help If a window has a menubar, sending winfo_children() to the window produces a KeyError. I'll upload a small example. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518283&group_id=5470 From noreply@sourceforge.net Sat Feb 16 07:06:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 23:06:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-516372 ] test_thread: unhandled exc. in thread Message-ID: Bugs item #516372, was opened at 2002-02-12 02:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 >Category: Threads Group: None Status: Open Resolution: None Priority: 5 Submitted By: Armin Rigo (arigo) >Assigned to: Tim Peters (tim_one) Summary: test_thread: unhandled exc. in thread Initial Comment: test_thread.py occasionally dumps a "Unhandled exception in thread" traceback at the last thread line "mutex.release()" about NoneType not having a release attribute. 
The problem is confusing for users thinking that something went wrong with the test (althought the regrtest suite doesn't detect such exceptions and tells that the test passed --- this could be another bug report BTW). The problem shows up with Psyco but could also appear on plain Python executions depending on the precise timing. It comes from the fact that the thread code ends with: ... done.release() mutex.release() where these two are mutexes. The main program ends with: ... done.acquire() print "All tasks done" so if 'done' is released, the main program may exit before the thread has a chance to release 'mutex', which happens to be a global variable that the Python module-unloading logic will replace with None. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-15 23:06 Message: Logged In: YES user_id=31435 Changed Category to "Threads" and assigned to me. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2002-02-14 11:37 Message: Logged In: YES user_id=4771 The problem is not specific to Psyco, as it actually showed up once with Python only in test_threaded_import, which exhibits a similar behavior. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 From noreply@sourceforge.net Sat Feb 16 07:07:29 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 23:07:29 -0800 Subject: [Python-bugs-list] [ python-Bugs-515745 ] Missing docs for module knee Message-ID: Bugs item #515745, was opened at 2002-02-10 21:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 Category: Demos and Tools Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) >Assigned to: Guido van Rossum (gvanrossum) Summary: Missing docs for module knee Initial Comment: 3.21.1 in the lib manual sez: "A more complete example that implements hierarchical module names and includes a reload() function can be found in the standard module knee (which is intended as an example only -- don't rely on any part of it being a standard interface)." ...but knee is not in the module list, though it appears to be in the distribution. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-15 23:07 Message: Logged In: YES user_id=31435 Sorry, I can't channel Guido here -- AFAICT, this is the first time I ever heard about knee! Reassigned to Guido. ---------------------------------------------------------------------- Comment By: David Abrahams (david_abrahams) Date: 2002-02-14 09:24 Message: Logged In: YES user_id=52572 If you move it, please change the docs so that it no longer says it's a standard module. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 09:04 Message: Logged In: YES user_id=3066 Like it says, the knee module is supposed to be an example only. I don't think it should be included in the library at all; it should be somewhere in Demo/. I think Guido has resisted moving it before, but I don't recall clearly. I'll assign this to Tim since Guido's not available now. 
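For readers hunting for knee in the meantime: the mechanism it demonstrates is the replaceable __import__ hook. A toy illustration of that hook (not knee's actual code, which also handles packages and reload()):

    import __builtin__

    _original_import = __builtin__.__import__

    def logging_import(name, globals=None, locals=None, fromlist=None):
        # Report each import, then delegate to the real machinery.
        print "importing", name
        return _original_import(name, globals, locals, fromlist)

    __builtin__.__import__ = logging_import
    import sys                              # prints: importing sys
    __builtin__.__import__ = _original_import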
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=515745&group_id=5470 From noreply@sourceforge.net Sat Feb 16 07:27:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 15 Feb 2002 23:27:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-516372 ] test_thread: unhandled exc. in thread Message-ID: Bugs item #516372, was opened at 2002-02-12 02:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 Category: Threads Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Armin Rigo (arigo) Assigned to: Tim Peters (tim_one) Summary: test_thread: unhandled exc. in thread Initial Comment: test_thread.py occasionally dumps a "Unhandled exception in thread" traceback at the last thread line "mutex.release()" about NoneType not having a release attribute. The problem is confusing for users thinking that something went wrong with the test (althought the regrtest suite doesn't detect such exceptions and tells that the test passed --- this could be another bug report BTW). The problem shows up with Psyco but could also appear on plain Python executions depending on the precise timing. It comes from the fact that the thread code ends with: ... done.release() mutex.release() where these two are mutexes. The main program ends with: ... done.acquire() print "All tasks done" so if 'done' is released, the main program may exit before the thread has a chance to release 'mutex', which happens to be a global variable that the Python module-unloading logic will replace with None. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-15 23:27 Message: Logged In: YES user_id=31435 Hmm. You must be running on Linux. I agree with your analysis, but I'll never see it on uniprocessor Windows: when the main thread goes away on Windows, child threads don't get another cycle. I've seen other races "like this" pop up only on Linux -- it seems that Linux is uniquely slothful when killing off child threads. Anyway, I appreciate the analysis and have fixed the problems: Lib/test/test_thread.py; new revision: 1.10 Lib/test/test_threaded_import.py; new revision: 1.5 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-15 23:06 Message: Logged In: YES user_id=31435 Changed Category to "Threads" and assigned to me. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2002-02-14 11:37 Message: Logged In: YES user_id=4771 The problem is not specific to Psyco, as it actually showed up once with Python only in test_threaded_import, which exhibits a similar behavior. 
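The committed fix is in the revisions listed above; the shape of the race, and one way to close it, can be sketched like this (simplified, not the actual test code): if the worker releases 'mutex' before 'done', the main thread cannot finish -- and module teardown cannot rebind the globals to None -- while the worker still needs them.

    import thread

    mutex = thread.allocate_lock()
    done = thread.allocate_lock()

    def task():
        mutex.acquire()
        # ... real work would happen here ...
        mutex.release()     # touch module globals first ...
        done.release()      # ... and signal completion last

    done.acquire()                       # 'done' starts out held
    thread.start_new_thread(task, ())
    done.acquire()                       # released by the worker when finished
    print "All tasks done"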
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516372&group_id=5470 From noreply@sourceforge.net Sat Feb 16 10:39:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 02:39:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-507713 ] mem leak in imaplib Message-ID: Bugs item #507713, was opened at 2002-01-23 13:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Scott Blomquist (scottdb) Assigned to: Piers Lauder (pierslauder) Summary: mem leak in imaplib Initial Comment: When run in a multithreaded environment, the imaplib will leak memory if not run with the -O option. Long running, multithreaded programs that we have which use the imaplib will run fine for a undefined period of time, then suddenly start to grow in size until they take as much mem as the system will give to them. Once they start to grow, they continue to grow at a pretty consistent rate. Specifically: If the -O option is not used, in the _log method starting on line 1024 in the imaplib class, the imaplib keeps the last 10 commands that are sent. def _log(line): # Keep log of last `_cmd_log_len' interactions for debugging. if len(_cmd_log) == _cmd_log_len: del _cmd_log[0] _cmd_log.append((time.time(), line)) Unfortunately, in a multithreaded environment, eventually the len of the list will become larger than the _cmd_log_len, and since the test is for equality, rather than greater-than-equal-to, once the len of the _cmd_log gets larger than _cmd_log_len, nothing will ever be removed from the _cmd_log, and the list will grow without bound. We added the following to test this hypothesis, we created a basic test which creates 40 threads. These threads sit in while 1 loops and create an imaplib and then issue the logout command. We also added the following debug to the method above: if len(_cmd_log) > 10: print 'command log len is:', len(_cmd_log) We started the test, which ran fine, without leaking, for about 10 minutes, and without printing anything out. Somewhere around ten minutes, the process started to grow in size rapidly, and at the same time, the debug started printing out, and the size of the _cmd_log list did indeed grow very large, very fast. We repeated the test and the same symptoms occured, this time after only 5 minutes. ---------------------------------------------------------------------- >Comment By: Piers Lauder (pierslauder) Date: 2002-02-16 02:39 Message: Logged In: YES user_id=196212 I aggree that the line: if len(_cmd_log) == _cmd_log_len: should be changed, though I favour the form: while len(_cmd_log) >= _cmd_log_len: del _cmd_log[0] rather than the version suggested in the patch: if len(_cmd_log) > _cmd_log_len: del _cmd_log[:-_cmd_log_len] However, if imaplib is gpoing to be used by multiple threads, perhaps the best solution is to move these debugging routines entirely into the IMAP4 class, so that the logs are per-connection, rather than global? ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:09 Message: Logged In: YES user_id=3066 It looks like the problem still exists in Python 2.1.2, 2.2, and CVS. I've attached a patch that I think solves this problem, but this isn't easy for me to test. Please check this. 
Assigning to Piers Lauder since he knows more about this module than I do. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 From noreply@sourceforge.net Sat Feb 16 23:58:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 15:58:23 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-517920 ] = (assignment) as expression Message-ID: Feature Requests item #517920, was opened at 2002-02-15 02:41 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517920&group_id=5470 Category: Python Interpreter Core Group: None >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Frank Sonnenburg (sonnenburg) Assigned to: Nobody/Anonymous (nobody) Summary: = (assignment) as expression Initial Comment: Hi Python-Developers I am new to python and maybe this was considered before. I think it would be VERY helpful, if one could use assignments as expressions as it is in C, e.g. in while-loops: import mailbox file = open('mailfile') mbox = mailbox.PortableUnixMailbox(file) # the following line produces # SyntaxError: invalid syntax while mail = mbox.next(): # do something with this mail ... Best regards Frank Sonnenburg ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 15:58 Message: Logged In: YES user_id=21627 In Python 2.2, iterators solve this problem: for mail in mbox: #do something Closing this as "works for me". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=517920&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:03:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:03:23 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-516076 ] Assign boolean value to a weak reference Message-ID: Feature Requests item #516076, was opened at 2002-02-11 12:32 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=516076&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Stefan Franke (sfranke) >Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Assign boolean value to a weak reference Initial Comment: To test if a weak reference r is still alive, you type if r() is not None: print "Alive" Wouldn't be if r: print "Alive" more pythonic, since all values of any datatype that are not empty evaluate to "true"? Same if you think about r as a pointer. principle-of-least-surprise-ly yr's Stefan ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:03 Message: Logged In: YES user_id=21627 Looks reasonable. Fred? 
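Until weak references grow such behaviour themselves, the explicit test stays; a small pure-Python wrapper giving the requested semantics (class and name invented for this sketch) could look like:

    import weakref

    class BoolRef:
        # Weak reference wrapper that tests true while its referent is alive.
        def __init__(self, obj, callback=None):
            self._ref = weakref.ref(obj, callback)

        def __call__(self):
            return self._ref()

        def __nonzero__(self):
            return self._ref() is not None

    class Target: pass

    t = Target()
    r = BoolRef(t)
    if r:
        print "Alive"       # printed: the referent still exists
    del t
    if not r:
        print "Dead"        # printed: the referent has been collected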
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=516076&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:06:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:06:05 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515073 ] subtypable weak references Message-ID: Feature Requests item #515073, was opened at 2002-02-08 16:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: subtypable weak references Initial Comment: I want to be able to create a subtype of weakref. Motivation: I use a trick to non-intrusively keep one Python object (ward) alive as long as another one (custodian) is: I build a weak reference to the custodian whose kill function object holds a reference to the ward. I "leak" the weakref, but the function decrements its refcount so it will eventually die. This scheme costs an extra allocation for the function object, and because there is a function object at all, there's no opportunity to re-use the weakref (please document this part of the re-use behavior, BTW!) I also want the re-use algorithm to check for object and type equality so that I can avoid creating multiple such references. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:06 Message: Logged In: YES user_id=21627 This is what the WeakKeyDictionary is for: use the custodian as the key, and the ward as the value. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:10:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:10:59 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515074 ] Extended storage in new-style classes Message-ID: Feature Requests item #515074, was opened at 2002-02-08 16:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: Extended storage in new-style classes Initial Comment: I want to be able to reserve some storage in my own new-style class objects. Ideally the storage would fall before the variable-length section so I didn't have to worry about alignment issues. -Dave ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:10 Message: Logged In: YES user_id=21627 I cannot fully understand the requirement. Are you talking about a type implemented in C, or a class in Python? Assuming it is a C type: Are you defining a type that inherits from a builtin, or one that doesn't. Assuming does not inherit: what is the problem with just setting tp_basicsize correctly? If it is a C type and it does inherit from a variable-length builtin, this won't be possible: the API for the base type won't know about the extra fields. 
In that case, extend the type so that it has an __dict__ and put everything into the dict. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:11:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:11:57 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-514532 ] Add "eu#" parser marker Message-ID: Feature Requests item #514532, was opened at 2002-02-07 13:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=514532&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 5 Submitted By: M.-A. Lemburg (lemburg) Assigned to: M.-A. Lemburg (lemburg) >Summary: Add "eu#" parser marker Initial Comment: As requested by Jack Janssen: """ Recently, "M.-A. Lemburg" said: > How about this: we add a wchar_t codec to Python and the "eu#" parser > marker. Then you could write: > > wchar_t value = NULL; > int len = 0; > if (PyArg_ParseTuple(tuple, "eu#", "wchar_t", &value, &len) < 0) > return NULL; I like it! """ The parser marker should return Py_UNICODE* instead of char* and work much like "et#" does now for strings. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:11 Message: Logged In: YES user_id=21627 Because of the memory management issues, I don't think having such a feature is desirable. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=514532&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:13:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:13:45 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512494 ] multi-line comment block clarification Message-ID: Feature Requests item #512494, was opened at 2002-02-03 14:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line comment block clarification Initial Comment: The previous post did not show the indenting for the multi-line comment block. What I meant was this #: Comment line 1 Comment line 2 ... Comment line n Whatever. It's just an idea. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:13 Message: Logged In: YES user_id=21627 Why do you need that feature? 
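For comparison, the effect described in the request is reachable today without new syntax, either with a '#' per line or with a bare string literal, which has no effect when the code runs:

    def frobnicate(x):
        y = x * 2
        """A bare string literal in statement position is simply evaluated
        and discarded, so it often serves as a multi-line comment block.
        """
        # The other option is one '#' per line,
        # like this.
        return y

    print frobnicate(21)    # 42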
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:16:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:16:57 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-504880 ] Optional argument for dict.popitem() Message-ID: Feature Requests item #504880, was opened at 2002-01-17 06:47 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=504880&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: Raymond Hettinger (rhettinger) Assigned to: Nobody/Anonymous (nobody) Summary: Optional argument for dict.popitem() Initial Comment: Have dict.popitem() allow an optional argument which specifies a particular rather than arbitrary key to be popped. It should behave like this: class mydict(dict): def popitem( self, key=None ): if key is None: return dict.popitem(self) value = self[key] del self[key] return (key, value) >>> d = {'spam':2, 'eggs':3} >>> print d.popitem('spam') ('spam', 2) >>> print d {'eggs': 3} The motivation is similar to the rationale for .setdefault() in making a simple, fast built-in replacement for a commonly used sequence of dictionary operations ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:16 Message: Logged In: YES user_id=21627 This seems to be a duplicate of http://sourceforge.net/tracker/index.php?func=detail&aid=495086&group_id=5470&atid=355470 ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-17 15:23 Message: Logged In: YES user_id=21627 Moved into feature requests tracker. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=504880&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:17:03 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:17:03 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-495086 ] dict.popitem(key=None) Message-ID: Feature Requests item #495086, was opened at 2001-12-19 08:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=495086&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: dict.popitem(key=None) Initial Comment: Would it be possible to add an extra argument to the popitem method of DictionaryType so one can both retrieve a dict item and delete it at the same time? It would be so handy. Without the optional argument, it would work the same way dict.popitem works now example:: >>> d = dict([(x,x) for x in range(10)]) >>> d {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9} >>> d.popitem() # retrieves "random" key->val pair (0, 0) >>> d.popitem(4) # val=d[4]; del d[4]; return val 4 >>> d.popitem(6) # val=d[6]; del d[6]; return val 6 >>> d # missing keys [0, 4, 6] {1: 1, 2: 2, 3: 3, 5: 5, 7: 7, 8: 8, 9: 9} ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-16 16:17 Message: Logged In: YES user_id=21627 Also requested as http://sourceforge.net/tracker/index.php?func=detail&aid=504880&group_id=5470&atid=355470 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=495086&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:19:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:19:04 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512497 ] multi-line print statement Message-ID: Feature Requests item #512497, was opened at 2002-02-03 14:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line print statement Initial Comment: Similar to the multi-line comment block suggestion, instead of using \ to say the line continues use print: "line 1" "line 2" ... "line n" Ok, then...thanks ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:19 Message: Logged In: YES user_id=21627 Can you specify more precisely how this feature would work? E.g. would it be legal to write print: "foo" raise "Done" or print: for i in range(10): "bar" If so, what would be the meaning of the latter one? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 From noreply@sourceforge.net Sun Feb 17 00:52:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 16:52:00 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515073 ] subtypable weak references Message-ID: Feature Requests item #515073, was opened at 2002-02-08 16:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: subtypable weak references Initial Comment: I want to be able to create a subtype of weakref. Motivation: I use a trick to non-intrusively keep one Python object (ward) alive as long as another one (custodian) is: I build a weak reference to the custodian whose kill function object holds a reference to the ward. I "leak" the weakref, but the function decrements its refcount so it will eventually die. This scheme costs an extra allocation for the function object, and because there is a function object at all, there's no opportunity to re-use the weakref (please document this part of the re-use behavior, BTW!) I also want the re-use algorithm to check for object and type equality so that I can avoid creating multiple such references. ---------------------------------------------------------------------- >Comment By: David Abrahams (david_abrahams) Date: 2002-02-16 16:52 Message: Logged In: YES user_id=52572 I wouldn't want to use WeakKeyDictionary directly for this, since I'm using it in a fairly time-critical place in C++ code. I could do the same thing with a proper subtype of dictionary using C++ code, and that's fine if you have a guarantee that a custodian has only one ward. 
Otherwise you need to use a collection of wards as the value, which again costs an extra allocation for the common case where a custodian really does have only one ward. Yes, it would be amortized over the number of wards for any custodian, but as I say the common case will have one ward per custodian. So I'd still really like to have weakref subclassing. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:06 Message: Logged In: YES user_id=21627 This is what the WeakKeyDictionary is for: use the custodian as the key, and the ward as the value. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515073&group_id=5470 From noreply@sourceforge.net Sun Feb 17 01:05:28 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 17:05:28 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-515074 ] Extended storage in new-style classes Message-ID: Feature Requests item #515074, was opened at 2002-02-08 16:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: David Abrahams (david_abrahams) Assigned to: Nobody/Anonymous (nobody) Summary: Extended storage in new-style classes Initial Comment: I want to be able to reserve some storage in my own new-style class objects. Ideally the storage would fall before the variable-length section so I didn't have to worry about alignment issues. -Dave ---------------------------------------------------------------------- >Comment By: David Abrahams (david_abrahams) Date: 2002-02-16 17:05 Message: Logged In: YES user_id=52572 It's a C subtype of PyBaseObjectType, and you can't just set tp_basicsize correctly because the base type has its own idea of where the variable length section starts and doesn't respect a difference in tp_basicsize. I've spoken with Guido about this; he understands all the details. I entered this feature request in the tracker at his request (the weakref subclassing was entered at Fred's request). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:10 Message: Logged In: YES user_id=21627 I cannot fully understand the requirement. Are you talking about a type implemented in C, or a class in Python? Assuming it is a C type: Are you defining a type that inherits from a builtin, or one that doesn't. Assuming does not inherit: what is the problem with just setting tp_basicsize correctly? If it is a C type and it does inherit from a variable-length builtin, this won't be possible: the API for the base type won't know about the extra fields. In that case, extend the type so that it has an __dict__ and put everything into the dict. 
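At the Python level, the fallback Martin describes is simply the instance __dict__ that subclasses of builtins get by default (the type name here is invented for this sketch; the C-level tp_basicsize question itself is not addressed by it):

    # Python-level analogue of the __dict__ fallback: a subclass of a
    # builtin gains per-instance storage with no layout tricks at all.
    class TaggedInt(int):
        pass

    x = TaggedInt(7)
    x.extra = 'anything'       # lives in x.__dict__
    print x + 1, x.extra       # 8 anything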
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=515074&group_id=5470 From noreply@sourceforge.net Sun Feb 17 07:04:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 16 Feb 2002 23:04:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-497839 ] reindent chokes on empty first lines Message-ID: Bugs item #497839, was opened at 2001-12-30 04:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=497839&group_id=5470 Category: Demos and Tools Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Martin v. Löwis (loewis) Assigned to: Tim Peters (tim_one) Summary: reindent chokes on empty first lines Initial Comment: If a file has an empty first line, and a hanging comment, reindent crashes. For the attached file, I get Traceback (most recent call last): File "/usr/src/python/Tools/scripts/reindent.py", line 271, in ? main() File "/usr/src/python/Tools/scripts/reindent.py", line 65, in main check(arg) File "/usr/src/python/Tools/scripts/reindent.py", line 90, in check if r.run(): File "/usr/src/python/Tools/scripts/reindent.py", line 187, in run want = have + getlspace(after[jline-1]) - \ IndexError: list index out of range ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-16 23:04 Message: Logged In: YES user_id=31435 Thanks for whittling this down! reindent wasn't expecting all-whitespace lines at the start of a file. That bad assumption is now repaired, in Tools/scripts/reindent.py; new revision: 1.3 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=497839&group_id=5470 From noreply@sourceforge.net Sun Feb 17 14:43:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 06:43:59 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:43 Message: Logged In: YES user_id=72053 I hope there's a simple solution to this--it's obvious what the right result should be mathematically if you compare 1L<<10000 with 0.0. It should not raise an error. If the documented behavior leads to raising an error, then there's a bug in the document. I agree that it's not the highest priority bug in the world, but it doesn't seem that complicated. 
If n is a long and x is a float, both >= 0, what happens if you do this, to implement cmp(n,x): xl = long(x) # if x has a fraction part and int part is == n, then x>n if float(xl)!=x and xl==n: return 1 return cmp(n, xl) If both are < 0, change 1 to -1 above. If x and n are of opposite sign, the positive one is greater. Unless I missed something (which is possible--I'm not too alert right now) the above should be ok in all cases. Basically you use long as the common type to convert to; you do lose information when converting a non-integer, but for the comparison with an integer, you don't need the lost information other than knowing whether it was nonzero, which you find out by converting the long back to a float. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a= c: . print `a`, `b`, `c` ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! 
I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sun Feb 17 14:58:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 06:58:21 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:58 Message: Logged In: YES user_id=72053 Oops, I got confused about the order of the two args in the example below. I meant cmp(x,n) in the description and cmp(xl, n) in the code, rather than having n first. Anyway you get the idea. Now I should go back to bed ;-). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:43 Message: Logged In: YES user_id=72053 I hope there's a simple solution to this--it's obvious what the right result should be mathematically if you compare 1L<<10000 with 0.0. It should not raise an error. If the documented behavior leads to raising an error, then there's a bug in the document. I agree that it's not the highest priority bug in the world, but it doesn't seem that complicated. If n is a long and x is a float, both >= 0, what happens if you do this, to implement cmp(n,x): xl = long(x) # if x has a fraction part and int part is == n, then x>n if float(xl)!=x and xl==n: return 1 return cmp(n, xl) If both are < 0, change 1 to -1 above. If x and n are of opposite sign, the positive one is greater. 
Unless I missed something (which is possible--I'm not too alert right now) the above should be ok in all cases. Basically you use long as the common type to convert to; you do lose information when converting a non-integer, but for the comparison with an integer, you don't need the lost information other than knowing whether it was nonzero, which you find out by converting the long back to a float. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a= c: . print `a`, `b`, `c` ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! 
I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sun Feb 17 15:01:25 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 07:01:25 -0800 Subject: [Python-bugs-list] [ python-Bugs-518767 ] array module has undocumented features Message-ID: Bugs item #518767, was opened at 2002-02-17 07:01 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518767&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: array module has undocumented features Initial Comment: It turns out arrays support list slice operations and string operations: p = array('B','potato') p[2:2]=array('B','banana') # works! insert banana re.search('ta',p).span() # also works! I wouldn't have guessed from the docs for the array module that the slice and regexp operations were supported. If they're "officially" supposed to work, the doc should be say so. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518767&group_id=5470 From noreply@sourceforge.net Sun Feb 17 15:31:30 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 07:31:30 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. 
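On a platform whose floats are IEEE-754 doubles (53 significant bits) such a triple is easy to construct around 2**53, where neighbouring floats are 2 apart:

    a = 2L ** 53              # exactly representable as a float
    b = float(2L ** 53)
    c = 2L ** 53 + 1          # not representable; rounds to 2**53 on coercion

    print a == b              # 1: the long converts to float exactly
    print b == c              # 1: c is coerced to float and rounds down to b
    print a == c              # 0: long/long comparison is exact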
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-17 07:31 Message: Logged In: YES user_id=31435 Paul, this isn't an intellectual challenge -- I expect any numerical programmer of ordinary skill could write code to compare a float to a long delivering the mathematically sensible result. There are several ways to do it. Adding "me too" votes doesn't change the priority. How about taking a whack at writing a patch if this is important to you? It's so low on the list of PythonLabs priorities I doubt I'll ever get to it (which is why I unassigned myself: an unassigned bug report is looking for someone to fix it, not a cheerleader ). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:58 Message: Logged In: YES user_id=72053 Oops, I got confused about the order of the two args in the example below. I meant cmp(x,n) in the description and cmp(xl, n) in the code, rather than having n first. Anyway you get the idea. Now I should go back to bed ;-). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:43 Message: Logged In: YES user_id=72053 I hope there's a simple solution to this--it's obvious what the right result should be mathematically if you compare 1L<<10000 with 0.0. It should not raise an error. If the documented behavior leads to raising an error, then there's a bug in the document. I agree that it's not the highest priority bug in the world, but it doesn't seem that complicated. If n is a long and x is a float, both >= 0, what happens if you do this, to implement cmp(n,x): xl = long(x) # if x has a fraction part and int part is == n, then x>n if float(xl)!=x and xl==n: return 1 return cmp(n, xl) If both are < 0, change 1 to -1 above. If x and n are of opposite sign, the positive one is greater. Unless I missed something (which is possible--I'm not too alert right now) the above should be ok in all cases. Basically you use long as the common type to convert to; you do lose information when converting a non-integer, but for the comparison with an integer, you don't need the lost information other than knowing whether it was nonzero, which you find out by converting the long back to a float. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. 
Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a < b and b < c yet c < a under the current rules, but a quick search that prints any counterexample it finds (print `a`, `b`, `c`) would settle it. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a<b and b<c, yet not a<c. However, there is a threshold value T with the property that if x is a float value >=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sun Feb 17 15:37:03 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 07:37:03 -0800 Subject: [Python-bugs-list] [ python-Bugs-518775 ] buffer object API description truncated Message-ID: Bugs item #518775, was opened at 2002-02-17 07:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518775&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: paul rubin (phr) Assigned to: Fred L. Drake, Jr.
(fdrake) Summary: buffer object API description truncated Initial Comment: In section 10.6 of the C API reference manual, Python-2-2/Doc/html/api/buffer-structs.html the description for the last subroutine listed, int (*getcharbufferproc) (PyObject *self, int segment, const char **ptrptr) is omitted. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518775&group_id=5470 From noreply@sourceforge.net Sun Feb 17 15:46:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 07:46:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- >Comment By: Andrew Koenig (arkoenig) Date: 2002-02-17 07:46 Message: Logged In: YES user_id=418174 I think there's a slightly more straightforward algorithm than the one that Paul Rubin (phr) suggested. Again, assume that x is a float and n is a long. We note first that the comparison is trivial unless x and n are both nonzero and have the same sign. We will therefore assume in the rest of this discussion that x and n are strictly positive; the case where they are negative is analogous. Every floating-point implementation has many numbers with the property that the least significant bit in those numbers' representations has a value of 1. In general, if the floating-point representation has k bits, then any integer in the range [2**(k-1),2**k) qualifies. Let K be any of these numbers; it doesn't matter which one. Precompute K and store it in both float and long form. This computation is exact because K is an integer that has an exact representation in floating-point form. It is now possible to compare x with K and n with K exactly, without conversion, because we already have K exactly in both forms. If x < K and n >= K, then x < n and we're done. If x > K and n <= K, then x > n and we're done. Otherwise, x and n are on the same side of K (possibly being equal to K). If x >= K and n >= K, then the LSB of x is at least 1, so we can convert x to long without losing information. Therefore, cmp(x, n) is cmp(long(x), n). If x <= K and n <= K, then then n is small enough that it has an exact representation as a float. Therefore cmp(x, n) is cmp(x, float(n)). So I don't think there's any profound algorithmic problem here. Unfortunately, I don't know enough about the details of how comparison is implemented to be willing to try my hand at a patch. 
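A rough Python sketch of the algorithm arkoenig outlines above (an illustration under stated assumptions, not the poster's code and not what a C implementation inside the interpreter would look like): x is a float and n a long, both assumed strictly positive, and K = 2**53 is an assumed threshold suitable for IEEE-754 doubles.

K_FLOAT = 2.0 ** 53   # exactly representable; the LSB of floats this large is worth 1
K_LONG = 2L ** 53     # the same value held exactly as a long

def cmp_float_long(x, n):
    # x: positive float, n: positive long
    if x < K_FLOAT and n >= K_LONG:
        return -1                    # x < K <= n
    if x > K_FLOAT and n <= K_LONG:
        return 1                     # n <= K < x
    if x >= K_FLOAT:
        return cmp(long(x), n)       # here float -> long conversion is exact
    return cmp(x, float(n))          # here long -> float conversion is exact

Zero, negative and mixed-sign operands would be dispatched before this point; as the comment notes, those cases are the easy ones.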
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-17 07:31 Message: Logged In: YES user_id=31435 Paul, this isn't an intellectual challenge -- I expect any numerical programmer of ordinary skill could write code to compare a float to a long delivering the mathematically sensible result. There are several ways to do it. Adding "me too" votes doesn't change the priority. How about taking a whack at writing a patch if this is important to you? It's so low on the list of PythonLabs priorities I doubt I'll ever get to it (which is why I unassigned myself: an unassigned bug report is looking for someone to fix it, not a cheerleader ). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:58 Message: Logged In: YES user_id=72053 Oops, I got confused about the order of the two args in the example below. I meant cmp(x,n) in the description and cmp(xl, n) in the code, rather than having n first. Anyway you get the idea. Now I should go back to bed ;-). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:43 Message: Logged In: YES user_id=72053 I hope there's a simple solution to this--it's obvious what the right result should be mathematically if you compare 1L<<10000 with 0.0. It should not raise an error. If the documented behavior leads to raising an error, then there's a bug in the document. I agree that it's not the highest priority bug in the world, but it doesn't seem that complicated. If n is a long and x is a float, both >= 0, what happens if you do this, to implement cmp(n,x): xl = long(x) # if x has a fraction part and int part is == n, then x>n if float(xl)!=x and xl==n: return 1 return cmp(n, xl) If both are < 0, change 1 to -1 above. If x and n are of opposite sign, the positive one is greater. Unless I missed something (which is possible--I'm not too alert right now) the above should be ok in all cases. Basically you use long as the common type to convert to; you do lose information when converting a non-integer, but for the comparison with an integer, you don't need the lost information other than knowing whether it was nonzero, which you find out by converting the long back to a float. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. 
Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a < b and b < c yet c < a under the current rules, but a quick search that prints any counterexample it finds (print `a`, `b`, `c`) would settle it. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a<b and b<c, yet not a<c. However, there is a threshold value T with the property that if x is a float value >=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising. OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays).
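A small sketch of the sort() surprise arkoenig reports in the 2002-02-07 07:33 comment above, assuming the Python 2.2 behaviour discussed in this thread, where coercing a long that is too large for a C double raises OverflowError; illustration only:

x = [1L << 10000]   # far too large to represent as a C double
y = [0.0]
z = x + y
x.sort()            # fine: a one-element list needs no comparisons
y.sort()            # fine: likewise
z.sort()            # blows up: comparing the long with 0.0 coerces it to float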
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sun Feb 17 16:08:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 08:08:43 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512497 ] multi-line print statement Message-ID: Feature Requests item #512497, was opened at 2002-02-03 14:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line print statement Initial Comment: Similar to the multi-line comment block suggestion, instead of using \ to say the line continues use print: "line 1" "line 2" ... "line n" Ok, then...thanks ---------------------------------------------------------------------- >Comment By: frobozz electric (frobozzelectric) Date: 2002-02-17 08:08 Message: Logged In: YES user_id=447750 Well, what I was thinking about was more for when you have a large block of text to display, likely with no variables to be evaluated. So, your example, using raise and for, would not raise an exception nor begin a for loop inside a print: block, rather, they would print stdout, i.e., print: "foo" raise "Done" would display fooraise Done print: foo\n raise Done would display foo raise Done The use of quotation marks, would likely be superfluous. I'm not sure how you could cleanly introduce variable evaluation into this type of print block. Mostly, I was just interested in being able to put several lines of text into one print block, as opposed to using \ or several print statements. Thanks ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:19 Message: Logged In: YES user_id=21627 Can you specify more precisely how this feature would work? E.g. would it be legal to write print: "foo" raise "Done" or print: for i in range(10): "bar" If so, what would be the meaning of the latter one? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 From noreply@sourceforge.net Sun Feb 17 16:15:06 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 08:15:06 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512494 ] multi-line comment block clarification Message-ID: Feature Requests item #512494, was opened at 2002-02-03 14:04 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line comment block clarification Initial Comment: The previous post did not show the indenting for the multi-line comment block. What I meant was this #: Comment line 1 Comment line 2 ... Comment line n Whatever. It's just an idea. ---------------------------------------------------------------------- >Comment By: frobozz electric (frobozzelectric) Date: 2002-02-17 08:15 Message: Logged In: YES user_id=447750 I don't _need_ the feature, but I would like it. 
Mostly I wanted to be able to do something like this #: comment comment comment comment rather than # comment # comment # comment # comment There is no reason for it other than aesthetics. Thanks ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:13 Message: Logged In: YES user_id=21627 Why do you need that feature? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512494&group_id=5470 From noreply@sourceforge.net Sun Feb 17 16:22:10 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 08:22:10 -0800 Subject: [Python-bugs-list] [ python-Bugs-513866 ] Float/long comparison anomaly Message-ID: Bugs item #513866, was opened at 2002-02-06 10:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: Later Priority: 5 Submitted By: Andrew Koenig (arkoenig) Assigned to: Nobody/Anonymous (nobody) Summary: Float/long comparison anomaly Initial Comment: Comparing a float and a long appears to convert the long to float and then compare the two floats. This strategy is a problem because the conversion might lose precision. As a result, == is not an equivalence relation and < is not an order relation. For example, it is possible to create three numbers a, b, and c such that a==b, b==c, and a!=c. ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 08:22 Message: Logged In: YES user_id=72053 It looks like the complication is not in finding an algorithm but rather in fitting it into the implementation. I'm not at all sure this is right, but glancing at the code, the comparison seems to happen in the function try_3way_compare in Objects/object.c, which calls PyNumber_CoerceEx if the types aren't equal. PyNumber_CoerceEx ends up calling float_coerce on x,n which "promotes" n to float, similar to what happens when you do mixed arithmetic (like x+n). My guess is that a suitable patch would go into try_3way_compare to specially notice when you're comparing a float and a long, and avoid the coercion. I'm unfamiliar enough with the implementation that I'd probably take a while to get it right, and still possibly end up forgetting to update a refcount or something, leading to leaked memory or mysterious crashes later. Anyway, no, this isn't real important to me, at least at the moment. It just wasn't clear whether there was any difficulty figuring out a useable algorithm. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-17 07:46 Message: Logged In: YES user_id=418174 I think there's a slightly more straightforward algorithm than the one that Paul Rubin (phr) suggested. Again, assume that x is a float and n is a long. We note first that the comparison is trivial unless x and n are both nonzero and have the same sign. We will therefore assume in the rest of this discussion that x and n are strictly positive; the case where they are negative is analogous. Every floating-point implementation has many numbers with the property that the least significant bit in those numbers' representations has a value of 1. In general, if the floating-point representation has k bits, then any integer in the range [2**(k-1),2**k) qualifies. 
Let K be any of these numbers; it doesn't matter which one. Precompute K and store it in both float and long form. This computation is exact because K is an integer that has an exact representation in floating-point form. It is now possible to compare x with K and n with K exactly, without conversion, because we already have K exactly in both forms. If x < K and n >= K, then x < n and we're done. If x > K and n <= K, then x > n and we're done. Otherwise, x and n are on the same side of K (possibly being equal to K). If x >= K and n >= K, then the LSB of x is at least 1, so we can convert x to long without losing information. Therefore, cmp(x, n) is cmp(long(x), n). If x <= K and n <= K, then then n is small enough that it has an exact representation as a float. Therefore cmp(x, n) is cmp(x, float(n)). So I don't think there's any profound algorithmic problem here. Unfortunately, I don't know enough about the details of how comparison is implemented to be willing to try my hand at a patch. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-17 07:31 Message: Logged In: YES user_id=31435 Paul, this isn't an intellectual challenge -- I expect any numerical programmer of ordinary skill could write code to compare a float to a long delivering the mathematically sensible result. There are several ways to do it. Adding "me too" votes doesn't change the priority. How about taking a whack at writing a patch if this is important to you? It's so low on the list of PythonLabs priorities I doubt I'll ever get to it (which is why I unassigned myself: an unassigned bug report is looking for someone to fix it, not a cheerleader ). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:58 Message: Logged In: YES user_id=72053 Oops, I got confused about the order of the two args in the example below. I meant cmp(x,n) in the description and cmp(xl, n) in the code, rather than having n first. Anyway you get the idea. Now I should go back to bed ;-). ---------------------------------------------------------------------- Comment By: paul rubin (phr) Date: 2002-02-17 06:43 Message: Logged In: YES user_id=72053 I hope there's a simple solution to this--it's obvious what the right result should be mathematically if you compare 1L<<10000 with 0.0. It should not raise an error. If the documented behavior leads to raising an error, then there's a bug in the document. I agree that it's not the highest priority bug in the world, but it doesn't seem that complicated. If n is a long and x is a float, both >= 0, what happens if you do this, to implement cmp(n,x): xl = long(x) # if x has a fraction part and int part is == n, then x>n if float(xl)!=x and xl==n: return 1 return cmp(n, xl) If both are < 0, change 1 to -1 above. If x and n are of opposite sign, the positive one is greater. Unless I missed something (which is possible--I'm not too alert right now) the above should be ok in all cases. Basically you use long as the common type to convert to; you do lose information when converting a non-integer, but for the comparison with an integer, you don't need the lost information other than knowing whether it was nonzero, which you find out by converting the long back to a float. 
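A minimal sketch of the approach phr describes above, using the argument order from his follow-up (compare float x with long n, both assumed >= 0); this is an illustration of the idea, not the code proposed for the interpreter:

def cmp_float_long(x, n):
    xl = long(x)                 # truncates any fractional part of x
    if float(xl) != x and xl == n:
        return 1                 # x is n plus a nonzero fraction, so x > n
    return cmp(xl, n)            # otherwise the truncated value decides

As in the comment, negative and mixed-sign operands are assumed to be handled separately before reaching this point.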
---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-09 07:42 Message: Logged In: YES user_id=418174 I completely agree it's not a high-priority item, especially because it may be complicated to fix. I think that the fundamental problem is that there is no common type to which both float and long can be converted without losing information, which complicates both the definition and implementation of comparison. Accordingly, it might make sense to think about this issue in conjunction with future consideration of rational numbers. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-08 23:33 Message: Logged In: YES user_id=31435 I reopened this, but unassigned it since I can't justify working on it (the benefit/cost ratio of fixing it is down in the noise compared to other things that should be done). I no longer think we'd need a PEP to change the behavior, and agree it would be nice to change it. Changing it may surprise people expecting Python to work like C (C99 says that when integral -> floating conversion is in range but can't be done exactly, either of the closest representable floating numbers may be returned; Python inherits the platform C's behavior here for Python int -> Python float conversion (C long -> C double); when the conversion is out of range, C doesn't define what happens, and Python inherits that too before 2.2 (Infinities and NaNs are what I've seen most often, varying by platform); in 2.2 it raises OverflowError). I'm not sure it's possible for a < b and b < c yet c < a under the current rules, but a quick search that prints any counterexample it finds (print `a`, `b`, `c`) would settle it. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 07:33 Message: Logged In: YES user_id=418174 Here is yet another surprise: x=[1L<<10000] y=[0.0] z=x+y Now I can execute x.sort() and y.sort() successfully, but z.sort blows up. ---------------------------------------------------------------------- Comment By: Andrew Koenig (arkoenig) Date: 2002-02-07 05:28 Message: Logged In: YES user_id=418174 The difficulty is that as defined, < is not an order relation, because there exist values a, b, c such that a<b and b<c, yet not a<c. However, there is a threshold value T with the property that if x is a float value >=T, converting x to long will not lose information, and if x is a long value <=T, converting x to float will not lose information. Therefore, instead of always converting to long, it suffices to convert in a direction chosen by comparing the operands to T (without conversion) first. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:59 Message: Logged In: YES user_id=31435 Oops! I meant """ could lead to a different result than the explicit coercion in somefloat == float(somelong) """ ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-06 20:52 Message: Logged In: YES user_id=31435 Since the coercion to float is documented and intended, it's not "a bug" (it's functioning as designed), although you may wish to argue for a different design, in which case making an incompatible change would first require a PEP and community debate. Information loss in operations involving floats comes with the territory, and I don't see a reason to single this particular case out as especially surprising.
OTOH, I expect it would be especially surprising to a majority of users if the implicit coercion in somefloat == somelong could lead to a different result than the explicit coercion in long(somefloat) == somelong Note that the "long" type isn't unique here: the same is true of mixing Python ints with Python floats on boxes where C longs have more bits of precision than C doubles (e.g., Linux for IA64, and Crays). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=513866&group_id=5470 From noreply@sourceforge.net Sun Feb 17 17:20:05 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 09:20:05 -0800 Subject: [Python-bugs-list] [ python-Bugs-514443 ] Python cores with "viewcvs" - Cygwin Message-ID: Bugs item #514443, was opened at 2002-02-07 11:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 Category: Threads Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jari Aalto (jaalto) Assigned to: Nobody/Anonymous (nobody) Summary: Python cores with "viewcvs" - Cygwin Initial Comment: ViewCVS dies on startup with Python 2.2 under W2k Pro sp2 / Cygwin http://www.sourceforge.net/projects/viewcvs This may be problem in the Cygwin python itself, So this bug has been reported to python dev team as well. //root@W2KPICASSO /usr/src/cvs-source/python-viewcvs $ ./standalone.py -g -r /cygdrive/h/data/version- control/cvsroot Traceback (most recent call last): File "./standalone.py", line 540, in ? File "./standalone.py", line 495, in cli File "./standalone.py", line 467, in gui File "./standalone.py", line 406, in __init__ File "/usr/lib/python2.2/threading.py", line 5, in ? 
import thread ImportError: No module named thread cygcheck -s report: 751k 2002/01/19 h:\unix-root\u\bin\cygwin1.dll Cygwin DLL version info: DLL version: 1.3.7 DLL epoch: 19 DLL bad signal mask: 19005 DLL old termios: 5 DLL malloc env: 28 API major: 0 API minor: 51 Shared data: 3 DLL identifier: cygwin1 Mount registry: 2 Cygnus registry name: Cygnus Solutions Cygwin registry name: Cygwin Program options name: Program Options Cygwin mount registry name: mounts v2 Cygdrive flags: cygdrive flags Cygdrive prefix: cygdrive prefix Cygdrive default prefix: Build date: Sat Jan 19 13:20:32 EST 2002 Shared id: cygwin1S3 653k 1998/10/30 h:\bin\sql\mysql- w2k\bin\cygwinb19.dll Cygwin Package Information Package Version ash 20011018-1 autoconf 2.52a-1 autoconf-devel 2.52-4 autoconf-stable 2.13-4 automake 1.5b-1 automake-devel 1.5b-1 automake-stable 1.4p5-5 bash 2.05a-2 bc 1.06-1 binutils 20011002-1 bison 1.30-1 byacc 1.9-1 bzip2 1.0.1-6 clear 1.0 compface 1.4-5 cpio 2.4.2 cron 3.0.1-5 crypt 1.0-1 ctags 5.2-1 curl 7.9.2-1 cvs 1.11.0-1 cygrunsrv 0.94-2 cygutils 0.9.7-1 cygwin 1.3.7-1 dejagnu 20010117-1 diff 0.0 ed 0.2-1 expect 20010117-1 figlet 2.2-1 file 3.37-1 fileutils 4.1-1 findutils 4.1 flex 2.5.4-1 fortune 1.8-1 gawk 3.0.4-1 gcc 2.95.3-5 gdb 20010428-3 gdbm 1.8.0-3 gettext 0.10.40-1 ghostscript 6.51-1 gperf 0.0 grep 2.4.2-1 groff 1.17.2-1 gzip 1.3.2-1 inetutils 1.3.2-17 irc 20010101-1 jbigkit 1.2-4 jpeg 6b-4 less 358-3 libintl 0.10.38-3 libintl1 0.10.40-1 libncurses5 5.2-1 libncurses6 5.2-8 libpng 1.0.12-1 libpng2 1.0.12-1 libreadline4 4.1-2 libreadline5 4.2a-1 libtool 20010531a-1 libtool-devel 20010531-6 libtool-stable 1.4.2-2 libxml2 2.4.13-1 libxslt 1.0.9-1 login 1.4-3 lynx 2.8.4-1 m4 0.0 make 3.79.1-5 man 1.5g-2 mingw 20010917-1 mingw-runtime 1.2-1 mktemp 1.4-1 mt 2.0.1-1 mutt 1.2.5i-6 nano 1.0.7-1 ncftp 3.0.2-2 ncurses 5.2-8 newlib-man 20001118-1 opengl 1.1.0-5 openssh 3.0.2p1-4 openssl 0.9.6c-3 openssl-devel 0.9.6c-2 patch 2.5-2 pcre 3.7-1 perl 5.6.1-2 popt 1.6.2-1 postgresql 7.1.3-2 python 2.2-1 readline 4.2a-1 regex 4.4-2 robots 2.0-1 rsync 2.5.1-2 rxvt 2.7.2-6 rxvt 2.7.2-6 sed 3.02-1 sh-utils 2.0-2 sharutils 4.2.1-2 shellutils 0.0 shutdown 1.2-2 squid 2.4.PRE-STABLE ssmtp 2.38.7-3 tar 1.13.19-1 tcltk 20001125-1 tcsh 6.11.00-3 termcap 20010825-1 terminfo 5.2-1 tetex-beta 20001218-4 texinfo 4.0-5 textutils 2.0.16-1 tiff 3.5.6beta-2 time 1.7-1 units 1.77-1 unzip 5.41-1 vim 6.0.93-1 w32api 20010520-1 wget 1.7.1-1 which 1.5-1 whois 4.5.17-1 xpm 4.0.0-2 xpm-nox 4.1.0-1 zip 2.3-1 zlib 1.1.3-6 ---------------------------------------------------------------------- Comment By: Norman Vine (nhv) Date: 2002-02-17 09:20 Message: Logged In: YES user_id=1020 Your error message says it all :-) >File "/usr/lib/python2.2/threading.py", line 5, in ? >import thread >ImportError: No module named thread You need to compile Python for yourself if you want threading support in Cygwin. Note that threading with Cygwin is problematic yet however I have good results with W2k sp2 and a locally compiled Cygwin Python. 
IMHO this report should be considered closed Norman ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 From noreply@sourceforge.net Sun Feb 17 20:10:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 12:10:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-518846 ] exception cannot be new-style class Message-ID: Bugs item #518846, was opened at 2002-02-17 12:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518846&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Magnus Heino (magnusheino) Assigned to: Nobody/Anonymous (nobody) Summary: exception cannot be new-style class Initial Comment: [magnus@gills magnus]$ python2.2 Python 2.2 (#1, Jan 26 2002, 14:27:24) [GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-98)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class foo(object): ... pass ... >>> raise foo() Traceback (most recent call last): File "", line 1, in ? TypeError: exceptions must be strings, classes, or instances, not foo >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518846&group_id=5470 From noreply@sourceforge.net Sun Feb 17 21:55:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 13:55:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-514443 ] Python cores with "viewcvs" - Cygwin Message-ID: Bugs item #514443, was opened at 2002-02-07 11:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 Category: Threads >Group: Not a Bug >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Jari Aalto (jaalto) >Assigned to: Tim Peters (tim_one) >Summary: Python cores with "viewcvs" - Cygwin Initial Comment: ViewCVS dies on startup with Python 2.2 under W2k Pro sp2 / Cygwin http://www.sourceforge.net/projects/viewcvs This may be problem in the Cygwin python itself, So this bug has been reported to python dev team as well. //root@W2KPICASSO /usr/src/cvs-source/python-viewcvs $ ./standalone.py -g -r /cygdrive/h/data/version- control/cvsroot Traceback (most recent call last): File "./standalone.py", line 540, in ? File "./standalone.py", line 495, in cli File "./standalone.py", line 467, in gui File "./standalone.py", line 406, in __init__ File "/usr/lib/python2.2/threading.py", line 5, in ? 
import thread ImportError: No module named thread cygcheck -s report: 751k 2002/01/19 h:\unix-root\u\bin\cygwin1.dll Cygwin DLL version info: DLL version: 1.3.7 DLL epoch: 19 DLL bad signal mask: 19005 DLL old termios: 5 DLL malloc env: 28 API major: 0 API minor: 51 Shared data: 3 DLL identifier: cygwin1 Mount registry: 2 Cygnus registry name: Cygnus Solutions Cygwin registry name: Cygwin Program options name: Program Options Cygwin mount registry name: mounts v2 Cygdrive flags: cygdrive flags Cygdrive prefix: cygdrive prefix Cygdrive default prefix: Build date: Sat Jan 19 13:20:32 EST 2002 Shared id: cygwin1S3 653k 1998/10/30 h:\bin\sql\mysql- w2k\bin\cygwinb19.dll Cygwin Package Information Package Version ash 20011018-1 autoconf 2.52a-1 autoconf-devel 2.52-4 autoconf-stable 2.13-4 automake 1.5b-1 automake-devel 1.5b-1 automake-stable 1.4p5-5 bash 2.05a-2 bc 1.06-1 binutils 20011002-1 bison 1.30-1 byacc 1.9-1 bzip2 1.0.1-6 clear 1.0 compface 1.4-5 cpio 2.4.2 cron 3.0.1-5 crypt 1.0-1 ctags 5.2-1 curl 7.9.2-1 cvs 1.11.0-1 cygrunsrv 0.94-2 cygutils 0.9.7-1 cygwin 1.3.7-1 dejagnu 20010117-1 diff 0.0 ed 0.2-1 expect 20010117-1 figlet 2.2-1 file 3.37-1 fileutils 4.1-1 findutils 4.1 flex 2.5.4-1 fortune 1.8-1 gawk 3.0.4-1 gcc 2.95.3-5 gdb 20010428-3 gdbm 1.8.0-3 gettext 0.10.40-1 ghostscript 6.51-1 gperf 0.0 grep 2.4.2-1 groff 1.17.2-1 gzip 1.3.2-1 inetutils 1.3.2-17 irc 20010101-1 jbigkit 1.2-4 jpeg 6b-4 less 358-3 libintl 0.10.38-3 libintl1 0.10.40-1 libncurses5 5.2-1 libncurses6 5.2-8 libpng 1.0.12-1 libpng2 1.0.12-1 libreadline4 4.1-2 libreadline5 4.2a-1 libtool 20010531a-1 libtool-devel 20010531-6 libtool-stable 1.4.2-2 libxml2 2.4.13-1 libxslt 1.0.9-1 login 1.4-3 lynx 2.8.4-1 m4 0.0 make 3.79.1-5 man 1.5g-2 mingw 20010917-1 mingw-runtime 1.2-1 mktemp 1.4-1 mt 2.0.1-1 mutt 1.2.5i-6 nano 1.0.7-1 ncftp 3.0.2-2 ncurses 5.2-8 newlib-man 20001118-1 opengl 1.1.0-5 openssh 3.0.2p1-4 openssl 0.9.6c-3 openssl-devel 0.9.6c-2 patch 2.5-2 pcre 3.7-1 perl 5.6.1-2 popt 1.6.2-1 postgresql 7.1.3-2 python 2.2-1 readline 4.2a-1 regex 4.4-2 robots 2.0-1 rsync 2.5.1-2 rxvt 2.7.2-6 rxvt 2.7.2-6 sed 3.02-1 sh-utils 2.0-2 sharutils 4.2.1-2 shellutils 0.0 shutdown 1.2-2 squid 2.4.PRE-STABLE ssmtp 2.38.7-3 tar 1.13.19-1 tcltk 20001125-1 tcsh 6.11.00-3 termcap 20010825-1 terminfo 5.2-1 tetex-beta 20001218-4 texinfo 4.0-5 textutils 2.0.16-1 tiff 3.5.6beta-2 time 1.7-1 units 1.77-1 unzip 5.41-1 vim 6.0.93-1 w32api 20010520-1 wget 1.7.1-1 which 1.5-1 whois 4.5.17-1 xpm 4.0.0-2 xpm-nox 4.1.0-1 zip 2.3-1 zlib 1.1.3-6 ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-17 13:55 Message: Logged In: YES user_id=31435 I'm closing this as Not-a-Bug since Norman is right that thread support isn't yet enabled by default in the Cygwin port. However, feel free to open this again if you really meant "cores". An ordinary Python exception (like failure to import a module that isn't there) should never lead to an actual core dump. If the program simply quit without leaving a core file, "cores" was an incorrect claim. ---------------------------------------------------------------------- Comment By: Norman Vine (nhv) Date: 2002-02-17 09:20 Message: Logged In: YES user_id=1020 Your error message says it all :-) >File "/usr/lib/python2.2/threading.py", line 5, in ? >import thread >ImportError: No module named thread You need to compile Python for yourself if you want threading support in Cygwin. 
Note that threading with Cygwin is problematic yet however I have good results with W2k sp2 and a locally compiled Cygwin Python. IMHO this report should be considered closed Norman ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=514443&group_id=5470 From noreply@sourceforge.net Sun Feb 17 23:44:13 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 15:44:13 -0800 Subject: [Python-bugs-list] [ python-Bugs-518846 ] exception cannot be new-style class Message-ID: Bugs item #518846, was opened at 2002-02-17 12:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518846&group_id=5470 Category: Type/class unification Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Magnus Heino (magnusheino) Assigned to: Nobody/Anonymous (nobody) Summary: exception cannot be new-style class Initial Comment: [magnus@gills magnus]$ python2.2 Python 2.2 (#1, Jan 26 2002, 14:27:24) [GCC 2.96 20000731 (Red Hat Linux 7.1 2.96-98)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> class foo(object): ... pass ... >>> raise foo() Traceback (most recent call last): File "", line 1, in ? TypeError: exceptions must be strings, classes, or instances, not foo >>> ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-17 15:44 Message: Logged In: YES user_id=21627 Interesting. I think we need to deprecate, then remove string exception before allowing arbitrary objects as exceptions. Or we could allow strings to be caught either by __builtin__.str, or by an identical string. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518846&group_id=5470 From noreply@sourceforge.net Mon Feb 18 00:16:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 16:16:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-507713 ] mem leak in imaplib Message-ID: Bugs item #507713, was opened at 2002-01-23 13:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Scott Blomquist (scottdb) Assigned to: Piers Lauder (pierslauder) Summary: mem leak in imaplib Initial Comment: When run in a multithreaded environment, the imaplib will leak memory if not run with the -O option. Long running, multithreaded programs that we have which use the imaplib will run fine for a undefined period of time, then suddenly start to grow in size until they take as much mem as the system will give to them. Once they start to grow, they continue to grow at a pretty consistent rate. Specifically: If the -O option is not used, in the _log method starting on line 1024 in the imaplib class, the imaplib keeps the last 10 commands that are sent. def _log(line): # Keep log of last `_cmd_log_len' interactions for debugging. 
if len(_cmd_log) == _cmd_log_len: del _cmd_log[0] _cmd_log.append((time.time(), line)) Unfortunately, in a multithreaded environment, eventually the len of the list will become larger than the _cmd_log_len, and since the test is for equality, rather than greater-than-equal-to, once the len of the _cmd_log gets larger than _cmd_log_len, nothing will ever be removed from the _cmd_log, and the list will grow without bound. We added the following to test this hypothesis, we created a basic test which creates 40 threads. These threads sit in while 1 loops and create an imaplib and then issue the logout command. We also added the following debug to the method above: if len(_cmd_log) > 10: print 'command log len is:', len(_cmd_log) We started the test, which ran fine, without leaking, for about 10 minutes, and without printing anything out. Somewhere around ten minutes, the process started to grow in size rapidly, and at the same time, the debug started printing out, and the size of the _cmd_log list did indeed grow very large, very fast. We repeated the test and the same symptoms occured, this time after only 5 minutes. ---------------------------------------------------------------------- >Comment By: Piers Lauder (pierslauder) Date: 2002-02-17 16:16 Message: Logged In: YES user_id=196212 On further consideration, I think that the command logging should be done with a circular buffer implemented using a dictionary - it has two advantages: it's immune to thread contentions, and it's twice as fast as a truncating list. I'm also convinced these routines should be per IMAP4 instance - ie: per socket - so i've moved them into the class. I've attached a diff with the current CVS If noone disagrees, I'll make these changes. ---------------------------------------------------------------------- Comment By: Piers Lauder (pierslauder) Date: 2002-02-16 02:39 Message: Logged In: YES user_id=196212 I aggree that the line: if len(_cmd_log) == _cmd_log_len: should be changed, though I favour the form: while len(_cmd_log) >= _cmd_log_len: del _cmd_log[0] rather than the version suggested in the patch: if len(_cmd_log) > _cmd_log_len: del _cmd_log[:-_cmd_log_len] However, if imaplib is gpoing to be used by multiple threads, perhaps the best solution is to move these debugging routines entirely into the IMAP4 class, so that the logs are per-connection, rather than global? ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:09 Message: Logged In: YES user_id=3066 It looks like the problem still exists in Python 2.1.2, 2.2, and CVS. I've attached a patch that I think solves this problem, but this isn't easy for me to test. Please check this. Assigning to Piers Lauder since he knows more about this module than I do. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 From noreply@sourceforge.net Mon Feb 18 06:43:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 22:43:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-518985 ] Ellipsis semantics undefined Message-ID: Bugs item #518985, was opened at 2002-02-17 22:43 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518985&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Wayne C. Smith (wcsmith) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: Ellipsis semantics undefined Initial Comment: The Python Reference Manual 2/4/02 2.3a0 does not describe the effect of the ellipsis in a list expression. Either that or one of the two entries in the Index is misdirected. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518985&group_id=5470 From noreply@sourceforge.net Mon Feb 18 07:16:52 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 17 Feb 2002 23:16:52 -0800 Subject: [Python-bugs-list] [ python-Bugs-518989 ] Import statement Index ref. broken Message-ID: Bugs item #518989, was opened at 2002-02-17 23:16 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518989&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Wayne C. Smith (wcsmith) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Import statement Index ref. broken Initial Comment: In the Python Reference Manual 2/4/02 2.3a0 the Index entry for the Import statement points to the wrong place. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=518989&group_id=5470 From noreply@sourceforge.net Mon Feb 18 10:24:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 02:24:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-519028 ] make-pyexpat failed Message-ID: Bugs item #519028, was opened at 2002-02-18 02:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519028&group_id=5470 Category: XML Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Chalaoux (copter) Assigned to: Nobody/Anonymous (nobody) Summary: make-pyexpat failed Initial Comment: Hi, This the make_pyexpat report after my compilation of python 2.1.1 on a sun machine. Bye #################################################### >python Lib/test/test_pyexpat.py OK. OK. OK. OK. 
PI: 'xml-stylesheet' 'href="stylesheet.css"' Comment: ' comment data ' Notation declared: ('notation', None, 'notation.jpeg', None) Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation') Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\xe1 \xbd\x80'} NS decl: 'myns' 'http://www.python.org/namespace' Start element: 'http://www.python.org/namespace!subelement' {} Character data: 'Contents of subelements' End element: 'http://www.python.org/namespace!subelement' End of NS decl: 'myns' Start element: 'sub2' {} Start of CDATA section Character data: 'contents of CDATA section' End of CDATA section End element: 'sub2' External entity ref: (None, 'entity.file', None) End element: 'root' PI: u'xml-stylesheet' u'href="stylesheet.css"' Comment: u' comment data ' Notation declared: (u'notation', None, u'notation.jpeg', None) Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation') Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\u1f40'} NS decl: u'myns' u'http://www.python.org/namespace' Start element: u'http://www.python.org/namespace!subelement' {} Character data: u'Contents of subelements' End element: u'http://www.python.org/namespace!subelement' End of NS decl: u'myns' Start element: u'sub2' {} Start of CDATA section Character data: u'contents of CDATA section' End of CDATA section End element: u'sub2' External entity ref: (None, u'entity.file', None) End element: u'root' PI: u'xml-stylesheet' u'href="stylesheet.css"' Comment: u' comment data ' Notation declared: (u'notation', None, u'notation.jpeg', None) Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation') Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\u1f40'} NS decl: u'myns' u'http://www.python.org/namespace' Start element: u'http://www.python.org/namespace!subelement' {} Character data: u'Contents of subelements' End element: u'http://www.python.org/namespace!subelement' End of NS decl: u'myns' Start element: u'sub2' {} Start of CDATA section Character data: u'contents of CDATA section' End of CDATA section End element: u'sub2' External entity ref: (None, u'entity.file', None) End element: u'root' Testing constructor for proper handling of namespace_separator values: Legal values tested o.k. Caught expected TypeError: ParserCreate() argument 2 must be string or None, not int Caught expected ValueError: namespace_separator must be at most one character, omitted, or None Failed to catch expected ValueError. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519028&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:14:06 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:14:06 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:17:32 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:17:32 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519230 ] hook method for 'is' operator Message-ID: Feature Requests item #519230, was opened at 2002-02-18 09:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" 
else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:30:09 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:30:09 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2002-02-18 09:30 Message: Logged In: YES user_id=35752 The "is" operator has well defined semantics. It compares object identity. Allowing it to be redefined would a terrible idea, IMHO. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:48:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:48:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-504343 ] Unicode docstrings and new style classes Message-ID: Bugs item #504343, was opened at 2002-01-16 04:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 Category: Type/class unification Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Walter Dörwald (doerwalter) Assigned to: Martin v. Löwis (loewis) Summary: Unicode docstrings and new style classes Initial Comment: Unicode docstrings don't work with new style classes. With old style classes they work: ---- class foo: u"föö" class bar(object): u"bär" print repr(foo.__doc__) print repr(bar.__doc__) ---- This prints ---- u'f\xf6\xf6' None ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-18 09:47 Message: Logged In: YES user_id=21627 Thanks for the patch. Applied as typeobject.c 2.127. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2002-02-12 08:07 Message: Logged In: NO Not forgotten, but I've been busy, and will continue to be so... ;-( --Guido ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-02-12 07:21 Message: Logged In: YES user_id=146903 Just wondering if this bug has been forgotten or not. 
My patch came out a bit weird w.r.t. line wrapping, so you can get here instead: http://www.daa.com.au/~james/files/type-doc.patch I would have added it as an attachment if the SF bug tracker didn't prevent me from doing so (bugzilla is much nicer to use for things like this). ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 02:10 Message: Logged In: YES user_id=146903 Put together a patch that gets rid of the type.__doc__ property, and sets __doc__ in PyType_Ready() (if appropriate). Seems to work okay in my tests and as a bonus, "print type.__doc__" actually prints documentation on using the type() function :) SF doesn't seem to give me a way to attach a patch to this bug, so I will paste a copy of the patch here (if it is mangled, email me at james@daa.com.au for a copy): --- Python-2.2/Objects/typeobject.c.orig Tue Dec 18 01:14:22 2001 +++ Python-2.2/Objects/typeobject.c Sun Jan 27 17:56:37 2002 @@ -8,7 +8,6 @@ static PyMemberDef type_members[] = { {"__basicsize__", T_INT, offsetof(PyTypeObject,tp_basicsize),READONLY}, {"__itemsize__", T_INT, offsetof(PyTypeObject, tp_itemsize), READONLY}, {"__flags__", T_LONG, offsetof(PyTypeObject, tp_flags), READONLY}, - {"__doc__", T_STRING, offsetof(PyTypeObject, tp_doc), READONLY}, {"__weakrefoffset__", T_LONG, offsetof(PyTypeObject, tp_weaklistoffset), READONLY}, {"__base__", T_OBJECT, offsetof(PyTypeObject, tp_base), READONLY}, @@ -1044,9 +1043,9 @@ type_new(PyTypeObject *metatype, PyObjec } /* Set tp_doc to a copy of dict['__doc__'], if the latter is there - and is a string (tp_doc is a char* -- can't copy a general object - into it). - XXX What if it's a Unicode string? Don't know -- this ignores it. + and is a string. Note that the tp_doc slot will only be used + by C code -- python code will use the version in tp_dict, so + it isn't that important that non string __doc__'s are ignored. */ { PyObject *doc = PyDict_GetItemString(dict, "__doc__"); @@ -2024,6 +2023,19 @@ PyType_Ready(PyTypeObject *type) inherit_slots(type, (PyTypeObject *)b); } + /* if the type dictionary doesn't contain a __doc__, set it from + the tp_doc slot. + */ + if (PyDict_GetItemString(type->tp_dict, "__doc__") == NULL) { + if (type->tp_doc != NULL) { + PyObject *doc = PyString_FromString(type->tp_doc); + PyDict_SetItemString(type->tp_dict, "__doc__", doc); + Py_DECREF(doc); + } else { + PyDict_SetItemString(type->tp_dict, "__doc__", Py_None); + } + } + /* Some more special stuff */ base = type->tp_base; if (base != NULL) { ---------------------------------------------------------------------- Comment By: James Henstridge (jhenstridge) Date: 2002-01-27 01:37 Message: Logged In: YES user_id=146903 I am posting some comments about this patch after my similar bug was closed as a duplicate: http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=507394 I just tested the typeobject.c patch, and it doesn't work when using a descriptor as the __doc__ for an object (the descriptor itself is returned for class.__doc__ rather than the result of the tp_descr_get function). 
With the patch applied, the output of the program attached to the above mentioned bug is: OldClass.__doc__ = 'object=None type=OldClass' OldClass().__doc__ = 'object=OldClass instance type=OldClass' NewClass.__doc__ = <__main__.DocDescr object at 0x811ce34> NewClass().__doc__ = 'object=NewClass instance type=NewClass' The suggestion I gave in the other bug is to get rid of the type.__doc__ property/getset all together, and make PyType_Ready() set __doc__ in tp_dict based on the value of tp_doc. Is there any reason why this wouldn't work? (it would seem to give behaviour more consistant with old style classes, which would be good). I will look at producing a patch to do this shortly. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 08:14 Message: Logged In: YES user_id=89016 This sound much better. With my current patch all the docstrings for the builltin types are gone, because int etc. never goes through typeobject.c/type_new(). I updated the patch to use Guido's method. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-01-17 06:25 Message: Logged In: YES user_id=6380 Wouldn't it be easier to set the __doc__ attribute in tp_dict and be done with it? That's what classic classes do. The accessor should still be a bit special: it should be implemented as a property (in tp_getsets), and first look for __doc__ in tp_dict and fall back to tp_doc. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-17 06:19 Message: Logged In: YES user_id=89016 OK, I've attached the patch. Note that I had to change the return value of PyStructSequence_InitType from void to int. Introducing tp_docobject should provide backwards compatibility for C extensions that still want to use tp_doc as char *. If this is not relevant then we could switch to PyObject *tp_doc immediately, but this complicates initializing a static type structure. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-17 05:45 Message: Logged In: YES user_id=21627 Adding tp_docobject would work, although it may be somewhat hackish (why should we have this kind of redundancy). I'm not sure how you will convert that to the 8bit version, though: what encoding? If you use the default encoding, tp_doc will be sometimes set, sometimes it won't. In any case, I'd encourage you to produce a patch. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2002-01-16 05:03 Message: Logged In: YES user_id=89016 What we could do is add a new slot tp_docobject, that holds the doc object. Then type_members would include {"__doc__", T_OBJECT, offsetof(PyTypeObject, tp_docobject), READONLY}, tp_doc should be initialized with an 8bit version of tp_docobject (using the default encoding and error='ignore' if tp_docobject is unicode). Does this sound reasonably? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-16 04:18 Message: Logged In: YES user_id=21627 There is a good chance that is caused by the lines following XXX What if it's a Unicode string? Don't know -- this ignores it. in Objects/typeobject.c. :-) Would you like to investigate the options and propose a patch? 
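To make the behaviour under discussion easier to follow, here is a small pure-Python model of the before/after semantics, purely illustrative: the names tp_doc and tp_dict only stand in for the C slots, and none of this is the actual typeobject.c code.

def make_type(namespace):
    # models type_new(): tp_doc is only filled in for plain 8-bit string
    # docstrings; everything from the class body lands in tp_dict
    doc = namespace.get('__doc__')
    if not isinstance(doc, str):
        doc = None
    return {'tp_doc': doc, 'tp_dict': dict(namespace)}

def get_doc_old(t):
    # pre-patch accessor: __doc__ is served from the tp_doc slot only,
    # so a Unicode docstring shows up as None
    return t['tp_doc']

def get_doc_fixed(t):
    # the accessor idea discussed above: tp_dict first, tp_doc as fallback
    return t['tp_dict'].get('__doc__', t['tp_doc'])

t = make_type({'__doc__': u'b\xe4r'})
print repr(get_doc_old(t))      # None
print repr(get_doc_fixed(t))    # u'b\xe4r'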
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504343&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:50:03 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:50:03 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:50 Message: Logged In: YES user_id=118203 You can say the same for all the operators in python. The default behavior would be object identity, but: x is y is the same as doing id(x)==id(y) So the 'is' operator is actually superfluous except for its readability value. Your comment seems to me like a knee jerk resistance to change. Now if you were to tell me that it would make python drastically slower or that it would be difficult to implement, then you would have a good point... ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2002-02-18 09:30 Message: Logged In: YES user_id=35752 The "is" operator has well defined semantics. It compares object identity. Allowing it to be redefined would a terrible idea, IMHO. 
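For readers following this thread, a minimal interactive illustration of what the current, unhooked 'is' does: it tests object identity, independently of __eq__ or any other user-defined hook, and gives the same answers as comparing id() values.

a = [1, 2, 3]
b = [1, 2, 3]
c = a
print a == b                           # true: equal values
print a is b                           # false: two distinct objects
print a is c                           # true: the very same object
print id(a) == id(b), id(a) == id(c)   # same answers as 'is'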
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:51:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:51:02 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519230 ] hook method for 'is' operator Message-ID: Feature Requests item #519230, was opened at 2002-02-18 09:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:51 Message: Logged In: YES user_id=118203 This is a duplicate entry. sf.net was timing out, so I submitted it twice. It seems that the first time it went through... oops! see http://sourceforge.net/tracker/index.php?func=detail&aid=519227&group_id=5470&atid=355470 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:52:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:52:00 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519230 ] hook method for 'is' operator Message-ID: Feature Requests item #519230, was opened at 2002-02-18 09:13 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 Category: Python Interpreter Core Group: None >Status: Deleted Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:51 Message: Logged In: YES user_id=118203 This is a duplicate entry. sf.net was timing out, so I submitted it twice. It seems that the first time it went through... oops! 
see http://sourceforge.net/tracker/index.php?func=detail&aid=519227&group_id=5470&atid=355470 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519230&group_id=5470 From noreply@sourceforge.net Mon Feb 18 17:52:22 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 09:52:22 -0800 Subject: [Python-bugs-list] [ python-Bugs-519028 ] make-pyexpat failed Message-ID: Bugs item #519028, was opened at 2002-02-18 02:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519028&group_id=5470 Category: XML Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Chalaoux (copter) Assigned to: Nobody/Anonymous (nobody) Summary: make-pyexpat failed Initial Comment: Hi, This the make_pyexpat report after my compilation of python 2.1.1 on a sun machine. Bye #################################################### >python Lib/test/test_pyexpat.py OK. OK. OK. OK. PI: 'xml-stylesheet' 'href="stylesheet.css"' Comment: ' comment data ' Notation declared: ('notation', None, 'notation.jpeg', None) Unparsed entity decl: ('unparsed_entity', None, 'entity.file', None, 'notation') Start element: 'root' {'attr1': 'value1', 'attr2': 'value2\xe1 \xbd\x80'} NS decl: 'myns' 'http://www.python.org/namespace' Start element: 'http://www.python.org/namespace!subelement' {} Character data: 'Contents of subelements' End element: 'http://www.python.org/namespace!subelement' End of NS decl: 'myns' Start element: 'sub2' {} Start of CDATA section Character data: 'contents of CDATA section' End of CDATA section End element: 'sub2' External entity ref: (None, 'entity.file', None) End element: 'root' PI: u'xml-stylesheet' u'href="stylesheet.css"' Comment: u' comment data ' Notation declared: (u'notation', None, u'notation.jpeg', None) Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation') Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\u1f40'} NS decl: u'myns' u'http://www.python.org/namespace' Start element: u'http://www.python.org/namespace!subelement' {} Character data: u'Contents of subelements' End element: u'http://www.python.org/namespace!subelement' End of NS decl: u'myns' Start element: u'sub2' {} Start of CDATA section Character data: u'contents of CDATA section' End of CDATA section End element: u'sub2' External entity ref: (None, u'entity.file', None) End element: u'root' PI: u'xml-stylesheet' u'href="stylesheet.css"' Comment: u' comment data ' Notation declared: (u'notation', None, u'notation.jpeg', None) Unparsed entity decl: (u'unparsed_entity', None, u'entity.file', None, u'notation') Start element: u'root' {u'attr1': u'value1', u'attr2': u'value2\u1f40'} NS decl: u'myns' u'http://www.python.org/namespace' Start element: u'http://www.python.org/namespace!subelement' {} Character data: u'Contents of subelements' End element: u'http://www.python.org/namespace!subelement' End of NS decl: u'myns' Start element: u'sub2' {} Start of CDATA section Character data: u'contents of CDATA section' End of CDATA section End element: u'sub2' External entity ref: (None, u'entity.file', None) End element: u'root' Testing constructor for proper handling of namespace_separator values: Legal values tested o.k. 
Caught expected TypeError: ParserCreate() argument 2 must be string or None, not int Caught expected ValueError: namespace_separator must be at most one character, omitted, or None Failed to catch expected ValueError. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-18 09:52 Message: Logged In: YES user_id=21627 What is the expat version you are using? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519028&group_id=5470 From noreply@sourceforge.net Mon Feb 18 21:52:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 13:52:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-519621 ] __slots__ may lead to undetected cycles Message-ID: Bugs item #519621, was opened at 2002-02-18 13:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519621&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 6 Submitted By: Martin v. Löwis (loewis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ may lead to undetected cycles Initial Comment: Please see the attached script. It should print Deleting Deleted done [and actually does when you remove the cycle], but prints Deleting done ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=519621&group_id=5470 From noreply@sourceforge.net Tue Feb 19 00:34:21 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 18 Feb 2002 16:34:21 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-18 16:34 Message: Logged In: YES user_id=31435 I'm afraid I agree with Neil that this would be a disaster. There's code that absolutely depends on "is" meaning object identity. One example (you'll find others if you just look for them): _deepcopy_tuple() in the standard copy.py relies on it, in its second loop. If the operator ever "lied" about object identity, the semantics of deep copies could break in amazing ways. 
There's lots of "foundational" code in a similar boat (e.g., my own Cyclops.py replies on current "is" semantics all over the place, and that's an important example because it's not in the standard distribution: we have no way to locate, let alone repair, all the code that would break). If you want to pursue this, then because it's not backward compatible, it will require a PEP to propose the change and introduce a corresponding __future__ statement. The other thing you'll get resistance on is that "is" is dirt cheap today, and some code relies on that too. If it has to look for an object override, what's currently an exceptionally fast implementation: case PyCmp_IS: case PyCmp_IS_NOT: res = (v == w); if (op == (int) PyCmp_IS_NOT) res = !res; break; will at least have to do new indirection dances too through the type objects (to see first whether either operand overrides "is"). ---------------------------------------------------------------------- Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:50 Message: Logged In: YES user_id=118203 You can say the same for all the operators in python. The default behavior would be object identity, but: x is y is the same as doing id(x)==id(y) So the 'is' operator is actually superfluous except for its readability value. Your comment seems to me like a knee jerk resistance to change. Now if you were to tell me that it would make python drastically slower or that it would be difficult to implement, then you would have a good point... ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2002-02-18 09:30 Message: Logged In: YES user_id=35752 The "is" operator has well defined semantics. It compares object identity. Allowing it to be redefined would a terrible idea, IMHO. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Tue Feb 19 14:33:34 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 06:33:34 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Dan Parisien (mathematician) Date: 2002-02-19 06:33 Message: Logged In: YES user_id=118203 what about: x is y -> id(x)==id(y) or x.__is__(y) than old code would not break & one could use is for more than just object identity equivalence. 
Of course if the two operands are the same object, then it always returns true. I would rather see if dbrow is empty: # do something than if dbrow.isEmpty(): # do something which is like java's string equivalency test strvar.isequal("to another string"). This way an object could 'be' anything :) Hey, well maybe for Python 3000. If so, I also recommend adding an operator called 'is a' which is equivalent to isinstance() in current python. if d is a dict: # do something ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-18 16:34 Message: Logged In: YES user_id=31435 I'm afraid I agree with Neil that this would be a disaster. There's code that absolutely depends on "is" meaning object identity. One example (you'll find others if you just look for them): _deepcopy_tuple() in the standard copy.py relies on it, in its second loop. If the operator ever "lied" about object identity, the semantics of deep copies could break in amazing ways. There's lots of "foundational" code in a similar boat (e.g., my own Cyclops.py replies on current "is" semantics all over the place, and that's an important example because it's not in the standard distribution: we have no way to locate, let alone repair, all the code that would break). If you want to pursue this, then because it's not backward compatible, it will require a PEP to propose the change and introduce a corresponding __future__ statement. The other thing you'll get resistance on is that "is" is dirt cheap today, and some code relies on that too. If it has to look for an object override, what's currently an exceptionally fast implementation: case PyCmp_IS: case PyCmp_IS_NOT: res = (v == w); if (op == (int) PyCmp_IS_NOT) res = !res; break; will at least have to do new indirection dances too through the type objects (to see first whether either operand overrides "is"). ---------------------------------------------------------------------- Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:50 Message: Logged In: YES user_id=118203 You can say the same for all the operators in python. The default behavior would be object identity, but: x is y is the same as doing id(x)==id(y) So the 'is' operator is actually superfluous except for its readability value. Your comment seems to me like a knee jerk resistance to change. Now if you were to tell me that it would make python drastically slower or that it would be difficult to implement, then you would have a good point... ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2002-02-18 09:30 Message: Logged In: YES user_id=35752 The "is" operator has well defined semantics. It compares object identity. Allowing it to be redefined would a terrible idea, IMHO. 
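The fallback semantics proposed in this thread can be mocked up as an ordinary function without touching the interpreter, which is one way to experiment with the idea before (or instead of) writing a PEP. The sketch below is only an illustration of the proposal: __is__ is not a real hook in any Python release, and the choice to try the hook on both operands is an assumption, since the request does not spell out the dispatch rules.

def is_like(x, y):
    # proposed semantics: true object identity always wins ...
    if x is y:
        return 1
    # ... otherwise defer to an optional __is__ hook on either operand
    hook = getattr(x, '__is__', None)
    if hook is not None and hook(y):
        return 1
    hook = getattr(y, '__is__', None)
    if hook is not None and hook(x):
        return 1
    return 0

Open = ("OPEN",)

class File:
    id = 0
    def __is__(self, other):
        return other == Open and self.id != 0

f = File()
f.id = 42
print is_like(f, Open)      # true; reads like "f is Open" in the proposal
print is_like(f, ("X",))    # false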
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Tue Feb 19 16:40:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 08:40:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-520045 ] memory leak in descr_new Message-ID: Bugs item #520045, was opened at 2002-02-19 08:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520045&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Steve Glaser (sglaser) Assigned to: Nobody/Anonymous (nobody) Summary: memory leak in descr_new Initial Comment: I was trying to understand how the new descriptor stuff worked and ran across this. It's unlikely that anyone ever got caught by this since it's a leak only when you InternFromString fails and you want to actually delete a type. Current CVS tree doesn't fix this (but this is my first time on sourceforge so I might not be looking in the right place). static PyDescrObject * descr_new(PyTypeObject *descrtype, PyTypeObject *type, char *name) { PyDescrObject *descr; descr = (PyDescrObject *)PyType_GenericAlloc (descrtype, 0); if (descr != NULL) { Py_XINCREF(type); descr->d_type = type; descr->d_name = PyString_InternFromString(name); if (descr->d_name == NULL) { Py_DECREF(descr); Py_XDECREF(type); // BUGFIX descr = NULL; } } return descr; } ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520045&group_id=5470 From noreply@sourceforge.net Tue Feb 19 18:14:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 10:14:59 -0800 Subject: [Python-bugs-list] [ python-Bugs-520045 ] memory leak in descr_new Message-ID: Bugs item #520045, was opened at 2002-02-19 08:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520045&group_id=5470 Category: Python Interpreter Core Group: Python 2.2 >Status: Closed >Resolution: Fixed >Priority: 1 Submitted By: Steve Glaser (sglaser) Assigned to: Nobody/Anonymous (nobody) Summary: memory leak in descr_new Initial Comment: I was trying to understand how the new descriptor stuff worked and ran across this. It's unlikely that anyone ever got caught by this since it's a leak only when you InternFromString fails and you want to actually delete a type. Current CVS tree doesn't fix this (but this is my first time on sourceforge so I might not be looking in the right place). static PyDescrObject * descr_new(PyTypeObject *descrtype, PyTypeObject *type, char *name) { PyDescrObject *descr; descr = (PyDescrObject *)PyType_GenericAlloc (descrtype, 0); if (descr != NULL) { Py_XINCREF(type); descr->d_type = type; descr->d_name = PyString_InternFromString(name); if (descr->d_name == NULL) { Py_DECREF(descr); Py_XDECREF(type); // BUGFIX descr = NULL; } } return descr; } ---------------------------------------------------------------------- >Comment By: Steve Glaser (sglaser) Date: 2002-02-19 10:14 Message: Logged In: YES user_id=463610 as rose ann adanna would say never mind. I misread the code so the DECREF just before my "bugfix" does the trick. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520045&group_id=5470 From noreply@sourceforge.net Tue Feb 19 18:29:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 10:29:07 -0800 Subject: [Python-bugs-list] [ python-Bugs-520087 ] Invalid PyWeakref_GetObject info Message-ID: Bugs item #520087, was opened at 2002-02-19 10:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520087&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Yakov Markovitch (markovitch) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Invalid PyWeakref_GetObject info Initial Comment: PyWeakref_GetObject() is said to return a new reference, whereas it returns borrowed reference instead. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520087&group_id=5470 From noreply@sourceforge.net Wed Feb 20 01:12:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 17:12:38 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-519227 ] hook method for 'is' operator Message-ID: Feature Requests item #519227, was opened at 2002-02-18 09:12 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: hook method for 'is' operator Initial Comment: Being able to overload the 'is' operator would lead to nicer more readable code: # constant Open = ("OPEN",) # dummy class for my example class File: id = 0 def __init__(self, file=None): if file is not None: self.open(file) # overload 'is' operator def __is__(self, other): if id(self)==id(other): # default return 1 elif other==("OPEN",) and self.id!=0: return 1 return 0 def open(self, file): self.id = open(file).fileno f = File("myfile.txt") if f is Open: print "File is open!" else: print "File is not open" 'is not' could just test __is__ and return 'not retval' ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-19 17:12 Message: Logged In: YES user_id=31435 Unless I'm missing something intended, x is y -> id(x)==id(y) or x.__is__(y) is true whenever "x is y" is true today, but may be true even in cases where "x is y" is false today. If so, it's not backward compatible, and general code relying on current semantics would still break. For example, any kind of general code that's crawling over an object graph needs to know whether it's seen an object before, current "is" can and is used to answer that question precisely, and it's as bad to tell it that two distinct objects are identical as it is to tell it that two identical objects are distinct. The standard copy.deepcopy() is one example of "general code that's crawling over an object graph". OO languages with object identity really need a way to ask about object identity, and "is" has always been that way in Python (btw, "is" existed long before "id()" was introduced). For that reason, if you write a PEP, I think you'd get farther by leaving "is" alone and proposing another spelling instead. 
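To make the point about graph-crawling code concrete, here is the shape of the identity bookkeeping such code typically uses. This is an illustrative sketch in the spirit of copy.deepcopy's memo dictionary, not the actual copy.py source: visited objects are remembered by id(), and an object counts as "already seen" only when it really is the same object. If 'is' (or an id()-based test) could report two distinct objects as identical, shared substructure and cycles would be handled incorrectly.

def collect(obj, seen=None):
    # walk a graph of nested lists/tuples/dicts and return every object
    # reachable from obj, visiting shared parts only once
    if seen is None:
        seen = {}
    if id(obj) in seen:        # identity test; it must never lie
        return seen
    seen[id(obj)] = obj        # keep a reference so the id stays valid
    if isinstance(obj, (list, tuple)):
        for item in obj:
            collect(item, seen)
    elif isinstance(obj, dict):
        for k, v in obj.items():
            collect(k, seen)
            collect(v, seen)
    return seen

cycle = []
cycle.append(cycle)            # self-referential list
print len(collect(cycle))      # 1, and no infinite recursion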
---------------------------------------------------------------------- Comment By: Dan Parisien (mathematician) Date: 2002-02-19 06:33 Message: Logged In: YES user_id=118203 what about: x is y -> id(x)==id(y) or x.__is__(y) than old code would not break & one could use is for more than just object identity equivalence. Of course if the two operands are the same object, then it always returns true. I would rather see if dbrow is empty: # do something than if dbrow.isEmpty(): # do something which is like java's string equivalency test strvar.isequal("to another string"). This way an object could 'be' anything :) Hey, well maybe for Python 3000. If so, I also recommend adding an operator called 'is a' which is equivalent to isinstance() in current python. if d is a dict: # do something ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-18 16:34 Message: Logged In: YES user_id=31435 I'm afraid I agree with Neil that this would be a disaster. There's code that absolutely depends on "is" meaning object identity. One example (you'll find others if you just look for them): _deepcopy_tuple() in the standard copy.py relies on it, in its second loop. If the operator ever "lied" about object identity, the semantics of deep copies could break in amazing ways. There's lots of "foundational" code in a similar boat (e.g., my own Cyclops.py replies on current "is" semantics all over the place, and that's an important example because it's not in the standard distribution: we have no way to locate, let alone repair, all the code that would break). If you want to pursue this, then because it's not backward compatible, it will require a PEP to propose the change and introduce a corresponding __future__ statement. The other thing you'll get resistance on is that "is" is dirt cheap today, and some code relies on that too. If it has to look for an object override, what's currently an exceptionally fast implementation: case PyCmp_IS: case PyCmp_IS_NOT: res = (v == w); if (op == (int) PyCmp_IS_NOT) res = !res; break; will at least have to do new indirection dances too through the type objects (to see first whether either operand overrides "is"). ---------------------------------------------------------------------- Comment By: Dan Parisien (mathematician) Date: 2002-02-18 09:50 Message: Logged In: YES user_id=118203 You can say the same for all the operators in python. The default behavior would be object identity, but: x is y is the same as doing id(x)==id(y) So the 'is' operator is actually superfluous except for its readability value. Your comment seems to me like a knee jerk resistance to change. Now if you were to tell me that it would make python drastically slower or that it would be difficult to implement, then you would have a good point... ---------------------------------------------------------------------- Comment By: Neil Schemenauer (nascheme) Date: 2002-02-18 09:30 Message: Logged In: YES user_id=35752 The "is" operator has well defined semantics. It compares object identity. Allowing it to be redefined would a terrible idea, IMHO. 
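One "other spelling" that already works today, and gives much of the readability the original example is after, is to make the Open sentinel an object with its own __eq__ and compare with ==, leaving 'is' alone. This is a hedged sketch, not anything taken from the tracker item itself, and it assumes the File side does not define __eq__ of its own, so the comparison falls through to the sentinel's hook.

class _OpenState:
    # sentinel that knows how to compare itself against File objects
    def __eq__(self, other):
        return getattr(other, 'id', 0) != 0
    def __ne__(self, other):
        return not self.__eq__(other)

Open = _OpenState()

class File:
    id = 0

f = File()
print f == Open     # false: not open yet
f.id = 42           # pretend a file was opened
print f == Open     # true: reads almost as nicely as "f is Open"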
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=519227&group_id=5470 From noreply@sourceforge.net Wed Feb 20 04:44:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 20:44:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-520325 ] Double underscore needs clarification Message-ID: Bugs item #520325, was opened at 2002-02-19 20:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520325&group_id=5470 Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Submitted By: Wayne C. Smith (wcsmith) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Double underscore needs clarification Initial Comment: Double underscore (DU) is pervasive in Python but nowhere is clearly explained. In print and onscreen from formatted html it is visually ambiguous because the DUs run together to appear as one. The table in 2.3.2 of the Reference Manual 2.3a0 2/4/02 presents the contrast with single underscore but it is subtle. The usage of DU should be clarified in accompanying text and perhaps mentioned again in the Library Reference. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520325&group_id=5470 From noreply@sourceforge.net Wed Feb 20 05:08:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 19 Feb 2002 21:08:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-520087 ] Invalid PyWeakref_GetObject info Message-ID: Bugs item #520087, was opened at 2002-02-19 10:29 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520087&group_id=5470 Category: Documentation Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Yakov Markovitch (markovitch) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Invalid PyWeakref_GetObject info Initial Comment: PyWeakref_GetObject() is said to return a new reference, whereas it returns borrowed reference instead. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-19 21:08 Message: Logged In: YES user_id=3066 Fixed in Doc/api/refcounts.dat revisions 1.39 and 1.38.6.1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520087&group_id=5470 From noreply@sourceforge.net Wed Feb 20 08:22:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 00:22:38 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-495086 ] dict.popitem(key=None) Message-ID: Feature Requests item #495086, was opened at 2001-12-19 08:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=495086&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Parisien (mathematician) Assigned to: Nobody/Anonymous (nobody) Summary: dict.popitem(key=None) Initial Comment: Would it be possible to add an extra argument to the popitem method of DictionaryType so one can both retrieve a dict item and delete it at the same time? It would be so handy. 
Without the optional argument, it would work the same way dict.popitem works now. Example:

>>> d = dict([(x,x) for x in range(10)])
>>> d
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}
>>> d.popitem() # retrieves "random" key->val pair
(0, 0)
>>> d.popitem(4) # val=d[4]; del d[4]; return val
4
>>> d.popitem(6) # val=d[6]; del d[6]; return val
6
>>> d # missing keys [0, 4, 6]
{1: 1, 2: 2, 3: 3, 5: 5, 7: 7, 8: 8, 9: 9}

---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2002-02-20 00:22 Message: Logged In: YES user_id=80475 Great idea! The rationale is just like that for .setdefault() in providing a fast, simple, single method, single look-up replacement for a commonly used sequence of dictionary operations. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:17 Message: Logged In: YES user_id=21627 Also requested as http://sourceforge.net/tracker/index.php?func=detail&aid=504880&group_id=5470&atid=355470 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=495086&group_id=5470 From noreply@sourceforge.net Wed Feb 20 08:39:34 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 00:39:34 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-520382 ] Update Shelve to be more dictionary like Message-ID: Feature Requests item #520382, was opened at 2002-02-20 00:39 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=520382&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Raymond Hettinger (rhettinger) Assigned to: Nobody/Anonymous (nobody) Summary: Update Shelve to be more dictionary like Initial Comment: It's great to be able to add persistence by replacing a dictionary declaration with a shelf; however, it is not as substitutable as it could be. Most importantly, we should add __iter__ so that 'for k in d' works for shelves as well as dictionaries. Also add .items, .iteritems, .iterkeys, .itervalues, .popitem, .setdefault, .update and .values. These methods could be added to increase substitutability without affecting existing code. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=520382&group_id=5470 From noreply@sourceforge.net Wed Feb 20 13:56:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 05:56:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-516299 ] urlparse can get fragments wrong Message-ID: Bugs item #516299, was opened at 2002-02-11 20:10 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: A.M. Kuchling (akuchling) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: urlparse can get fragments wrong Initial Comment: urlparse.urlparse() goes wrong on a URL such as 'http://amk.ca#foo', where there's a fragment identifier and the hostname isn't followed by a slash. It returns 'amk.ca#foo' as the hostname portion of the URL. While looking at that, I realized that test_urlparse() only tests urljoin(), not urlparse() or urlunparse().
The attached patch also adds a minimal test suite for urlparse(), but it should be still more comprehensive. Unfortunately the RFC doesn't include test cases, so I haven't done this yet. (Assigned to you at random, Michael; feel free to unassign it if you lack the time.) ---------------------------------------------------------------------- Comment By: Richard Brodie (leogah) Date: 2002-02-20 05:56 Message: Logged In: YES user_id=356893 The current version of the URI specification (RFC2396) includes a regexp for parsing URIs. For evil edge cases, I usually cut and paste directly into re. Would it be an idea just to incorporate it rather than hammer the kinks out of the ad-hoc parser? If so, I'll hack on it. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-13 02:45 Message: Logged In: YES user_id=6656 Sorry, don't know *anything* about URLs and don't really have the time to learn now... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=516299&group_id=5470 From noreply@sourceforge.net Wed Feb 20 20:50:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 12:50:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Wed Feb 20 21:00:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 13:00:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-520645 ] unpickable basic types => confusing err Message-ID: Bugs item #520645, was opened at 2002-02-20 13:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520645&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: unpickable basic types => confusing err Initial Comment: E.g. 
Python 2.2 >>> f=open('c:/autoexec.bat','r') >>> w=open('c:/transit/p','w') >>> import pickle as pic >>> pic.dump(f,w) Traceback (most recent call last): <> TypeError: coercing to Unicode: need string or buffer, file found >>> import cPickle as cpic >>> cpic.dump(f,w) Traceback (most recent call last): File "", line 1, in ? File "C:\USR\PYTHON22\lib\copy_reg.py", line 56, in _reduce state = base(self) TypeError: coercing to Unicode: need string or buffer, file found VS. Python 2.1 >>> f=open('c:/autoexec.bat','r') >>> w=open('c:/transit/p','w') >>> import pickle as pic >>> pic.dump(f,w) Traceback (most recent call last): <> pickle.PicklingError: can't pickle 'file' object: >>> import cPickle as cpic >>> cpic.dump(f,w) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpickleableError: Cannot pickle objects >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520645&group_id=5470 From noreply@sourceforge.net Thu Feb 21 05:37:44 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 20 Feb 2002 21:37:44 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512497 ] multi-line print statement Message-ID: Feature Requests item #512497, was opened at 2002-02-03 14:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line print statement Initial Comment: Similar to the multi-line comment block suggestion, instead of using \ to say the line continues use print: "line 1" "line 2" ... "line n" Ok, then...thanks ---------------------------------------------------------------------- >Comment By: frobozz electric (frobozzelectric) Date: 2002-02-20 21:37 Message: Logged In: YES user_id=447750 I'm just learning Python, so I'm sorry that I didn't know that you already have print """ Large block of text """ which is what all I had in mind for print: Large block of text I still think the latter looks cleaner, but, anyway... Thanks for your time. ---------------------------------------------------------------------- Comment By: frobozz electric (frobozzelectric) Date: 2002-02-17 08:08 Message: Logged In: YES user_id=447750 Well, what I was thinking about was more for when you have a large block of text to display, likely with no variables to be evaluated. So, your example, using raise and for, would not raise an exception nor begin a for loop inside a print: block, rather, they would print stdout, i.e., print: "foo" raise "Done" would display fooraise Done print: foo\n raise Done would display foo raise Done The use of quotation marks, would likely be superfluous. I'm not sure how you could cleanly introduce variable evaluation into this type of print block. Mostly, I was just interested in being able to put several lines of text into one print block, as opposed to using \ or several print statements. Thanks ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:19 Message: Logged In: YES user_id=21627 Can you specify more precisely how this feature would work? E.g. would it be legal to write print: "foo" raise "Done" or print: for i in range(10): "bar" If so, what would be the meaning of the latter one? 
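For reference, the existing facility the submitter refers to: a single print statement combined with a triple-quoted string already emits a block of text with no backslash continuations and no repeated print statements, and string formatting still applies to the block as a whole.

name = "Python"
print """\
line 1
line 2
hello from %s""" % name

The backslash immediately after the opening quotes only suppresses a leading blank line; everything else prints exactly as written.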
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 From noreply@sourceforge.net Thu Feb 21 10:27:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 02:27:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-520904 ] Regex object finditer not documented Message-ID: Bugs item #520904, was opened at 2002-02-21 02:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520904&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Duncan Booth (duncanb) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Regex object finditer not documented Initial Comment: The finditer method of regex objects is not listed in the documentation. doc/current/lib/re-objects.html should include a description of this method. Oh, and there is another undocumented method 'scanner' which is a lot less intuitively obvious than finditer. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520904&group_id=5470 From noreply@sourceforge.net Thu Feb 21 12:39:22 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 04:39:22 -0800 Subject: [Python-bugs-list] [ python-Bugs-436131 ] freeze: global symbols not exported Message-ID: Bugs item #436131, was opened at 2001-06-25 10:26 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436131&group_id=5470 Category: Demos and Tools Group: None Status: Open Resolution: None Priority: 5 Submitted By: Charles Schwieters (chuckorama) Assigned to: Mark Hammond (mhammond) Summary: freeze: global symbols not exported Initial Comment: python-2.1 linux-2.2, others? the freeze tool does not export global symbols. As a result the frozen executable fails with unresolved symbols in shared objects. fix: include the LINKFORSHARED flag in freeze.py: *** freeze.py~ Tue Mar 20 15:43:33 2001 --- freeze.py Fri Jun 22 14:36:23 2001 *************** *** 434,440 **** somevars[key] = makevars[key] somevars['CFLAGS'] = string.join(cflags) # override ! files = ['$(OPT)', '$(LDFLAGS)', base_config_c, base_frozen_c] + \ files + supp_sources + addfiles + libs + \ ['$(MODLIBS)', '$(LIBS)', '$(SYSLIBS)'] --- 434,440 ---- somevars[key] = makevars[key] somevars['CFLAGS'] = string.join(cflags) # override ! files = ['$(OPT)', '$(LDFLAGS)', '$(LINKFORSHARED)',base_config_c, base_frozen_c] + \ files + supp_sources + addfiles + libs + \ ['$(MODLIBS)', '$(LIBS)', '$(SYSLIBS)'] ---------------------------------------------------------------------- Comment By: Jens Krinke (krinke) Date: 2002-02-21 04:39 Message: Logged In: YES user_id=345110 I think this patch will fix most of the "freeze fails" reports and requests in newsgroups. 
The bug itself is still in 2.2 and 2.1.2 :-( ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=436131&group_id=5470 From noreply@sourceforge.net Thu Feb 21 13:27:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 05:27:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-520959 ] ref.pdf dictionary display doc error Message-ID: Bugs item #520959, was opened at 2002-02-21 05:27 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520959&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Ray Foulkes (rfoulkes) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: ref.pdf dictionary display doc error Initial Comment: In Python Reference Manual Release 2.2 dated december 21 2001, ref.pdf in pdf-a4-2.2.zip I think 5.2.5 Dictionary displays A dictionary display is a possibly empty series of key/datum pairs enclosed in curly braces: dict display ::= "" [key datum list] "" key datum list ::= key datum ("," key datum)* [","] key datum ::= expression ":" expression Should read 5.2.5 Dictionary displays A dictionary display is a possibly empty series of key/datum pairs enclosed in curly braces: dict display ::= "{" [key datum list] "}" key datum list ::= key datum ("," key datum)* [","] key datum ::= expression ":" expression i.e. the curly braces vanished when looking at it using Acrobat reader 5.0 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520959&group_id=5470 From noreply@sourceforge.net Thu Feb 21 17:51:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 09:51:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 09:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 
2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Thu Feb 21 18:02:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 10:02:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-507713 ] mem leak in imaplib Message-ID: Bugs item #507713, was opened at 2002-01-23 13:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Scott Blomquist (scottdb) Assigned to: Piers Lauder (pierslauder) Summary: mem leak in imaplib Initial Comment: When run in a multithreaded environment, the imaplib will leak memory if not run with the -O option. Long running, multithreaded programs that we have which use the imaplib will run fine for an undefined period of time, then suddenly start to grow in size until they take as much mem as the system will give to them. Once they start to grow, they continue to grow at a pretty consistent rate. Specifically: If the -O option is not used, in the _log method starting on line 1024 in the imaplib class, the imaplib keeps the last 10 commands that are sent.

def _log(line):
    # Keep log of last `_cmd_log_len' interactions for debugging.
    if len(_cmd_log) == _cmd_log_len:
        del _cmd_log[0]
    _cmd_log.append((time.time(), line))

Unfortunately, in a multithreaded environment, eventually the len of the list will become larger than the _cmd_log_len, and since the test is for equality, rather than greater-than-or-equal-to, once the len of the _cmd_log gets larger than _cmd_log_len, nothing will ever be removed from the _cmd_log, and the list will grow without bound. ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-21 10:02 Message: Logged In: YES user_id=3066 No objection here, but don't hold back on my account!
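The fix the follow-ups below converge on has roughly this shape: append first, then truncate with a slice, so the log stays bounded even if a thread switch between the length test and the append has already let it grow past the limit. This is only a sketch with module-level names modelled on the report, not imaplib itself.

import time

_cmd_log = []
_cmd_log_len = 10

def _log_fixed(line):
    # truncating with a slice, instead of testing for exact equality,
    # keeps at most the last _cmd_log_len entries no matter how long the
    # list has already become
    _cmd_log.append((time.time(), line))
    del _cmd_log[:-_cmd_log_len]

for i in range(25):
    _log_fixed('cmd %d' % i)
print len(_cmd_log)    # 10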
---------------------------------------------------------------------- Comment By: Piers Lauder (pierslauder) Date: 2002-02-17 16:16 Message: Logged In: YES user_id=196212 On further consideration, I think that the command logging should be done with a circular buffer implemented using a dictionary - it has two advantages: it's immune to thread contentions, and it's twice as fast as a truncating list. I'm also convinced these routines should be per IMAP4 instance - ie: per socket - so i've moved them into the class. I've attached a diff with the current CVS If noone disagrees, I'll make these changes. ---------------------------------------------------------------------- Comment By: Piers Lauder (pierslauder) Date: 2002-02-16 02:39 Message: Logged In: YES user_id=196212 I aggree that the line: if len(_cmd_log) == _cmd_log_len: should be changed, though I favour the form: while len(_cmd_log) >= _cmd_log_len: del _cmd_log[0] rather than the version suggested in the patch: if len(_cmd_log) > _cmd_log_len: del _cmd_log[:-_cmd_log_len] However, if imaplib is gpoing to be used by multiple threads, perhaps the best solution is to move these debugging routines entirely into the IMAP4 class, so that the logs are per-connection, rather than global? ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:09 Message: Logged In: YES user_id=3066 It looks like the problem still exists in Python 2.1.2, 2.2, and CVS. I've attached a patch that I think solves this problem, but this isn't easy for me to test. Please check this. Assigning to Piers Lauder since he knows more about this module than I do. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 From noreply@sourceforge.net Fri Feb 22 01:20:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 17:20:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-507713 ] mem leak in imaplib Message-ID: Bugs item #507713, was opened at 2002-01-23 13:28 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 Category: Python Library Group: Python 2.1.1 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Scott Blomquist (scottdb) Assigned to: Piers Lauder (pierslauder) Summary: mem leak in imaplib Initial Comment: When run in a multithreaded environment, the imaplib will leak memory if not run with the -O option. Long running, multithreaded programs that we have which use the imaplib will run fine for a undefined period of time, then suddenly start to grow in size until they take as much mem as the system will give to them. Once they start to grow, they continue to grow at a pretty consistent rate. Specifically: If the -O option is not used, in the _log method starting on line 1024 in the imaplib class, the imaplib keeps the last 10 commands that are sent. def _log(line): # Keep log of last `_cmd_log_len' interactions for debugging. if len(_cmd_log) == _cmd_log_len: del _cmd_log[0] _cmd_log.append((time.time(), line)) Unfortunately, in a multithreaded environment, eventually the len of the list will become larger than the _cmd_log_len, and since the test is for equality, rather than greater-than-equal-to, once the len of the _cmd_log gets larger than _cmd_log_len, nothing will ever be removed from the _cmd_log, and the list will grow without bound. 
We added the following to test this hypothesis, we created a basic test which creates 40 threads. These threads sit in while 1 loops and create an imaplib and then issue the logout command. We also added the following debug to the method above: if len(_cmd_log) > 10: print 'command log len is:', len(_cmd_log) We started the test, which ran fine, without leaking, for about 10 minutes, and without printing anything out. Somewhere around ten minutes, the process started to grow in size rapidly, and at the same time, the debug started printing out, and the size of the _cmd_log list did indeed grow very large, very fast. We repeated the test and the same symptoms occured, this time after only 5 minutes. ---------------------------------------------------------------------- >Comment By: Piers Lauder (pierslauder) Date: 2002-02-21 17:20 Message: Logged In: YES user_id=196212 command logging routines moved into IMAP4 class ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-21 10:02 Message: Logged In: YES user_id=3066 No objection here, but don't hold back on my account! ---------------------------------------------------------------------- Comment By: Piers Lauder (pierslauder) Date: 2002-02-17 16:16 Message: Logged In: YES user_id=196212 On further consideration, I think that the command logging should be done with a circular buffer implemented using a dictionary - it has two advantages: it's immune to thread contentions, and it's twice as fast as a truncating list. I'm also convinced these routines should be per IMAP4 instance - ie: per socket - so i've moved them into the class. I've attached a diff with the current CVS If noone disagrees, I'll make these changes. ---------------------------------------------------------------------- Comment By: Piers Lauder (pierslauder) Date: 2002-02-16 02:39 Message: Logged In: YES user_id=196212 I aggree that the line: if len(_cmd_log) == _cmd_log_len: should be changed, though I favour the form: while len(_cmd_log) >= _cmd_log_len: del _cmd_log[0] rather than the version suggested in the patch: if len(_cmd_log) > _cmd_log_len: del _cmd_log[:-_cmd_log_len] However, if imaplib is gpoing to be used by multiple threads, perhaps the best solution is to move these debugging routines entirely into the IMAP4 class, so that the logs are per-connection, rather than global? ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-14 22:09 Message: Logged In: YES user_id=3066 It looks like the problem still exists in Python 2.1.2, 2.2, and CVS. I've attached a patch that I think solves this problem, but this isn't easy for me to test. Please check this. Assigning to Piers Lauder since he knows more about this module than I do. 
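For reference, a minimal module-level sketch of the truncation form Piers favours above; the committed fix went further and moved the command-logging routines into the IMAP4 class:

    import time

    _cmd_log_len = 10
    _cmd_log = []

    def _log(line):
        # Keep log of last `_cmd_log_len' interactions for debugging.
        # Truncating with >= instead of testing == means the list can never
        # grow without bound, even if racing threads briefly push it past
        # the limit.
        while len(_cmd_log) >= _cmd_log_len:
            del _cmd_log[0]
        _cmd_log.append((time.time(), line))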
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=507713&group_id=5470 From noreply@sourceforge.net Fri Feb 22 01:33:28 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 17:33:28 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- >Comment By: Samuele Pedroni (pedronis) Date: 2002-02-21 17:33 Message: Logged In: YES user_id=61408 some slots more like attrs illustrative python code ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 09:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. 
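Under reading (1) above, the class hooks into the copy_reg machinery itself. A minimal sketch of one such workaround for the test case, assuming (as in 2.2) that the default reduction consults __getstate__/__setstate__ when they are defined; this is an illustration only, not the eventual fix:

    import pickle

    class Test(object):
        __slots__ = ['x']

        def __init__(self, x=66666):
            self.x = x

        def __getstate__(self):
            # Gather whichever slots currently have values.
            state = {}
            for name in self.__slots__:
                if hasattr(self, name):
                    state[name] = getattr(self, name)
            return state

        def __setstate__(self, state):
            for name, value in state.items():
                setattr(self, name, value)

    test = pickle.loads(pickle.dumps(Test()))
    print test.x    # 66666 instead of an AttributeError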
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Fri Feb 22 01:36:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 21 Feb 2002 17:36:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-521270 ] SMTP does not handle UNICODE Message-ID: Bugs item #521270, was opened at 2002-02-21 17:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Noah Spurrier (noah) Assigned to: Nobody/Anonymous (nobody) Summary: SMTP does not handle UNICODE Initial Comment: The SMTP library does not gracefully handle strings. This type of string is frequently returned from a databases and particulary when working with COM objects. For example, we pull email TO addresses and messages from from a database. We would like to call: server.sendmail(FROM, TO, message) instead we have to do this: server.sendmail(FROM, str(TO), str(message)) >From a users point of view it is easy to get around this by putting str() around every string before calling STMP methods, but I think it would make more sense for SMTP to convert them or gracefully handle them. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 From noreply@sourceforge.net Fri Feb 22 13:15:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 05:15:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-521448 ] Undocumented Py_InitModule Message-ID: Bugs item #521448, was opened at 2002-02-22 05:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521448&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jesús Cea Avión (jcea) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Undocumented Py_InitModule Initial Comment: Python 2.2 docs. Function "Py_InitModule" is not documented in section 5.3. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521448&group_id=5470 From noreply@sourceforge.net Fri Feb 22 13:17:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 05:17:19 -0800 Subject: [Python-bugs-list] [ python-Feature Requests-512497 ] multi-line print statement Message-ID: Feature Requests item #512497, was opened at 2002-02-03 14:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 Category: None Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: frobozz electric (frobozzelectric) Assigned to: Nobody/Anonymous (nobody) Summary: multi-line print statement Initial Comment: Similar to the multi-line comment block suggestion, instead of using \ to say the line continues use print: "line 1" "line 2" ... "line n" Ok, then...thanks ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-22 05:17 Message: Logged In: YES user_id=21627 I take it that you no longer request that feature? Closing it. 
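For completeness, the existing spellings that already cover the request, as the submitter acknowledges in the following comment:

    # Triple-quoted strings print a multi-line block without any
    # continuation characters (the newlines are part of the literal):
    print """line 1
line 2
line 3"""

    # or build the block up and join it:
    print "\n".join(["line 1", "line 2", "line 3"])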
---------------------------------------------------------------------- Comment By: frobozz electric (frobozzelectric) Date: 2002-02-20 21:37 Message: Logged In: YES user_id=447750 I'm just learning Python, so I'm sorry that I didn't know that you already have print """ Large block of text """ which is what all I had in mind for print: Large block of text I still think the latter looks cleaner, but, anyway... Thanks for your time. ---------------------------------------------------------------------- Comment By: frobozz electric (frobozzelectric) Date: 2002-02-17 08:08 Message: Logged In: YES user_id=447750 Well, what I was thinking about was more for when you have a large block of text to display, likely with no variables to be evaluated. So, your example, using raise and for, would not raise an exception nor begin a for loop inside a print: block, rather, they would print stdout, i.e., print: "foo" raise "Done" would display fooraise Done print: foo\n raise Done would display foo raise Done The use of quotation marks, would likely be superfluous. I'm not sure how you could cleanly introduce variable evaluation into this type of print block. Mostly, I was just interested in being able to put several lines of text into one print block, as opposed to using \ or several print statements. Thanks ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-16 16:19 Message: Logged In: YES user_id=21627 Can you specify more precisely how this feature would work? E.g. would it be legal to write print: "foo" raise "Done" or print: for i in range(10): "bar" If so, what would be the meaning of the latter one? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=355470&aid=512497&group_id=5470 From noreply@sourceforge.net Fri Feb 22 13:20:09 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 05:20:09 -0800 Subject: [Python-bugs-list] [ python-Bugs-521448 ] Undocumented Py_InitModule Message-ID: Bugs item #521448, was opened at 2002-02-22 05:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521448&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jesús Cea Avión (jcea) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Undocumented Py_InitModule Initial Comment: Python 2.2 docs. Function "Py_InitModule" is not documented in section 5.3. ---------------------------------------------------------------------- >Comment By: Jesús Cea Avión (jcea) Date: 2002-02-22 05:20 Message: Logged In: YES user_id=97460 I'm talking about "Python/C API Reference Manual". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521448&group_id=5470 From noreply@sourceforge.net Fri Feb 22 13:25:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 05:25:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-521450 ] Trivial Misspelling Message-ID: Bugs item #521450, was opened at 2002-02-22 05:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521450&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Jesús Cea Avión (jcea) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: Trivial Misspelling Initial Comment: "Python Reference Manual", release 2.2 In page 51 ("the 'try' statement"), near the end of the page, we have: "The reason is a problem with the current implementation - THSI restriction may be lifted in the future". We must correct "thsi" to "this". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521450&group_id=5470 From noreply@sourceforge.net Fri Feb 22 14:05:25 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 06:05:25 -0800 Subject: [Python-bugs-list] [ python-Bugs-495693 ] urllib doesn't support passive FTP Message-ID: Bugs item #495693, was opened at 2001-12-20 16:51 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495693&group_id=5470 Category: Python Library Group: Python 2.2.1 candidate >Status: Closed Resolution: Fixed Priority: 5 Submitted By: Matthias Klose (doko) >Assigned to: Michael Hudson (mwh) Summary: urllib doesn't support passive FTP Initial Comment: [please CC 40981@bugs.debian.org on replies; complete report can be found at http://bugs.debian.org/40981] urllib.urlopen/urlretrieve doesn't support passive FTP urllib doesn't support passive FTP, even though the underlying ftplib module does. I dunno what the right approach is (perhaps a urllib module global variable). I know some tools (I'm aware of at least ncftp and wget) autodetect whether PASV is supported by FTP servers; perhaps that intelligence could be added to ftplib. (Also: the FTP class's set_pasv() method isn't documented in my version of python-docs; I haven't checked the new 1.5.2 docs yet however.) At the moment, I'm using this ugly hack to get around it: # Really ugly hack; don't try this at home: def ftpwrapper_init(self): import ftplib self.busy = 0 self.ftp = ftplib.FTP() self.ftp.set_pasv(1) self.ftp.connect(self.host, self.port) self.ftp.login(self.user, self.passwd) for dir in self.dirs: self.ftp.cwd(dir) urllib.ftpwrapper.init = ftpwrapper_init # End really ugly hack ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-22 06:05 Message: Logged In: YES user_id=6656 This was ported to the branch some time ago (by me, in fact). ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-27 22:05 Message: Logged In: YES user_id=6380 For a more configurable urllib, try urllib2. If that doesn't what you want, please submit a new bug report or feature request. I'm leaving this open only because the bugfix is a 2.2.1 candidate; the problem is fixed in CVS. ---------------------------------------------------------------------- Comment By: Matthias Klose (doko) Date: 2001-12-27 09:54 Message: Logged In: YES user_id=60903 [Martin, I'll summarize in the Debian BTS, typo in bug number, it's #40891] The original report (as I read it),wanted to have a configurable urllib.ftpwrapper. So probably adding another argument "mode" to ftpwrapper.__init__ ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-23 05:56 Message: Logged In: YES user_id=6380 Right now I have this listed as a bug, with fix, in python.org/2.2/bugs.html. When we get more, I agree that MoinMoin would be a good idea. I've checked in the fix on the trunk. This is definitely a 2.2.1 release candidate. 
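Until the 2.2.1 fix is available, the per-connection toggle that ftplib already exposes (the set_pasv() method mentioned in the report) avoids monkey-patching urllib; the host name below is only a placeholder:

    import ftplib

    ftp = ftplib.FTP()
    ftp.connect('ftp.example.com')   # placeholder host
    ftp.login()                      # anonymous login
    ftp.set_pasv(1)                  # use PASV for data connections
    ftp.retrlines('LIST')
    ftp.quit()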
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-23 03:16 Message: Logged In: YES user_id=21627 Yes, that is quite unfortunate, and an error. In itojun's original patch, there was still self.passiveserver=0 in the context (it was against 1.46). That patch did not apply after your changes (in 1.48 and 1.52) anymore, so I asked him to regenerate the patches, but neither of us noticed that particular change. I'll attach the obvious change below; perhaps we should revive the MoinMoin pages to distribute hotfixes? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-22 07:08 Message: Logged In: YES user_id=6380 Martin, in ftplib.py, there's a self.passiveserver = 0" in the connect method that overrides the default "passiveserver = 1" at the class level. This was introduced in rev. 1.54 when you integrated IPV6 support. Shouldn't this be taken out? Rev 1.48 announces "default to passive mode". The IPV6 patch must have broken this. (I'm sorry I didn't look at this before the release; this is an unfortunate glitch in 2.2!) ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-12-22 06:51 Message: Logged In: YES user_id=21627 Is the debian bug number correct? The URL gives "An error occurred. Dammit. Error was: Couldn't get bug status: No such file or directory." Also, CC'ing the Debian BTS is not easy through SF, would it be feasible that you forward all comments to the BTS yourself? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=495693&group_id=5470 From noreply@sourceforge.net Fri Feb 22 14:52:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 06:52:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 06:52 Message: Logged In: YES user_id=459565 Samuele's sltattr.py is an interesting approach, though I am not entirely sure it is necessary or feasible sufficiently address the significant problems with slots via proxying __dict__ (see #5 below). 
Here is a mostly complete list of smaller changes that are somewhat orthogonal to how we address accesses to __dict__: 1) Flatten slot lists: Change obj.__class__.__slots__ to return an immutable list of all slot descriptors in the object (including all those of base classes). The motivation for this is similar in spirit to storing a flattened __mro__. The advantages of this change are: a) allows for fast and explicit object reflection that correctly finds all dict attributes, all slot attributes. b) allows reflection implementations (like vars (object) and pickle) to treat dict and slot attrs differently if we choose not to proxy __dict__. This has several advantages, as explained in change #2. Also importantly, this way it is not possible to "lose" descriptors permanently by deleting them from obj.__class__.__dict__. 2) Update reflection API even if we do not choose to proxy __dict__: Alter vars(object) to return a dictionary of all attributes, including both the contents of the non-proxied __dict__ and the valid attributes that result from iterating over __slots__ and evaluating the descriptors. The details of how this is best implemented depend on how we wish to define the behavior of modifying the resulting dictionary. It could be either: a) explicitly immutable, which involves creating proxy objects b) mutable, which involves copying c) undefined, which means implicitly immutable Aside from the questions over the nature of the return type, this implementation (coupled with #1) has distinct advantages. Specifically the native object.__dict__ has a very natural internal representation that pairs attribute names directly with values. In contrast, a fair amount of additional work is needed to extract the slots that store values and create a dictionary of their names and values. Other implementations will require a great deal more work since they would have to traverse though base classes to collecting slot descriptors. 3) Flatten slot inheritance: Update the new-style object inheritance mechanism to re-use slots of the same name, rather than creating a new slot and hiding the old. This makes the inheritance semantics of slots equivalent to those of normal instance attributes and avoids introducing an ad-hoc and obscure method of data hiding. 4) Update standard library to use new reflection API (and make them robust to properies at the same time) if we choose not to proxy __dict__. Virtually all of the changes are simple and involve updating these constructs: a) obj.__dict__ b) obj.__dict__[blah] c) obj.__dict__[blah] = x (What these will become depends on other factors, including the context and semantics of vars(obj).) Here is a fairly complete list of Python 2.2 modules that will need to be updated: copy, copy_reg, inspect, pickle, pydoc, cPickle, Bastion, codeop, dis, doctest, gettext, ihooks, imputil, knee, pdb, profile, rexec, rlcompleter, tempfile, unittest, xmllib, xmlrpclib 5) (NB: potentially controversial and not required) We could alter the descriptor protocol to make slots (and properties) more transparent when the values they reference do not exist. Here is an example to illustrate this: class A(object): foo = 1 class B(A): __slots__ = ('foo',) b = B() print b.foo > 1 or AttributeError? Currently an AttributeError is raised. 
However, it is a fairly easy change to make AttributeErrors signal that attribute resolution is to continue until either a valid descriptor is evaluated, an instance-attribute is found, or until the resolution fails after search the meta-type, the type and the instance dictionary. The problem illustrated by the above code also occurs when trying to create proxies for __dict__, if the proxy worked on the basis of the collected slot descriptors (__allslots__ in Samuele's example). I am prepared to submit patches to address each of these issues. However, I do want feedback beforehand, so that I do not waste time implementing something that will never be accepted. ---------------------------------------------------------------------- Comment By: Samuele Pedroni (pedronis) Date: 2002-02-21 17:33 Message: Logged In: YES user_id=61408 some slots more like attrs illustrative python code ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 09:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Fri Feb 22 15:03:44 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 07:03:44 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 12:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... 
see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 07:03 Message: Logged In: YES user_id=459565 Oops. Please ignore the last paragraph of point #5. Samuele's __allslots__ is fine with regard to the example I presented. ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 06:52 Message: Logged In: YES user_id=459565 Samuele's sltattr.py is an interesting approach, though I am not entirely sure it is necessary or feasible sufficiently address the significant problems with slots via proxying __dict__ (see #5 below). Here is a mostly complete list of smaller changes that are somewhat orthogonal to how we address accesses to __dict__: 1) Flatten slot lists: Change obj.__class__.__slots__ to return an immutable list of all slot descriptors in the object (including all those of base classes). The motivation for this is similar in spirit to storing a flattened __mro__. The advantages of this change are: a) allows for fast and explicit object reflection that correctly finds all dict attributes, all slot attributes. b) allows reflection implementations (like vars (object) and pickle) to treat dict and slot attrs differently if we choose not to proxy __dict__. This has several advantages, as explained in change #2. Also importantly, this way it is not possible to "lose" descriptors permanently by deleting them from obj.__class__.__dict__. 2) Update reflection API even if we do not choose to proxy __dict__: Alter vars(object) to return a dictionary of all attributes, including both the contents of the non-proxied __dict__ and the valid attributes that result from iterating over __slots__ and evaluating the descriptors. The details of how this is best implemented depend on how we wish to define the behavior of modifying the resulting dictionary. It could be either: a) explicitly immutable, which involves creating proxy objects b) mutable, which involves copying c) undefined, which means implicitly immutable Aside from the questions over the nature of the return type, this implementation (coupled with #1) has distinct advantages. Specifically the native object.__dict__ has a very natural internal representation that pairs attribute names directly with values. In contrast, a fair amount of additional work is needed to extract the slots that store values and create a dictionary of their names and values. Other implementations will require a great deal more work since they would have to traverse though base classes to collecting slot descriptors. 3) Flatten slot inheritance: Update the new-style object inheritance mechanism to re-use slots of the same name, rather than creating a new slot and hiding the old. This makes the inheritance semantics of slots equivalent to those of normal instance attributes and avoids introducing an ad-hoc and obscure method of data hiding. 4) Update standard library to use new reflection API (and make them robust to properies at the same time) if we choose not to proxy __dict__. Virtually all of the changes are simple and involve updating these constructs: a) obj.__dict__ b) obj.__dict__[blah] c) obj.__dict__[blah] = x (What these will become depends on other factors, including the context and semantics of vars(obj).) 
Here is a fairly complete list of Python 2.2 modules that will need to be updated: copy, copy_reg, inspect, pickle, pydoc, cPickle, Bastion, codeop, dis, doctest, gettext, ihooks, imputil, knee, pdb, profile, rexec, rlcompleter, tempfile, unittest, xmllib, xmlrpclib 5) (NB: potentially controversial and not required) We could alter the descriptor protocol to make slots (and properties) more transparent when the values they reference do not exist. Here is an example to illustrate this: class A(object): foo = 1 class B(A): __slots__ = ('foo',) b = B() print b.foo > 1 or AttributeError? Currently an AttributeError is raised. However, it is a fairly easy change to make AttributeErrors signal that attribute resolution is to continue until either a valid descriptor is evaluated, an instance-attribute is found, or until the resolution fails after search the meta-type, the type and the instance dictionary. The problem illustrated by the above code also occurs when trying to create proxies for __dict__, if the proxy worked on the basis of the collected slot descriptors (__allslots__ in Samuele's example). I am prepared to submit patches to address each of these issues. However, I do want feedback beforehand, so that I do not waste time implementing something that will never be accepted. ---------------------------------------------------------------------- Comment By: Samuele Pedroni (pedronis) Date: 2002-02-21 17:33 Message: Logged In: YES user_id=61408 some slots more like attrs illustrative python code ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 09:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Fri Feb 22 15:43:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 07:43:07 -0800 Subject: [Python-bugs-list] [ python-Bugs-521450 ] Trivial Misspelling Message-ID: Bugs item #521450, was opened at 2002-02-22 05:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521450&group_id=5470 Category: Documentation Group: Python 2.2 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jesús Cea Avión (jcea) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: Trivial Misspelling Initial Comment: "Python Reference Manual", release 2.2 In page 51 ("the 'try' statement"), near the end of the page, we have: "The reason is a problem with the current implementation - THSI restriction may be lifted in the future". We must correct "thsi" to "this". ---------------------------------------------------------------------- >Comment By: Fred L. Drake, Jr. (fdrake) Date: 2002-02-22 07:43 Message: Logged In: YES user_id=3066 Fixed on the trunk & for the 2.2 and 2.1 maintenance branches, as revisions 1.31, 1.29.8.2, and 1.24.2.3. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521450&group_id=5470 From noreply@sourceforge.net Fri Feb 22 17:11:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 09:11:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-521526 ] Problems when python is renamed Message-ID: Bugs item #521526, was opened at 2002-02-22 09:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: R. Lindsay Todd (rltodd) Assigned to: Nobody/Anonymous (nobody) Summary: Problems when python is renamed Initial Comment: I use a RedHat 7.2 system where Python 2.2 in an executable /usr/bin/python2. This causes some problems with using distutils. 1) If I say "python2 setup.py bdist_rpm" it creates an RPM spec file that uses plain "python" instead of "python2". Seems to me that this should make use of the path to the interpreter that is actually running. Fortunately this fails, so I can manually hack the spec file... 2) When including scripts to be interpreted, distutils looks for the leading #! and the word "python". My scripts have the word "python2", since I want to be able to test them directly. It seems like distutils could somehow handle versioned python's, like looking for a word that begins with "python", or perhaps some other magic sequence. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 From noreply@sourceforge.net Fri Feb 22 17:19:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 09:19:54 -0800 Subject: [Python-bugs-list] [ python-Bugs-497854 ] Short-cuts missing for All Users Message-ID: Bugs item #497854, was opened at 2001-12-30 06:31 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=497854&group_id=5470 Category: Installation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Martin v. Löwis (loewis) Assigned to: Nobody/Anonymous (nobody) Summary: Short-cuts missing for All Users Initial Comment: Using the Windows installer of Python 2.2 on Windows XP Professional, as a user "root" who is member of the Administrator's group, performing an admin installation, the Python 2.2 program group does not show up in the start menu of other users. The cause for this problem is that the installer puts the shortcuts into \Documents and Settings\root\Start Menu, not into \Documents and Settings\All Users\Start Menu. Notice that it is difficult to login as Administrator on XP, since the Administrator account is not displayed on the welcome screen (only if the old-style login screen is selected). 
Even if installing Python as Administrator, the shortcuts still end up in \Documents and Settings\Administrator\Start Menu. ---------------------------------------------------------------------- Comment By: R. Lindsay Todd (rltodd) Date: 2002-02-22 09:19 Message: Logged In: YES user_id=283405 This is also a problem under Windows 2000 Professional, where I am actually logged in as "Administrator" and have made sure it is a full administrative install I'm doing. Registry settings are properly made for everyone; it is just the short cuts that don't appear. I've been working around this by manually moving the program group folder to "All Users" and changing the ACLs. This should be done before installing win32all, which will create the program group under "All Users". ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=497854&group_id=5470 From noreply@sourceforge.net Fri Feb 22 17:37:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 09:37:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-521526 ] Problems when python is renamed Message-ID: Bugs item #521526, was opened at 2002-02-22 09:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: R. Lindsay Todd (rltodd) >Assigned to: M.-A. Lemburg (lemburg) Summary: Problems when python is renamed Initial Comment: I use a RedHat 7.2 system where Python 2.2 in an executable /usr/bin/python2. This causes some problems with using distutils. 1) If I say "python2 setup.py bdist_rpm" it creates an RPM spec file that uses plain "python" instead of "python2". Seems to me that this should make use of the path to the interpreter that is actually running. Fortunately this fails, so I can manually hack the spec file... 2) When including scripts to be interpreted, distutils looks for the leading #! and the word "python". My scripts have the word "python2", since I want to be able to test them directly. It seems like distutils could somehow handle versioned python's, like looking for a word that begins with "python", or perhaps some other magic sequence. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 09:37 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 From noreply@sourceforge.net Fri Feb 22 17:38:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 09:38:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-521526 ] Problems when python is renamed Message-ID: Bugs item #521526, was opened at 2002-02-22 09:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 >Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: R. Lindsay Todd (rltodd) Assigned to: M.-A. Lemburg (lemburg) Summary: Problems when python is renamed Initial Comment: I use a RedHat 7.2 system where Python 2.2 in an executable /usr/bin/python2. 
This causes some problems with using distutils. 1) If I say "python2 setup.py bdist_rpm" it creates an RPM spec file that uses plain "python" instead of "python2". Seems to me that this should make use of the path to the interpreter that is actually running. Fortunately this fails, so I can manually hack the spec file... 2) When including scripts to be interpreted, distutils looks for the leading #! and the word "python". My scripts have the word "python2", since I want to be able to test them directly. It seems like distutils could somehow handle versioned python's, like looking for a word that begins with "python", or perhaps some other magic sequence. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 09:38 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 09:37 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 From noreply@sourceforge.net Fri Feb 22 19:09:57 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 11:09:57 -0800 Subject: [Python-bugs-list] [ python-Bugs-521526 ] Problems when python is renamed Message-ID: Bugs item #521526, was opened at 2002-02-22 09:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Submitted By: R. Lindsay Todd (rltodd) Assigned to: M.-A. Lemburg (lemburg) Summary: Problems when python is renamed Initial Comment: I use a RedHat 7.2 system where Python 2.2 in an executable /usr/bin/python2. This causes some problems with using distutils. 1) If I say "python2 setup.py bdist_rpm" it creates an RPM spec file that uses plain "python" instead of "python2". Seems to me that this should make use of the path to the interpreter that is actually running. Fortunately this fails, so I can manually hack the spec file... 2) When including scripts to be interpreted, distutils looks for the leading #! and the word "python". My scripts have the word "python2", since I want to be able to test them directly. It seems like distutils could somehow handle versioned python's, like looking for a word that begins with "python", or perhaps some other magic sequence. ---------------------------------------------------------------------- >Comment By: R. Lindsay Todd (rltodd) Date: 2002-02-22 11:09 Message: Logged In: YES user_id=283405 1) Thanks. I missed that in the documentation (still do, after grepping it). I see it displayed with --help, though. Still, I found this behaviour a little surprising (that the default was not to use the python executable used to invoke setup.py. 2) r'^#!.*python[0-9.]*(\s+.*)?$' would be an improvement, and handle my case. 
Possibly even r'^#!.*python\S*(\s+.*)?$' Maybe there should instead be a magic comment of some sort to indicate that this is a python script that should have line 1 rewritten? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 09:38 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 09:37 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 From noreply@sourceforge.net Fri Feb 22 22:14:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 14:14:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-521628 ] thread_pthread.h rev 2.37 Message-ID: Bugs item #521628, was opened at 2002-02-22 14:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521628&group_id=5470 Category: Macintosh Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 8 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: thread_pthread.h rev 2.37 Initial Comment: Michael, the rev 2.37 fix for thread_pthread.h needs to be included, otherwise Python won't compile on Mac OS X. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521628&group_id=5470 From noreply@sourceforge.net Fri Feb 22 23:52:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 15:52:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-501591 ] dir() doc is old Message-ID: Bugs item #501591, was opened at 2002-01-09 18:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501591&group_id=5470 Category: Documentation Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Neal Norwitz (nnorwitz) >Assigned to: Tim Peters (tim_one) Summary: dir() doc is old Initial Comment: "Brian Quinlan" reports on c.l.p that dir() is incorrect in the library reference. The current doc string seems to be more accurate: Return an alphabetized list of names comprising (some of) the attributes of the given object, and of attributes reachable from it: No argument: the names in the current scope. Module object: the module attributes. Type or class object: its attributes, and recursively the attributes of its bases. Otherwise: its attributes, its class's attributes, and recursively the attributes of its class's base classes. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-22 15:52 Message: Logged In: YES user_id=31435 Reassigned to me, because Fred is off today and we want to get this done for 2.2.1. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501591&group_id=5470 From noreply@sourceforge.net Sat Feb 23 03:58:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 19:58:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-521706 ] Python expects __eprintf on Solaris Message-ID: Bugs item #521706, was opened at 2002-02-22 19:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521706&group_id=5470 Category: Build Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Greg Kochanski (gpk) Assigned to: Nobody/Anonymous (nobody) Summary: Python expects __eprintf on Solaris Initial Comment: ftp_up.py Traceback (most recent call last): File "/usr/local/bin/ftp_up.py", line 10, in ? import ftplib File "/usr/local/lib/python2.1/ftplib.py", line 46, in ? import socket File "/usr/local/lib/python2.1/socket.py", line 41, in ? from _socket import * ImportError: ld.so.1: /usr/local/bin/python: fatal: relocation error: file /usr/local/lib/python2.1/lib-dynload/_socket.so: symbol __eprintf: referenced symbol not found On Solaris 2.6 (current patches), Python 2.1.2 out-of-the-box install. nm *.a | grep eprintf shows nothing in /lib and /usr/lib. Presumably, the build system is expecting that function to exist, when it really doesn't. Same problem on Solaris 2.7: /usr/local/bin/python Python 2.1.2 (#1, Jan 23 2002, 10:44:53) [C] on sunos5 Type "copyright", "credits" or "license" for more information. >>> import _socket Traceback (most recent call last): File "", line 1, in ? ImportError: ld.so.1: /usr/local/bin/python: fatal: relocation error: file /usr/local/lib/python2.1/lib-dynload/_socket.so: symbol __eprintf: referenced symbol not found >>> ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521706&group_id=5470 From noreply@sourceforge.net Sat Feb 23 04:41:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 20:41:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-501591 ] dir() doc is old Message-ID: Bugs item #501591, was opened at 2002-01-09 18:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501591&group_id=5470 Category: Documentation Group: Python 2.2.1 candidate >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Neal Norwitz (nnorwitz) Assigned to: Tim Peters (tim_one) Summary: dir() doc is old Initial Comment: "Brian Quinlan" reports on c.l.p that dir() is incorrect in the library reference. The current doc string seems to be more accurate: Return an alphabetized list of names comprising (some of) the attributes of the given object, and of attributes reachable from it: No argument: the names in the current scope. Module object: the module attributes. Type or class object: its attributes, and recursively the attributes of its bases. Otherwise: its attributes, its class's attributes, and recursively the attributes of its class's base classes. 
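A quick check consistent with the doc string quoted above (Python 2.2, new-style classes):

    class Base(object):
        def inherited(self): pass

    class Derived(Base):
        def own(self): pass

    d = Derived()
    d.extra = 1

    print 'inherited' in dir(Derived)   # class: includes base-class names
    print 'own' in dir(d)               # instance: includes class attributes
    print 'extra' in dir(d)             # ...and the instance's own attributes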
---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-22 20:41 Message: Logged In: YES user_id=31435 Repaired, in Doc/lib/libfuncs.tex; new revision: 1.101 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-22 15:52 Message: Logged In: YES user_id=31435 Reassigned to me, because Fred is off today and we want to get this done for 2.2.1. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=501591&group_id=5470 From noreply@sourceforge.net Sat Feb 23 05:34:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Fri, 22 Feb 2002 21:34:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sat Feb 23 08:31:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 00:31:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 Status: Open Resolution: None >Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. 
>From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sat Feb 23 09:13:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 01:13:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-521628 ] thread_pthread.h rev 2.37 Message-ID: Bugs item #521628, was opened at 2002-02-22 14:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521628&group_id=5470 Category: Macintosh Group: Python 2.2.1 candidate >Status: Closed >Resolution: Accepted Priority: 8 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: thread_pthread.h rev 2.37 Initial Comment: Michael, the rev 2.37 fix for thread_pthread.h needs to be included, otherwise Python won't compile on Mac OS X. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-23 01:13 Message: Logged In: YES user_id=6656 OK, done. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521628&group_id=5470 From noreply@sourceforge.net Sat Feb 23 13:44:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 05:44:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-521782 ] unreliable file.read() error handling Message-ID: Bugs item #521782, was opened at 2002-02-23 05:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521782&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Marius Gedminas (mgedmin) Assigned to: Nobody/Anonymous (nobody) Summary: unreliable file.read() error handling Initial Comment: fread(3) manual page states fread and fwrite return the number of items successfully read or written (i.e., not the number of characters). If an error occurs, or the end-of-file is reached, the return value is a short item count (or zero). Python only checks ferror status when the return value is zero (Objects/fileobject.c line 550 from Python-2.1.2 sources). I agree that it is a good idea to delay exception throwing until after the user has processed the partial chunk of data returned by fread, but there are two problems with the current implementation: loss of errno and occasional loss of data. Both problems are illustrated with this scenario taken from real life: suppose the file descriptor refers to a pipe, and we set O_NONBLOCK mode with fcntl (the application was reading from multiple pipes in a select() loop and couldn't afford to block) fread(4096) returns an incomplete block and sets errno to EAGAIN chunksize != 0 so we do not check ferror() and return successfully the next time file.read() is called we reset errno and do fread(4096) again. It returns a full block (i.e. bytesread == buffersize on line 559), so we repeat the loop and call fread(0). It returns 0, of course. Now we check ferror() and find it was set (by a previous fread(4096) called maybe a century ago). The errno information is already lost, so we throw an IOError with errno=0. And also lose that 4K chunk of valuable user data. 
Regarding solutions, I can see two alternatives: - call clearerr(f->f_fp) just before fread(), where Python currently sets errno = 0; This makes sure that we do not have stale ferror() flag and errno is valid, but we might not notice some errors. That doesn't matter for EAGAIN, and for errors that occur reliably if we repeat fread() on the same stream. We might still lose data if an exception is thrown on the second or later loop iteration. - always check for ferror() immediatelly after fread(). - regarding data loss, maybe it is possible to store the errno somewhere inside the file object and delay exception throwing if we have successfully read some data (i.e. bytesread > 0). The exception could be thrown on the next call to file.read() before performing anything else. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521782&group_id=5470 From noreply@sourceforge.net Sat Feb 23 19:20:28 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 11:20:28 -0800 Subject: [Python-bugs-list] [ python-Bugs-521854 ] Different extension modules share space Message-ID: Bugs item #521854, was opened at 2002-02-23 11:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521854&group_id=5470 Category: Extension Modules Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Pearu Peterson (pearu) Assigned to: Nobody/Anonymous (nobody) Summary: Different extension modules share space Initial Comment: Hi! I have found that if two extension modules use the same third party library that defines a static variable, then this static variable is shared in both extension modules. In real applications, this can cause curious segmentation faults if both extension modules are used in the same Python script or session. Using these extension modules in separate session generates no problems whatsoever. This is observed only with Python version 2.1.1 and 2.1.2. Python 2.0 and 2.2 are free from the described symptoms. Therefore, it makes me argue that there are still bugs in 2.1.2 that are related to importing extension modules. I have prepared a small example to demonstrate all this. The example consists of 4 files: runme.py, foo.c, bar.c, and fun.c that are attached to this report. You only need to run runme.py. Here are the outputs of runme.py when used with different Python versions: $ python2.0 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=0; Doing var++ $ python2.1 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=1; Doing var++ <- note that var=1 was set in foo $ python2.2 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=0; Doing var++ These tests are performed on Debian Woody with gcc-2.95.4. I appreciate if you could suggest a fix or workaround to extension modules that are build under Python 2.1.2, of course only if possible. 
Thanks, Pearu ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521854&group_id=5470 From noreply@sourceforge.net Sat Feb 23 22:57:27 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 14:57:27 -0800 Subject: [Python-bugs-list] [ python-Bugs-521854 ] Different extension modules share space Message-ID: Bugs item #521854, was opened at 2002-02-23 11:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521854&group_id=5470 Category: Extension Modules Group: Python 2.1.2 >Status: Closed >Resolution: Works For Me Priority: 5 Submitted By: Pearu Peterson (pearu) Assigned to: Nobody/Anonymous (nobody) Summary: Different extension modules share space Initial Comment: Hi! I have found that if two extension modules use the same third party library that defines a static variable, then this static variable is shared in both extension modules. In real applications, this can cause curious segmentation faults if both extension modules are used in the same Python script or session. Using these extension modules in separate session generates no problems whatsoever. This is observed only with Python version 2.1.1 and 2.1.2. Python 2.0 and 2.2 are free from the described symptoms. Therefore, it makes me argue that there are still bugs in 2.1.2 that are related to importing extension modules. I have prepared a small example to demonstrate all this. The example consists of 4 files: runme.py, foo.c, bar.c, and fun.c that are attached to this report. You only need to run runme.py. Here are the outputs of runme.py when used with different Python versions: $ python2.0 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=0; Doing var++ $ python2.1 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=1; Doing var++ <- note that var=1 was set in foo $ python2.2 runme.py >From foo: set_var: var=0; Doing var++ >From bar: set_var: var=0; Doing var++ These tests are performed on Debian Woody with gcc-2.95.4. I appreciate if you could suggest a fix or workaround to extension modules that are build under Python 2.1.2, of course only if possible. Thanks, Pearu ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 14:57 Message: Logged In: YES user_id=21627 That's a bug in the Debian package. Debian uses the patch --- python2.1-2.1.2.orig/Python/dynload_shlib.c +++ python2.1-2.1.2/Python/dynload_shlib.c @@ -87,7 +87,7 @@ #ifdef RTLD_NOW /* RTLD_NOW: resolve externals now (i.e. core dump now if some are missing) */ - handle = dlopen(pathname, RTLD_NOW); + handle = dlopen(pathname, RTLD_NOW | RTLD_GLOBAL); #else if (Py_VerboseFlag) printf("dlopen(\%s\, %d);\n", pathname, which results in exactly this behaviour. Please report the bug to them; it works fine in the standard Python 2.1 distribution. They claim that this "solves" bug debbug:97146, and that it is a good thing to copy that strategy from Redhat. This is foolish; the use of RTLD_GLOBAL has been stopped since Python 1.5.2 precisely to avoid the problem you are now seeing, and Redhat should have never changed the Python source in that way. Any library that relies on RTLD_GLOBAL needs to be fixed (through exposure of CAPI objects); any application that relies on RTLD_GLOBAL can use sys.setdlopenflags (available since Python 2.2). 
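For an application that genuinely wants the old RTLD_GLOBAL behaviour, a short sketch of the sys.setdlopenflags() escape hatch mentioned above (Python 2.2 or later). Here foo and bar stand for the extension modules from the example, and the dl module is assumed to expose the RTLD_* constants on this platform; otherwise the numeric values from <dlfcn.h> can be substituted.

    import sys, dl

    # must be called before the affected extension modules are imported
    sys.setdlopenflags(dl.RTLD_NOW | dl.RTLD_GLOBAL)
    import foo
    import bar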
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521854&group_id=5470 From noreply@sourceforge.net Sat Feb 23 23:05:10 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 15:05:10 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 >Status: Closed >Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sat Feb 23 23:09:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 15:09:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-521706 ] Python expects __eprintf on Solaris Message-ID: Bugs item #521706, was opened at 2002-02-22 19:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521706&group_id=5470 Category: Build Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Greg Kochanski (gpk) Assigned to: Nobody/Anonymous (nobody) Summary: Python expects __eprintf on Solaris Initial Comment: ftp_up.py Traceback (most recent call last): File "/usr/local/bin/ftp_up.py", line 10, in ? import ftplib File "/usr/local/lib/python2.1/ftplib.py", line 46, in ? import socket File "/usr/local/lib/python2.1/socket.py", line 41, in ? from _socket import * ImportError: ld.so.1: /usr/local/bin/python: fatal: relocation error: file /usr/local/lib/python2.1/lib-dynload/_socket.so: symbol __eprintf: referenced symbol not found On Solaris 2.6 (current patches), Python 2.1.2 out-of-the-box install. nm *.a | grep eprintf shows nothing in /lib and /usr/lib. Presumably, the build system is expecting that function to exist, when it really doesn't. Same problem on Solaris 2.7: /usr/local/bin/python Python 2.1.2 (#1, Jan 23 2002, 10:44:53) [C] on sunos5 Type "copyright", "credits" or "license" for more information. >>> import _socket Traceback (most recent call last): File "", line 1, in ? 
ImportError: ld.so.1: /usr/local/bin/python: fatal: relocation error: file /usr/local/lib/python2.1/lib-dynload/_socket.so: symbol __eprintf: referenced symbol not found >>> ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:09 Message: Logged In: YES user_id=21627 This is not a bug in Python, but in your installation. Python does not, in itself, ever call __eprintf. Instead, certain versions of gcc emit references to this symbol when expanding the assert macro. In turn, you need to link such object files with libgcc.a. Are you certain that you have build all relevant objects with the system compiler (including, for example, any OpenSSL installation)? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521706&group_id=5470 From noreply@sourceforge.net Sat Feb 23 23:11:39 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 15:11:39 -0800 Subject: [Python-bugs-list] [ python-Bugs-521270 ] SMTP does not handle UNICODE Message-ID: Bugs item #521270, was opened at 2002-02-21 17:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Noah Spurrier (noah) Assigned to: Nobody/Anonymous (nobody) Summary: SMTP does not handle UNICODE Initial Comment: The SMTP library does not gracefully handle strings. This type of string is frequently returned from a databases and particulary when working with COM objects. For example, we pull email TO addresses and messages from from a database. We would like to call: server.sendmail(FROM, TO, message) instead we have to do this: server.sendmail(FROM, str(TO), str(message)) >From a users point of view it is easy to get around this by putting str() around every string before calling STMP methods, but I think it would make more sense for SMTP to convert them or gracefully handle them. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:11 Message: Logged In: YES user_id=21627 Can you give a specific example? Please attach a script to this report which exposes the error you are seeing (using Unicode literals where necessary). Perhaps you have non-ASCII characters in your strings? Those are not supported by the SMTP protocol, so there is no way smtplib could handle them gracefully. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 From noreply@sourceforge.net Sun Feb 24 00:37:20 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 16:37:20 -0800 Subject: [Python-bugs-list] [ python-Bugs-521270 ] SMTP does not handle UNICODE Message-ID: Bugs item #521270, was opened at 2002-02-21 17:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Noah Spurrier (noah) Assigned to: Nobody/Anonymous (nobody) Summary: SMTP does not handle UNICODE Initial Comment: The SMTP library does not gracefully handle strings. This type of string is frequently returned from a databases and particulary when working with COM objects. 
For example, we pull email TO addresses and messages from from a database. We would like to call: server.sendmail(FROM, TO, message) instead we have to do this: server.sendmail(FROM, str(TO), str(message)) >From a users point of view it is easy to get around this by putting str() around every string before calling STMP methods, but I think it would make more sense for SMTP to convert them or gracefully handle them. ---------------------------------------------------------------------- >Comment By: Noah Spurrier (noah) Date: 2002-02-23 16:37 Message: Logged In: YES user_id=59261 # This is a contrived example to show what happens # when stmplib tries to swallow a unicode string. # Here the TO address is passed as unicode. Note that # this unicode string encode only regular ASCII characters # that would otherwise not be a problem for the SMPT # protocol. Where you might see this problem in the real # world would be when you pull email addresses from a # database or if you were getting data from a Windows # COM object. # # I think that in keeping with the transparent spirit of # unicode strings that it should be the responsibility # of the smtplib class to convert these strings -- # or to at least throw an exception. On the other hand, # if it is decided that smtplib should not convert unicode # then it should at least trap unicode strings and throw an # exception. Currently it treats unicode strings as regular # strings (thus lulling the programmer into a false sense # of security), but then it fails at the protocol level. # There is no run-time exception. The SMTP server just # rejects the recipient. import smtplib # For argument sake, say that this unicode string came # from a database; also Windows COM objects return unicode. ADDRESS_TO = u'null@blackhole.org' SMTP_SERVER = 'spruce.he.net' ADDRESS_FROM = 'noah@noah.org' SUBJECT = 'test' MESSAGE = "From: %s\r\nTo: %s\r\nSubject:%s\r\n\r\n" % (ADDRESS_FROM, ADDRESS_TO, SUBJECT) server = smtplib.SMTP(SMTP_SERVER) server.set_debuglevel(1) server.sendmail(ADDRESS_FROM, ADDRESS_TO, MESSAGE) server.quit() ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:11 Message: Logged In: YES user_id=21627 Can you give a specific example? Please attach a script to this report which exposes the error you are seeing (using Unicode literals where necessary). Perhaps you have non-ASCII characters in your strings? Those are not supported by the SMTP protocol, so there is no way smtplib could handle them gracefully. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 From noreply@sourceforge.net Sun Feb 24 02:07:09 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 18:07:09 -0800 Subject: [Python-bugs-list] [ python-Bugs-521937 ] email module: object instantiation fails Message-ID: Bugs item #521937, was opened at 2002-02-23 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521937&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sheila King (sheilaking) Assigned to: Nobody/Anonymous (nobody) Summary: email module: object instantiation fails Initial Comment: email module fails to instantiate an object when reading in a poorly formed message from either the message_from_file or message_from_string object. 
I have commented on this already on the python-list. I will include links to the discussions below: http://mail.python.org/pipermail/python-list/2002-February/085513.html http://mail.python.org/pipermail/python-list/2002-February/089220.html ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521937&group_id=5470 From noreply@sourceforge.net Sun Feb 24 03:15:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 19:15:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 Status: Closed Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-23 19:15 Message: Logged In: YES user_id=323435 >From the current man pages of OpenBSD and FreeBSD. It stats that the second argument of ioctl is an unsigned int. http://www.openbsd.org/cgi-bin/man.cgi?query=ioctl http://www.freebsd.org/cgi-bin/man.cgi?query=ioctl Pythons fcntl.ioctl() does not allow the second argumnet to be anything other then a C int, this does not allow required operations to be preformed with ioctl on the two BSD systems. For a practical example. On the openbsd system the /dev/pf is the direct inteface to the firewall, the only things I am able to preform on this file in python are to turn the firewall on and off. This is allowed because the ioctl un_signed ints (536888321 in base 10) that prefrom this action happen to be small enough to fit in to an int. While the ioctl unsigned int (3229893651 in base 10) for reporting the status of connections is larger then a C int and python raises an exception before calling the system ioctl call. The following is the code in question. import fcntl import struct import os fd = os.open("/dev/pf",os.O_RDWR) null = '\0'*(struct.calcsize("LLLLIIII")) x = 3229893651 null = fcntl.ioctl(fd,x,null) print struct.unpack("LLLLIIII",null) ---output--- $ sudo python ./py-pfctl.py Traceback (most recent call last): File "./py-pfctl.py", line 8, in ? null = fcntl.ioctl(fd,x,null) OverflowError: long int too large to convert to int ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sun Feb 24 06:06:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sat, 23 Feb 2002 22:06:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-521937 ] email module: object instantiation fails Message-ID: Bugs item #521937, was opened at 2002-02-23 18:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521937&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Sheila King (sheilaking) >Assigned to: Barry Warsaw (bwarsaw) Summary: email module: object instantiation fails Initial Comment: email module fails to instantiate an object when reading in a poorly formed message from either the message_from_file or message_from_string object. I have commented on this already on the python-list. I will include links to the discussions below: http://mail.python.org/pipermail/python-list/2002-February/085513.html http://mail.python.org/pipermail/python-list/2002-February/089220.html ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-23 22:06 Message: Logged In: YES user_id=31435 Assigned to Barry. Short course: trying to load an ill-formed msg can raise an exception under the email package where under earlier libraries the same input allowed making some (non-exceptional) progress. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521937&group_id=5470 From noreply@sourceforge.net Sun Feb 24 12:00:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 04:00:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-522033 ] Tkinter d/n't complain when Tcl not foun Message-ID: Bugs item #522033, was opened at 2002-02-24 04:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522033&group_id=5470 Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Submitted By: Lloyd Hugh Allen (lha2) Assigned to: Nobody/Anonymous (nobody) Summary: Tkinter d/n't complain when Tcl not foun Initial Comment: Under Windows 98, 64 meg, pentium II 300mHz: Installing ruby puts a line in the autoexec.bat saying "please use ruby tcl libraries". If Ruby is subsequently uninstalled, the autoexec.bat retains this line even though the ruby tcl directory no longer exists. This causes launching IDLE to do nothing rather than to produce an error message that "Tcl library not found" or somesuch. Under Python 2.2 #28.
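A rough sketch of the kind of start-up check the report asks for, assuming a Python 2.2-era Tkinter; the wording of the messages is illustrative only. Note that under pythonw.exe there is no console, so in practice the message would have to go to a log file or a message box rather than stderr.

    import sys

    try:
        import Tkinter
    except ImportError, e:
        sys.stderr.write("Tkinter could not be imported: %s\n" % e)
        sys.exit(1)
    try:
        root = Tkinter.Tk()     # raises TclError if the Tcl library cannot be located
    except Tkinter.TclError, e:
        sys.stderr.write("Tcl/Tk could not be initialised: %s\n" % e)
        sys.exit(1)
    root.withdraw()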
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522033&group_id=5470 From noreply@sourceforge.net Sun Feb 24 14:58:52 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 06:58:52 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 >Status: Open Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 06:58 Message: Logged In: YES user_id=21627 This won't be easy to change: If we declare the type of ioctl to be unsigned, then we break systems where it is signed (as it should be). As a work-around, try using 0xC0844413 (i.e. the hexadecimal version) as the value for the ioctl. Python will understand this as a negative value, but your system will likely still understand it as the right ioctl command. ---------------------------------------------------------------------- Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-23 19:15 Message: Logged In: YES user_id=323435 >From the current man pages of OpenBSD and FreeBSD. It stats that the second argument of ioctl is an unsigned int. http://www.openbsd.org/cgi-bin/man.cgi?query=ioctl http://www.freebsd.org/cgi-bin/man.cgi?query=ioctl Pythons fcntl.ioctl() does not allow the second argumnet to be anything other then a C int, this does not allow required operations to be preformed with ioctl on the two BSD systems. For a practical example. On the openbsd system the /dev/pf is the direct inteface to the firewall, the only things I am able to preform on this file in python are to turn the firewall on and off. This is allowed because the ioctl un_signed ints (536888321 in base 10) that prefrom this action happen to be small enough to fit in to an int. While the ioctl unsigned int (3229893651 in base 10) for reporting the status of connections is larger then a C int and python raises an exception before calling the system ioctl call. The following is the code in question. import fcntl import struct import os fd = os.open("/dev/pf",os.O_RDWR) null = '\0'*(struct.calcsize("LLLLIIII")) x = 3229893651 null = fcntl.ioctl(fd,x,null) print struct.unpack("LLLLIIII",null) ---output--- $ sudo python ./py-pfctl.py Traceback (most recent call last): File "./py-pfctl.py", line 8, in ? null = fcntl.ioctl(fd,x,null) OverflowError: long int too large to convert to int ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sun Feb 24 15:09:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 07:09:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-521270 ] SMTP does not handle UNICODE Message-ID: Bugs item #521270, was opened at 2002-02-21 17:36 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 Category: Python Library Group: Python 2.1.1 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Noah Spurrier (noah) Assigned to: Nobody/Anonymous (nobody) Summary: SMTP does not handle UNICODE Initial Comment: The SMTP library does not gracefully handle strings. This type of string is frequently returned from a databases and particulary when working with COM objects. For example, we pull email TO addresses and messages from from a database. We would like to call: server.sendmail(FROM, TO, message) instead we have to do this: server.sendmail(FROM, str(TO), str(message)) >From a users point of view it is easy to get around this by putting str() around every string before calling STMP methods, but I think it would make more sense for SMTP to convert them or gracefully handle them. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 07:09 Message: Logged In: YES user_id=21627 Thanks for the example; the problem was that it considered the Unicode string as a list of recipients. Fixed in smtplib.py 1.48. ---------------------------------------------------------------------- Comment By: Noah Spurrier (noah) Date: 2002-02-23 16:37 Message: Logged In: YES user_id=59261 # This is a contrived example to show what happens # when stmplib tries to swallow a unicode string. # Here the TO address is passed as unicode. Note that # this unicode string encode only regular ASCII characters # that would otherwise not be a problem for the SMPT # protocol. Where you might see this problem in the real # world would be when you pull email addresses from a # database or if you were getting data from a Windows # COM object. # # I think that in keeping with the transparent spirit of # unicode strings that it should be the responsibility # of the smtplib class to convert these strings -- # or to at least throw an exception. On the other hand, # if it is decided that smtplib should not convert unicode # then it should at least trap unicode strings and throw an # exception. Currently it treats unicode strings as regular # strings (thus lulling the programmer into a false sense # of security), but then it fails at the protocol level. # There is no run-time exception. The SMTP server just # rejects the recipient. import smtplib # For argument sake, say that this unicode string came # from a database; also Windows COM objects return unicode. 
ADDRESS_TO = u'null@blackhole.org' SMTP_SERVER = 'spruce.he.net' ADDRESS_FROM = 'noah@noah.org' SUBJECT = 'test' MESSAGE = "From: %s\r\nTo: %s\r\nSubject:%s\r\n\r\n" % (ADDRESS_FROM, ADDRESS_TO, SUBJECT) server = smtplib.SMTP(SMTP_SERVER) server.set_debuglevel(1) server.sendmail(ADDRESS_FROM, ADDRESS_TO, MESSAGE) server.quit() ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:11 Message: Logged In: YES user_id=21627 Can you give a specific example? Please attach a script to this report which exposes the error you are seeing (using Unicode literals where necessary). Perhaps you have non-ASCII characters in your strings? Those are not supported by the SMTP protocol, so there is no way smtplib could handle them gracefully. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521270&group_id=5470 From noreply@sourceforge.net Sun Feb 24 17:07:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 09:07:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-219960 ] Problems with Tcl/Tk and non-ASCII text entry Message-ID: Bugs item #219960, was opened at 2000-10-31 13:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=219960&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 3 Submitted By: Kirill Simonov (kirill_simonov) Assigned to: M.-A. Lemburg (lemburg) Summary: Problems with Tcl/Tk and non-ASCII text entry Initial Comment: Win98, Python2.0final. 1. I can't write cyrillic letters in IDLE editor. I tried to figure, what's happened and found that Tcl has command 'encoding'. I typed in IDLE shell: >>> from Tkinter import * >>> root = Tk() >>> root.tk.call("encoding", "names") 'utf-8 identity unicode' >>> root.tk.call("encoding", "system") 'identity' But Tcl had numerous encodings in 'tcl\tcl8.3\encodings' including 'cp1251'! Then I installed Tk separately and removed tcl83.dll and tk83.dll from DLLs: >>> from Tkinter import * >>> root = Tk() >>> root.tk.call("encoding", "names") 'cp860 cp861 [.........] cp857 unicode' >>> root.tk.call("encoding", "system") 'cp1251' So, when tcl/tk dlls in Python\DLLs directory, TCL can't load all it's encodings. But this is not the end. I typed in IDLE shell: >>> print "hello " # all chars looks correctly. and got: Exception in Tkinter callback Traceback (most recent call last): File "c:\python20\lib\lib-tk\Tkinter.py", line 1287, in __call__ return apply(self.func, args) File "C:\PYTHON20\Tools\idle\PyShell.py", line 579, in enter_callback self.runit() File "C:\PYTHON20\Tools\idle\PyShell.py", line 598, in runit more = self.interp.runsource(line) File "C:\PYTHON20\Tools\idle\PyShell.py", line 183, in runsource return InteractiveInterpreter.runsource(self, source, filename) File "c:\python20\lib\code.py", line 61, in runsource code = compile_command(source, filename, symbol) File "c:\python20\lib\codeop.py", line 61, in compile_command code = compile(source, filename, symbol) UnicodeError: ASCII encoding error: ordinal not in range(128) print "[the same characters]" Then, when I pressed Enter again, i got the same error message. I stopped this by pressing C-Break. [1/2 hour later] I fix this by editing site.py: if 1: # was: if 0 # Enable to support locale aware default string encodings. 
I typed again: >>> print "hello " and got: >>> print unicode("hello ") [2 hours later] Looking sources of _tkinter.c: static Tcl_Obj* AsObj(PyObject *value) { if type(value) is StringType: return Tcl_NewStringObj(value) elif type(value) is UnicodeType: ... ... } But I read in that all Tcl functions require all strings to be passed in UTF-8. So, this code must look like: if type(value) is StringType: if TCL_Version >= 8.1: return Tcl_NewStringObj() else: return Tcl_NewStringObj(value) And when I typed: >>> print unicode("hello ").encode('utf-8') i got: hello This is the end. P.S. Sorry for my bad english, but I really want to use IDLE and Tkinter in our school, so I can't wait for somebody other writing bug report. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 09:07 Message: Logged In: YES user_id=21627 Item 1. of MAL's list becomes 'Tcl does not find its encoding directory' in Python 2.2; this is fixed with FixTk.py 1.6. Item 2. has been fixed for Python 2.2; the remaining problem was that the OutputWindow converted all unicode objects to strings first, this has been fixed with OutputWindow.py 1.6. I'm not sure which problem is supposed to be solved with item 3. in MAL's list, I believe that this change is not necessary, and may be incorrect in some cases. Item 1. of the original submitter's problems is solved with the changes to FixTk.py. As for entering non-ASCII characters in the IDLE shell, I'm not sure what to do with this. For entering non-ASCII characters in a IDLE source window, see patch http://sourceforge.net/tracker/index.php?func=detail&aid=508973&group_id=9579&atid=309579 and PEP 263. I'm inclined to recommend that IDLE should encode Unicode strings entered by the user as UTF-8 before passing them to the interpreter; most likely, any byte strings will be printed to a Tk window, in which case UTF-8 should work right. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-01-03 13:37 Message: I've changed the subject line to better reflect the cause of the error: 1. The Tcl version shipped with Python 2.0 apparently doesn't include the Tcl codec libs, but these seem to be needed by Tcl to allow entry of characters in non-ASCII environments. 2. Python's print statement should allow Unicode to be passed through to sys.stdout. 3. _tkinter should recode all 8-bit strings into Unicode under the assumption that the 8-bit strings use sys.getdefaultencoding() as encoding. ---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-12 04:17 Message: No, you are wrong! Entry and Text widget depends on TCL system encoding. If TCL can't find cyrillic encoding (cp1251) then I can't enter cyrillic characters. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2000-11-12 03:30 Message: It should be no problem that Tcl can't find its encodings. When used with Tkinter, Tcl can only expect Unicode strings, or strings in sys.getdefaultencoding() (i.e. 'ascii'). Therefore, Tk never needs any other encoding. If you want to make use of the Tcl system encoding (which is apparently not supported in Tkinter), you probably need to set the TCL_LIBRARY environment variable. 
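A hedged sketch of the TCL_LIBRARY suggestion above; the path is an assumed Windows Python 2.0 layout and must be adjusted to the local installation. The variable has to be set before Tkinter (and hence _tkinter) is imported for the first time:

    import os
    os.environ["TCL_LIBRARY"] = r"C:\Python20\tcl\tcl8.3"

    import Tkinter                               # Tcl can now locate its encodings directory
    root = Tkinter.Tk()
    print root.tk.call("encoding", "system")     # e.g. 'cp1251' instead of 'identity'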
---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-10 10:53 Message: Yes, this is a solution. But don't forget that TCL can't load it's encodings at startup. Look at FixTk.py: import sys, os, _tkinter [...] os.environ["TCL_LIBRARY"] = v But 'import _tkinter' loads _tkinter.pyd; _tkinter.pyd loads tcl83.dll; tcl83.dll tryes to load it's encodings at startup and fails, becourse TCL_LIBRARY is not defined! I can fix this: #import sys, os, _tkinter import sys, os #ver = str(_tkinter.TCL_VERSION) ver = "8.3" [...] ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2000-11-09 02:00 Message: Ok, as we've found out in discussions on python-dev, the cause for the problem is (partially) the fact that "print obj" does an implicit str(obj), so any Unicode object printed will turn out as default encoded string no matter how hard we try. To fix this, we'll need to tweak the current "print" mechanism a bit to allow Unicode to pass through to the receveiving end (sys.stdout in this case). About the problem that Tcl/tk needs UTF-8 strings: we could have _tkinter.c recode the strings for you in case sys.getdefaultencoding() returns anything other than 'ascii' or 'utf-8'. That way you can use a different default encoding in Python while Tcl/tk will always get a true UTF-8 string. Would this be a solution ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-11-03 12:49 Message: Assigned to Marc-Andre, since I have no idea what to do about this... :-( ---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-01 13:16 Message: 1. print unicode("") in IDLE don't work! The mechanics (I think) is a) print unicode_string encodes unicode string to normal string using default encoding and pass it to sys.stdout. b) sys.stdout intercepted by IDLE. IDLE sent this string to Tkinter. c) Tkinter pass this string (not unicode but cp1251!) to TCL but TCL waits for UTF-8 string!!! d) I see messy characters on screen. 2. You breaks compability! In 1.5 I can write Button(root, text="") and this works. Writing unicode("<>", 'cp1251') is UGLY and ANNOYING! TCL requires string in utf-8. All pythonian strings is sys.getdefaultencoding() encoding. So, we have to recode all strings to utf-8. 3. TCL in DLLs can't found it's encodings in tcl\tk8.3\encodings! I don't no why. So, I can't write in Tkinter.Text in russian. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2000-11-01 12:47 Message: AFAIK, the _tkinter.c code automatically converts Unicode to UTF-8 and then passes this to Tcl/Tk. So basically the folloing should get you correct results... print unicode("hello ", "cp1251") Alternatively, you can set your default encoding to "cp1251" in the way your describe and then write: print unicode("hello ") I am not too familiar with Tcl/Tk, so I can't judge whether trying to recode normal 8-bit into UTF-8 is a good idea in general for the _tkinter.c interface. It would easily be possible using: utf8 = string.encode('utf-8') since 8-bit support the .encode() method too. 
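A minimal sketch of the recoding idea above; the sample bytes are assumed to be the cp1251 encoding of a short Russian greeting. The byte string is decoded explicitly, after which _tkinter can hand the Unicode object to Tcl (or it can be recoded to UTF-8 by hand, as suggested):

    import Tkinter

    raw = '\xcf\xf0\xe8\xe2\xe5\xf2'       # assumed cp1251 sample bytes
    text = unicode(raw, 'cp1251')          # decode with the known encoding
    utf8 = text.encode('utf-8')            # the explicit recoding mentioned above

    root = Tkinter.Tk()
    Tkinter.Label(root, text=text).pack()  # _tkinter passes Unicode to Tcl as UTF-8
    root.mainloop()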
---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2000-11-01 08:00 Message: I am not entirely sure what the bug is, though I agree that it can be confusing to deal with Unicode strings. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=219960&group_id=5470 From noreply@sourceforge.net Sun Feb 24 17:17:10 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 09:17:10 -0800 Subject: [Python-bugs-list] [ python-Bugs-431899 ] tkfileDialog on NT makes float fr specif Message-ID: Bugs item #431899, was opened at 2001-06-10 13:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431899&group_id=5470 Category: Tkinter Group: Platform-specific Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: tkfileDialog on NT makes float fr specif Initial Comment: If I use the line: (Tkinter 8.3 for Python 2.0) file = tkFileDialog.askopenfilename(...) on an NT french workstation, that turn off floats using dot but comma separator for Tcl... then if your have defined a Text widget, calling self.yview('moveto', '1.0') failed with an unavailable type error: TclError: expected floating-point number but got "1.0" this appends in lib-tk\Tkinter.py line 2846 in yview self.tk.call((self._w, 'yview') + what) But the bugs in my opinion comes from Tcl tkFileDialog which activate a flag about float memory representation for tcl. The problem is that I'm unable to find the turnarround i.e. finding tcl methode to turn on US float representation. All help may be pleased. Jerry alias the foolish dracomorpheus python french fan ;-) ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 09:17 Message: Logged In: YES user_id=21627 I can't reproduce that problem. I only have a German XP installation, but it should behave similarly in these respect. I've been using Python 2.2. Can you attach a small script to this report which demonstrates the problem? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=431899&group_id=5470 From noreply@sourceforge.net Sun Feb 24 17:26:37 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 09:26:37 -0800 Subject: [Python-bugs-list] [ python-Bugs-452973 ] Tcl event loop callback woes Message-ID: Bugs item #452973, was opened at 2001-08-19 09:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=452973&group_id=5470 Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Egnor (egnor) Assigned to: Nobody/Anonymous (nobody) Summary: Tcl event loop callback woes Initial Comment: I have C code which handles I/O and needs an event dispatcher. It is running in the context of a Python GUI application, and uses the Tcl event loop, using Tcl_CreateFileHandler and friends. The C code gets callbacks from the Tcl event loop fine, but when it attempts to call into the Python application from one of these callbacks, things go wrong. - Tkinter has released the GIL and hidden the current thread state. I can work around this by re-acquiring the GIL from the callback and creating a new thread state. 
- When the callback is invoked, Tkinter's tcl_lock is held. If the Python code invoked from the callback ultimately calls some other Tkinter function, the tcl_lock is still held, and deadlock results. The only way to work around this is to use a single-threaded Python build. - If the Python code returns an error, there's no way to stop the event loop to report the error up. Tkinter's error-reporting mechanisms are inaccessible. In general, Tkinter has a lot of infrastructure for managing callbacks from the Tcl event loop. If a third party C library wants to use the same event loop, that infrastructure is unavailable, and it is very difficult to work with Python. Unfortunately, short of using threads (which have their own problems), there's no other alternative for an external C library to do I/O without blocking the GUI. I've seen several problem reports from people trying to do exactly this, though they almost never figure out all of what's going on, and nobody else ever has any good advice to offer. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 09:26 Message: Logged In: YES user_id=21627 It seems that all you need to have is access to the Tcl lock. Would that solve your problem? As for reporting the error up: This is certainly possible. Just implement a _report_exception method on your widget; define it as def _report_exception(self): raise ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-08-20 11:32 Message: Logged In: YES user_id=6380 Unassigning -- /F is a black hole. :-( ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2001-08-20 10:02 Message: Logged In: YES user_id=12800 For lack of a volunteer or better victim... er, assigning to /F ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=452973&group_id=5470 From noreply@sourceforge.net Sun Feb 24 17:31:39 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 09:31:39 -0800 Subject: [Python-bugs-list] [ python-Bugs-219960 ] Problems with Tcl/Tk and non-ASCII text entry Message-ID: Bugs item #219960, was opened at 2000-10-31 13:38 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=219960&group_id=5470 Category: Unicode Group: None Status: Open Resolution: None Priority: 3 Submitted By: Kirill Simonov (kirill_simonov) >Assigned to: Martin v. Löwis (loewis) Summary: Problems with Tcl/Tk and non-ASCII text entry Initial Comment: Win98, Python2.0final. 1. I can't write cyrillic letters in IDLE editor. I tried to figure, what's happened and found that Tcl has command 'encoding'. I typed in IDLE shell: >>> from Tkinter import * >>> root = Tk() >>> root.tk.call("encoding", "names") 'utf-8 identity unicode' >>> root.tk.call("encoding", "system") 'identity' But Tcl had numerous encodings in 'tcl\tcl8.3\encodings' including 'cp1251'! Then I installed Tk separately and removed tcl83.dll and tk83.dll from DLLs: >>> from Tkinter import * >>> root = Tk() >>> root.tk.call("encoding", "names") 'cp860 cp861 [.........] cp857 unicode' >>> root.tk.call("encoding", "system") 'cp1251' So, when tcl/tk dlls in Python\DLLs directory, TCL can't load all it's encodings. But this is not the end. I typed in IDLE shell: >>> print "hello " # all chars looks correctly. 
and got: Exception in Tkinter callback Traceback (most recent call last): File "c:\python20\lib\lib-tk\Tkinter.py", line 1287, in __call__ return apply(self.func, args) File "C:\PYTHON20\Tools\idle\PyShell.py", line 579, in enter_callback self.runit() File "C:\PYTHON20\Tools\idle\PyShell.py", line 598, in runit more = self.interp.runsource(line) File "C:\PYTHON20\Tools\idle\PyShell.py", line 183, in runsource return InteractiveInterpreter.runsource(self, source, filename) File "c:\python20\lib\code.py", line 61, in runsource code = compile_command(source, filename, symbol) File "c:\python20\lib\codeop.py", line 61, in compile_command code = compile(source, filename, symbol) UnicodeError: ASCII encoding error: ordinal not in range(128) print "[the same characters]" Then, when I pressed Enter again, i got the same error message. I stopped this by pressing C-Break. [1/2 hour later] I fix this by editing site.py: if 1: # was: if 0 # Enable to support locale aware default string encodings. I typed again: >>> print "hello " and got: >>> print unicode("hello ") [2 hours later] Looking sources of _tkinter.c: static Tcl_Obj* AsObj(PyObject *value) { if type(value) is StringType: return Tcl_NewStringObj(value) elif type(value) is UnicodeType: ... ... } But I read in that all Tcl functions require all strings to be passed in UTF-8. So, this code must look like: if type(value) is StringType: if TCL_Version >= 8.1: return Tcl_NewStringObj() else: return Tcl_NewStringObj(value) And when I typed: >>> print unicode("hello ").encode('utf-8') i got: hello This is the end. P.S. Sorry for my bad english, but I really want to use IDLE and Tkinter in our school, so I can't wait for somebody other writing bug report. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-24 09:31 Message: Logged In: YES user_id=38388 Assigned to Martin for further processing -- I know to little about Tkinter to be of any help here. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 09:07 Message: Logged In: YES user_id=21627 Item 1. of MAL's list becomes 'Tcl does not find its encoding directory' in Python 2.2; this is fixed with FixTk.py 1.6. Item 2. has been fixed for Python 2.2; the remaining problem was that the OutputWindow converted all unicode objects to strings first, this has been fixed with OutputWindow.py 1.6. I'm not sure which problem is supposed to be solved with item 3. in MAL's list, I believe that this change is not necessary, and may be incorrect in some cases. Item 1. of the original submitter's problems is solved with the changes to FixTk.py. As for entering non-ASCII characters in the IDLE shell, I'm not sure what to do with this. For entering non-ASCII characters in a IDLE source window, see patch http://sourceforge.net/tracker/index.php?func=detail&aid=508973&group_id=9579&atid=309579 and PEP 263. I'm inclined to recommend that IDLE should encode Unicode strings entered by the user as UTF-8 before passing them to the interpreter; most likely, any byte strings will be printed to a Tk window, in which case UTF-8 should work right. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2001-01-03 13:37 Message: I've changed the subject line to better reflect the cause of the error: 1. 
The Tcl version shipped with Python 2.0 apparently doesn't include the Tcl codec libs, but these seem to be needed by Tcl to allow entry of characters in non-ASCII environments. 2. Python's print statement should allow Unicode to be passed through to sys.stdout. 3. _tkinter should recode all 8-bit strings into Unicode under the assumption that the 8-bit strings use sys.getdefaultencoding() as encoding. ---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-12 04:17 Message: No, you are wrong! Entry and Text widget depends on TCL system encoding. If TCL can't find cyrillic encoding (cp1251) then I can't enter cyrillic characters. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2000-11-12 03:30 Message: It should be no problem that Tcl can't find its encodings. When used with Tkinter, Tcl can only expect Unicode strings, or strings in sys.getdefaultencoding() (i.e. 'ascii'). Therefore, Tk never needs any other encoding. If you want to make use of the Tcl system encoding (which is apparently not supported in Tkinter), you probably need to set the TCL_LIBRARY environment variable. ---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-10 10:53 Message: Yes, this is a solution. But don't forget that TCL can't load it's encodings at startup. Look at FixTk.py: import sys, os, _tkinter [...] os.environ["TCL_LIBRARY"] = v But 'import _tkinter' loads _tkinter.pyd; _tkinter.pyd loads tcl83.dll; tcl83.dll tryes to load it's encodings at startup and fails, becourse TCL_LIBRARY is not defined! I can fix this: #import sys, os, _tkinter import sys, os #ver = str(_tkinter.TCL_VERSION) ver = "8.3" [...] ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2000-11-09 02:00 Message: Ok, as we've found out in discussions on python-dev, the cause for the problem is (partially) the fact that "print obj" does an implicit str(obj), so any Unicode object printed will turn out as default encoded string no matter how hard we try. To fix this, we'll need to tweak the current "print" mechanism a bit to allow Unicode to pass through to the receveiving end (sys.stdout in this case). About the problem that Tcl/tk needs UTF-8 strings: we could have _tkinter.c recode the strings for you in case sys.getdefaultencoding() returns anything other than 'ascii' or 'utf-8'. That way you can use a different default encoding in Python while Tcl/tk will always get a true UTF-8 string. Would this be a solution ? ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-11-03 12:49 Message: Assigned to Marc-Andre, since I have no idea what to do about this... :-( ---------------------------------------------------------------------- Comment By: Kirill Simonov (kirill_simonov) Date: 2000-11-01 13:16 Message: 1. print unicode("") in IDLE don't work! The mechanics (I think) is a) print unicode_string encodes unicode string to normal string using default encoding and pass it to sys.stdout. b) sys.stdout intercepted by IDLE. IDLE sent this string to Tkinter. c) Tkinter pass this string (not unicode but cp1251!) to TCL but TCL waits for UTF-8 string!!! d) I see messy characters on screen. 2. You breaks compability! In 1.5 I can write Button(root, text="") and this works. 
Writing unicode("<>", 'cp1251') is UGLY and ANNOYING! TCL requires string in utf-8. All pythonian strings is sys.getdefaultencoding() encoding. So, we have to recode all strings to utf-8. 3. TCL in DLLs can't found it's encodings in tcl\tk8.3\encodings! I don't no why. So, I can't write in Tkinter.Text in russian. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2000-11-01 12:47 Message: AFAIK, the _tkinter.c code automatically converts Unicode to UTF-8 and then passes this to Tcl/Tk. So basically the folloing should get you correct results... print unicode("hello ", "cp1251") Alternatively, you can set your default encoding to "cp1251" in the way your describe and then write: print unicode("hello ") I am not too familiar with Tcl/Tk, so I can't judge whether trying to recode normal 8-bit into UTF-8 is a good idea in general for the _tkinter.c interface. It would easily be possible using: utf8 = string.encode('utf-8') since 8-bit support the .encode() method too. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2000-11-01 08:00 Message: I am not entirely sure what the bug is, though I agree that it can be confusing to deal with Unicode strings. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=219960&group_id=5470 From noreply@sourceforge.net Sun Feb 24 17:49:25 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 09:49:25 -0800 Subject: [Python-bugs-list] [ python-Bugs-452973 ] Tcl event loop callback woes Message-ID: Bugs item #452973, was opened at 2001-08-19 09:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=452973&group_id=5470 Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Submitted By: Dan Egnor (egnor) Assigned to: Nobody/Anonymous (nobody) Summary: Tcl event loop callback woes Initial Comment: I have C code which handles I/O and needs an event dispatcher. It is running in the context of a Python GUI application, and uses the Tcl event loop, using Tcl_CreateFileHandler and friends. The C code gets callbacks from the Tcl event loop fine, but when it attempts to call into the Python application from one of these callbacks, things go wrong. - Tkinter has released the GIL and hidden the current thread state. I can work around this by re-acquiring the GIL from the callback and creating a new thread state. - When the callback is invoked, Tkinter's tcl_lock is held. If the Python code invoked from the callback ultimately calls some other Tkinter function, the tcl_lock is still held, and deadlock results. The only way to work around this is to use a single-threaded Python build. - If the Python code returns an error, there's no way to stop the event loop to report the error up. Tkinter's error-reporting mechanisms are inaccessible. In general, Tkinter has a lot of infrastructure for managing callbacks from the Tcl event loop. If a third party C library wants to use the same event loop, that infrastructure is unavailable, and it is very difficult to work with Python. Unfortunately, short of using threads (which have their own problems), there's no other alternative for an external C library to do I/O without blocking the GUI. 
I've seen several problem reports from people trying to do exactly this, though they almost never figure out all of what's going on, and nobody else ever has any good advice to offer. ---------------------------------------------------------------------- >Comment By: Dan Egnor (egnor) Date: 2002-02-24 09:49 Message: Logged In: YES user_id=128950 I *think* external access to the tcl_lock would do it (it's been a while). There are a bunch of helper functions/macros inside the Tkinter code for handling this situation; exposing and documenting those would be ideal. As far as error reporting, I don't think that suffices. (Again, it's been a while.) The problem is that the exception is returned to the C side, which must figure out what to do with it. Specifically, the Tcl event loop should be stopped and the exception reported to whoever invoked it... but there's no direct way to stop the Tcl event loop. (Fuzzy memories here.) ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 09:26 Message: Logged In: YES user_id=21627 It seems that all you need to have is access to the Tcl lock. Would that solve your problem? As for reporting the error up: This is certainly possible. Just implement a _report_exception method on your widget; define it as

    def _report_exception(self):
        raise

---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-08-20 11:32 Message: Logged In: YES user_id=6380 Unassigning -- /F is a black hole. :-( ---------------------------------------------------------------------- Comment By: Barry Warsaw (bwarsaw) Date: 2001-08-20 10:02 Message: Logged In: YES user_id=12800 For lack of a volunteer or better victim... er, assigning to /F ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=452973&group_id=5470 From noreply@sourceforge.net Sun Feb 24 21:36:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 13:36:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 Status: Open Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be performed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without raising an error. On OpenBSD (maybe free/net also, I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-24 13:36 Message: Logged In: YES user_id=31435 Gotta love it. I don't believe ioctl is a POSIX Classic function. There's a good discussion of why the POSIX Realtime Extensions added a workalike posix_devctl() instead, in http://www.usenix.org/publications/login/standards/22.posix.html
Martin, the URL you gave is actually for fcntl, not ioctl. You can s/fcntl/ioctl/ in your URL to get the Single UNIX Specification's ioctl page, though, which also says "int". I agree OpenBSD is out of line with best current practice because of that. It appears that Jeremy must be using Python 2.2, or running on a 64-bit machine, since his line x = 3229893651 raises OverflowError on 32-bit boxes before the 2.2 release. As Martin suggests, using a hex literal instead has always been the intended way to deal with cases "like this". The situation will get a lot worse if OpenBSD is ported to a 64-bit box with sizeof(long)==8, and some yahoo actually defines an ioctl arg that requires more than 32 bits. Before then, I suggest we leave this alone (by the time it may matter for real, OpenBSD should be feeling a lot more pressure to conform to the larger open standards). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 06:58 Message: Logged In: YES user_id=21627 This won't be easy to change: If we declare the type of ioctl to be unsigned, then we break systems where it is signed (as it should be). As a work-around, try using 0xC0844413 (i.e. the hexadecimal version) as the value for the ioctl. Python will understand this as a negative value, but your system will likely still understand it as the right ioctl command. ---------------------------------------------------------------------- Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-23 19:15 Message: Logged In: YES user_id=323435 >From the current man pages of OpenBSD and FreeBSD: they state that the second argument of ioctl is an unsigned int. http://www.openbsd.org/cgi-bin/man.cgi?query=ioctl http://www.freebsd.org/cgi-bin/man.cgi?query=ioctl Python's fcntl.ioctl() does not allow the second argument to be anything other than a C int; this does not allow required operations to be performed with ioctl on the two BSD systems. For a practical example: on the OpenBSD system, /dev/pf is the direct interface to the firewall, and the only things I am able to perform on this file in Python are to turn the firewall on and off. This is allowed because the ioctl unsigned ints (536888321 in base 10) that perform this action happen to be small enough to fit into an int. The ioctl unsigned int (3229893651 in base 10) for reporting the status of connections, however, is larger than a C int, and Python raises an exception before making the system ioctl call. The following is the code in question.

    import fcntl
    import struct
    import os
    fd = os.open("/dev/pf",os.O_RDWR)
    null = '\0'*(struct.calcsize("LLLLIIII"))
    x = 3229893651
    null = fcntl.ioctl(fd,x,null)
    print struct.unpack("LLLLIIII",null)

---output---

    $ sudo python ./py-pfctl.py
    Traceback (most recent call last):
      File "./py-pfctl.py", line 8, in ?
        null = fcntl.ioctl(fd,x,null)
    OverflowError: long int too large to convert to int

---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug.
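Putting Martin's work-around together with Jeremy's script gives a minimal Python 2.x sketch of the hex-literal approach (untested here; the /dev/pf path, the "LLLLIIII" layout and the 0xC0844413 request value are taken from the comments above and are specific to OpenBSD's pf, and the constant name is only for illustration):

    import fcntl
    import os
    import struct

    # 0xC0844413 == 3229893651; a 32-bit Python 2.x build reads this hex
    # literal as the negative int -1065073645, which fcntl.ioctl() accepts
    # and the kernel reinterprets as the intended unsigned request code.
    PF_STATUS_REQUEST = 0xC0844413

    fd = os.open("/dev/pf", os.O_RDWR)
    buf = '\0' * struct.calcsize("LLLLIIII")
    buf = fcntl.ioctl(fd, PF_STATUS_REQUEST, buf)
    print struct.unpack("LLLLIIII", buf)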
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Sun Feb 24 22:23:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 14:23:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 Status: Open Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-24 14:23 Message: Logged In: YES user_id=323435 Thank you Martin the 0xC0844413 does indeed work for me, but I am working on writing a thin wrapper that will accept un_signed long ints for ioctl. (Never done C before, but I guess this is as good as any to learn) But to looking forward I have done some checking and it seams to me that all the *BSD's including BSDi use unsigned longs for ioctl. I was not able to find documentation for darwin on the web, bit I think it is safe to assume that it also takes a unsigned long for ioctl. NetBSD also have been ported to 64bit systems. NetBSD: http://www.tac.eu.org/cgi-bin/man-cgi?ioctl++NetBSD-current -- BEGIN cut and paste from a BSDi systems. $ uname -a BSD/OS xxxx.xxxxxx.com 2.1 BSDI BSD/OS 2.1 Kernel #2: Mon Jan 27 16:12:45 MST 1997 web@xxxx.xxxxxx.com:/usr/src/sys/compile/USR i386 $ man ioctl | head IOCTL(2) BSD Programmer's Manual IOCTL(2) NAME ioctl - control device SYNOPSIS #include int ioctl(int d, unsigned long request, char *argp); --END ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-24 13:36 Message: Logged In: YES user_id=31435 Gotta love it . I don't believe ioctl is a POSIX Classic function. There's a good discussion of why the POSIX Realtime Extensions added a workalike posix_devctl() instead, in http://www.usenix.org/publications/login/standards/22.posix. html Martin, the URL you gave is actually for fcntl, not ioctl. You can s/fcntl/ioctl/ in your URL to get the Single UNIX Specification's ioctl page, though, which also says "int". I agree OpenBSD is out of line with best current practice because of that. It appears that Jeremy must be using Python 2.2, or running on a 64-bit machine, since his line x = 3229893651 raises OverflowError on 32-bit boxes before the 2.2 release. As Martin suggests, using a hex literal instead has always been the intended way to deal with cases "like this". The situation will get a lot worse if OpenBSD is ported to a 64- bit box with sizeof(long)==8, and some yahoo actually defines a ioctl arg that requires more than 32 bits. 
Before then, I suggest we leave this alone (by the time it may matter for real, OpenBSD should be feeling a lot more pressure to conform to the larger open standards). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 06:58 Message: Logged In: YES user_id=21627 This won't be easy to change: If we declare the type of ioctl to be unsigned, then we break systems where it is signed (as it should be). As a work-around, try using 0xC0844413 (i.e. the hexadecimal version) as the value for the ioctl. Python will understand this as a negative value, but your system will likely still understand it as the right ioctl command. ---------------------------------------------------------------------- Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-23 19:15 Message: Logged In: YES user_id=323435 >From the current man pages of OpenBSD and FreeBSD. It stats that the second argument of ioctl is an unsigned int. http://www.openbsd.org/cgi-bin/man.cgi?query=ioctl http://www.freebsd.org/cgi-bin/man.cgi?query=ioctl Pythons fcntl.ioctl() does not allow the second argumnet to be anything other then a C int, this does not allow required operations to be preformed with ioctl on the two BSD systems. For a practical example. On the openbsd system the /dev/pf is the direct inteface to the firewall, the only things I am able to preform on this file in python are to turn the firewall on and off. This is allowed because the ioctl un_signed ints (536888321 in base 10) that prefrom this action happen to be small enough to fit in to an int. While the ioctl unsigned int (3229893651 in base 10) for reporting the status of connections is larger then a C int and python raises an exception before calling the system ioctl call. The following is the code in question. import fcntl import struct import os fd = os.open("/dev/pf",os.O_RDWR) null = '\0'*(struct.calcsize("LLLLIIII")) x = 3229893651 null = fcntl.ioctl(fd,x,null) print struct.unpack("LLLLIIII",null) ---output--- $ sudo python ./py-pfctl.py Traceback (most recent call last): File "./py-pfctl.py", line 8, in ? null = fcntl.ioctl(fd,x,null) OverflowError: long int too large to convert to int ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Mon Feb 25 01:40:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 17:40:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-522274 ] tokenizer.py STARSTAR doesn't exist Message-ID: Bugs item #522274, was opened at 2002-02-24 17:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522274&group_id=5470 Category: Parser/Compiler Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Evelyn Mitchell (efm) Assigned to: Nobody/Anonymous (nobody) Summary: tokenizer.py STARSTAR doesn't exist Initial Comment: pyChecker complains at line 774 of tokenizer.py No module attribute (STARSTAR) found The section is: if i < len(nodelist): # should be DOUBLESTAR or STAR STAR t = nodelist[i][0] if t == token.DOUBLESTAR: node = nodelist[i+1] elif t == token.STARSTAR: node = nodelist[i+2] else: raise ValueError, "unexpected token: %s" % t names.append(node[1]) flags = flags | CO_VARKEYWORDS I've verified that there is no STARSTAR in token.py. I'd patch this to be token.STAR, which does exist, but this module has no self tests or unit tests, so I wouldn't be able to know if it broke anything. My wild guess is that the intention is to refer to two STAR tokens rather than a DOUBLESTAR token (and that is because the increment is 2 rather than 1), but I think that evidence is pretty slim. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522274&group_id=5470 From noreply@sourceforge.net Mon Feb 25 04:47:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Sun, 24 Feb 2002 20:47:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-522033 ] Tkinter d/n't complain when Tcl not foun Message-ID: Bugs item #522033, was opened at 2002-02-24 04:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522033&group_id=5470 Category: Tkinter >Group: 3rd Party >Status: Closed >Resolution: Wont Fix Priority: 5 Submitted By: Lloyd Hugh Allen (lha2) >Assigned to: Tim Peters (tim_one) Summary: Tkinter d/n't complain when Tcl not foun Initial Comment: Under Windows 98, 64 meg, pentium II 300mHz: Installing ruby puts a line in the autoexec.bat saying "please use ruby tcl libraries". If Ruby is subsequently uninstalled, the autoexec.bat retains this line even though the ruby tcl directory no longer exists. This causes launching IDLE to do nothing rather than to produce an error message that "Tcl library not found" or somesuch. Under Python 2.2 #28. ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2002-02-24 20:47 Message: Logged In: YES user_id=31435 Heh. So you were serious after all. OK, if the Ruby uninstaller leaves Tcl/Tk in an unusable state, then *of course* it's Ruby's bug. Not only should their uninstaller clean up, but their installer shouldn't have been mucking with autoexec.bat to begin with. They're welcome to study the Python installer to see how to install Tcl/Tk without touching autoexec.bat, and without screwing other programs that may need a different version of Tcl/Tk. If you care about this, bring it to the Ruby developers' attention. For the rest of it, IDLE requires Tcl/Tk to bring up a window. 
If Tcl/Tk is unusable (due to other software leaving behind a damaged autoexec.bat, or for any other reason), IDLE can't bring up a window to display an error msg. So you're stuck. You would have seen an error msg had you brought up a DOS-box Python instead and tried to use Tkinter. Then Python can use the DOS box as a msg display area; IDLE doesn't have that option. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522033&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:07:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:07:04 -0800 Subject: [Python-bugs-list] [ python-Bugs-522393 ] Doesn't build on SGI Message-ID: Bugs item #522393, was opened at 2002-02-25 03:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 Category: None Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: Doesn't build on SGI Initial Comment: On the SGI I can't build the current 2.2.1 from CVS. I get an undefined error on pthread_detach in the link step for python: ld32: ERROR 33: Unresolved text symbol "pthread_detach" -- 1st referenced by libpython2.2.a(thread.o). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:15:26 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:15:26 -0800 Subject: [Python-bugs-list] [ python-Bugs-522395 ] test_descrtut fails on OSX Message-ID: Bugs item #522395, was opened at 2002-02-25 03:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_descrtut fails on OSX Initial Comment: I'm submitting to you because I have no way right now to see whether this is a general bug, something you didn't get around to, or an OSX-specific problem. test_descrtut fails on OSX. I did the staring at the two outputs already: __doc__ is in the actual output but not in the expected output.
test_descrtut ***************************************************************** Failure in example: pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted from line #30 of test.test_descrtut.__test__.tut3 Expected: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] Got: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] ***************************************************************** 1 items had failures: 1 of 13 in test.test_descrtut.__test__.tut3 ***Test Failed*** 1 failures. test test_descrtut failed -- 1 of 96 doctests failed ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:17:13 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:17:13 -0800 Subject: [Python-bugs-list] [ python-Bugs-522396 ] test_unicodedata fails on OSX Message-ID: Bugs item #522396, was opened at 2002-02-25 03:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Nobody/Anonymous (nobody) Summary: test_unicodedata fails on OSX Initial Comment: Again, I have no other working 2.2.1 platforms so I can't test whether this is an OSX specific bug or a general bug, hence I'm assigning it to the Universal 221 Scapegoat:-) test_unicodedata fails on OSX with the following report: test_unicodedata test test_unicodedata produced unexpected output: ********************************************************************** *** mismatch between line 3 of expected output and line 3 of actual output: - Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 + Methods: 11d75c5e423430d480fb8000ef1c611ecfcd44f1 ********************************************************************** ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:17:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:17:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-522396 ] test_unicodedata fails on OSX Message-ID: Bugs item #522396, was opened at 2002-02-25 03:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) >Assigned to: 
Michael Hudson (mwh) Summary: test_unicodedata fails on OSX Initial Comment: Again, I have no other working 2.2.1 platforms so I can't test whether this is an OSX specific bug or a general bug, hence I'm assigning it to the Universal 221 Scapegoat:-) test_unicodedata fails on OSX with the following report: test_unicodedata test test_unicodedata produced unexpected output: ********************************************************************** *** mismatch between line 3 of expected output and line 3 of actual output: - Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 + Methods: 11d75c5e423430d480fb8000ef1c611ecfcd44f1 ********************************************************************** ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:21:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:21:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-508779 ] Disable flat namespace on MacOS X Message-ID: Bugs item #508779, was opened at 2002-01-25 19:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 Category: Extension Modules Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 7 Submitted By: Manoj Plakal (terabaap) Assigned to: Nobody/Anonymous (nobody) Summary: Disable flat namespace on MacOS X Initial Comment: Python: v2.2 OS: MacOS X 10.1 MacOS X 10.1 introduced two forms of linking for loadable modules: flat namespace and two-level namespace. Python 2.2 is set up to use flat namespace by default on OS X for building extension modules. I believe that this is a problem since it introduces spurious run-time linking errors when loading 2 or more modules that happen to have common symbols. The Linux and Windows implementations do not allow symbols within modules to clash with each other. This behavior also goes against expectations of C extension module writers. As a reproducible example, consider two dummy modules foo (foomodule.c) and bar (barmodule.c) both of which are built with a common file baz.c that contains some data variables. With the current Python 2.2 on OS X 10.1, only one of foo or bar can be imported, but NOT BOTH, into the same interpreter session. The files can be picked up from these URLs: http://yumpee.org/python/foomodule.c http://yumpee.org/python/barmodule.c http://yumpee.org/python/baz.c http://yumpee.org/python/setup.py If I run "python setup.py build" with Python 2.2 (built from the 2.2 source tarball) and then import foo followed by bar, I get an ImportError: "Failure linking new module" (from Python/dynload_next.c). If I add a call to NSLinkEditError() to print a more detailed error message, I see that the problem is multiple definitions of the data variables in baz.c. The above example works fine with Python 2.1 on Red Hat Linux 7.2 and Python 2.2a4 on Win98. If I then edit /usr/local/lib/python2.2/Makefile and change LDSHARED and BLDSHARED to not use flat_namespace: $(CC) $(LDFLAGS) -bundle -bundle_loader /usr/local/bin/python2.2 -undefined error then the problem is solved and I can load both foo and bar without problems. 
More info and discussion is available at these URLs (also search groups.google.com for "comp.lang.python OS X import bug"): http://groups.google.com/groups?hl=en&threadm=j4sn8uu517.fsf%40informatik.hu-berlin.de&prev=/groups%3Fnum%3D25%26hl%3Den%26group%3Dcomp.lang.python%26start%3D75%26group%3Dcomp.lang.python http://mail.python.org/pipermail/pythonmac-sig/2002-January/004977.html It would be great to have this simple change be applied to Python 2.2.1. Manoj terabaap@yumpee.org ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 03:21 Message: Logged In: YES user_id=45365 I usurping this bug, but I'm not sure yet whether it's a good idea to fix this for 2.2.1, as it will break other extension modules that rely on the single flat namespace. ---------------------------------------------------------------------- Comment By: Manoj Plakal (terabaap) Date: 2002-01-25 20:25 Message: Logged In: YES user_id=150105 Another idea is to provide the option for flat or 2-level namespace when building extension modules on OS X, maybe as an extra flag passed to distutils.core.Extension or somewhere else ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:21:50 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:21:50 -0800 Subject: [Python-bugs-list] [ python-Bugs-508779 ] Disable flat namespace on MacOS X Message-ID: Bugs item #508779, was opened at 2002-01-25 19:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 Category: Extension Modules Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 7 Submitted By: Manoj Plakal (terabaap) >Assigned to: Jack Jansen (jackjansen) Summary: Disable flat namespace on MacOS X Initial Comment: Python: v2.2 OS: MacOS X 10.1 MacOS X 10.1 introduced two forms of linking for loadable modules: flat namespace and two-level namespace. Python 2.2 is set up to use flat namespace by default on OS X for building extension modules. I believe that this is a problem since it introduces spurious run-time linking errors when loading 2 or more modules that happen to have common symbols. The Linux and Windows implementations do not allow symbols within modules to clash with each other. This behavior also goes against expectations of C extension module writers. As a reproducible example, consider two dummy modules foo (foomodule.c) and bar (barmodule.c) both of which are built with a common file baz.c that contains some data variables. With the current Python 2.2 on OS X 10.1, only one of foo or bar can be imported, but NOT BOTH, into the same interpreter session. The files can be picked up from these URLs: http://yumpee.org/python/foomodule.c http://yumpee.org/python/barmodule.c http://yumpee.org/python/baz.c http://yumpee.org/python/setup.py If I run "python setup.py build" with Python 2.2 (built from the 2.2 source tarball) and then import foo followed by bar, I get an ImportError: "Failure linking new module" (from Python/dynload_next.c). If I add a call to NSLinkEditError() to print a more detailed error message, I see that the problem is multiple definitions of the data variables in baz.c. The above example works fine with Python 2.1 on Red Hat Linux 7.2 and Python 2.2a4 on Win98. 
If I then edit /usr/local/lib/python2.2/Makefile and change LDSHARED and BLDSHARED to not use flat_namespace: $(CC) $(LDFLAGS) -bundle -bundle_loader /usr/local/bin/python2.2 -undefined error then the problem is solved and I can load both foo and bar without problems. More info and discussion is available at these URLs (also search groups.google.com for "comp.lang.python OS X import bug"): http://groups.google.com/groups?hl=en&threadm=j4sn8uu517.fsf%40informatik.hu-berlin.de&prev=/groups%3Fnum%3D25%26hl%3Den%26group%3Dcomp.lang.python%26start%3D75%26group%3Dcomp.lang.python http://mail.python.org/pipermail/pythonmac-sig/2002-January/004977.html It would be great to have this simple change be applied to Python 2.2.1. Manoj terabaap@yumpee.org ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 03:21 Message: Logged In: YES user_id=45365 I usurping this bug, but I'm not sure yet whether it's a good idea to fix this for 2.2.1, as it will break other extension modules that rely on the single flat namespace. ---------------------------------------------------------------------- Comment By: Manoj Plakal (terabaap) Date: 2002-01-25 20:25 Message: Logged In: YES user_id=150105 Another idea is to provide the option for flat or 2-level namespace when building extension modules on OS X, maybe as an extra flag passed to distutils.core.Extension or somewhere else ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 From noreply@sourceforge.net Mon Feb 25 11:41:58 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 03:41:58 -0800 Subject: [Python-bugs-list] [ python-Bugs-522396 ] test_unicodedata fails on OSX Message-ID: Bugs item #522396, was opened at 2002-02-25 03:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_unicodedata fails on OSX Initial Comment: Again, I have no other working 2.2.1 platforms so I can't test whether this is an OSX specific bug or a general bug, hence I'm assigning it to the Universal 221 Scapegoat:-) test_unicodedata fails on OSX with the following report: test_unicodedata test test_unicodedata produced unexpected output: ********************************************************************** *** mismatch between line 3 of expected output and line 3 of actual output: - Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 + Methods: 11d75c5e423430d480fb8000ef1c611ecfcd44f1 ********************************************************************** ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-25 03:41 Message: Logged In: YES user_id=38388 FWIW, these are the correct values for 2.2.0 and 2.3 (CVS): Python 2.2.0: Methods: 84b72943b1d4320bc1e64a4888f7cdf62eea219a Functions: 41e1d4792185d6474a43c83ce4f593b1bdb01f8a Python 2.3: Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 Functions: 41e1d4792185d6474a43c83ce4f593b1bdb01f8a The change was caused by the fix to the UTF-8 codec in 2.3. 
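As an aside, the "Methods:"/"Functions:" lines quoted above are hex digests that fingerprint codec and unicodedata behaviour, so a codec fix shows up as a changed digest rather than a huge diff. A rough, hedged sketch of the idea in Python 2.x (this is not the actual test_unicodedata code):

    import sha

    def utf8_fingerprint():
        # Hash the UTF-8 encoding of every 16-bit code point; any change in
        # the codec's output changes the digest.
        h = sha.new()
        for i in range(0x10000):
            h.update(unichr(i).encode('utf-8'))
        return h.hexdigest()

    print 'UTF-8 fingerprint:', utf8_fingerprint()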
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 From noreply@sourceforge.net Mon Feb 25 12:35:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 04:35:04 -0800 Subject: [Python-bugs-list] [ python-Bugs-522393 ] Doesn't build on SGI Message-ID: Bugs item #522393, was opened at 2002-02-25 03:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 Category: None Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: Doesn't build on SGI Initial Comment: On the SGI I can't build the current 2.2.1 from CVS. I get an undefined error on pthread_detach in the link step for python: ld32: ERROR 33: Unresolved text symbol "pthread_detach" -- 1st referenced by libpython2.2.a(thread.o). ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:35 Message: Logged In: YES user_id=6656 OK, this is odd. Does the trunk build? Did 2.2 build? I can't easily find any branch changes that would account for this. I haven't looked very hard yet. Will do so later. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 From noreply@sourceforge.net Mon Feb 25 12:38:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 04:38:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-522395 ] test_descrtut fails on OSX Message-ID: Bugs item #522395, was opened at 2002-02-25 03:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_descrtut fails on OSX Initial Comment: I'm submitting to you because I have no way right now to see whether this is a general bug, something you didn't get around to or an OSX-specific problem. test_descrtut fails on OSX. I did the staring at the two outputs already: __doc__ is in the actual output but not in the expected output. 
test_descrtut ***************************************************************** Failure in example: pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted from line #30 of test.test_descrtut.__test__.tut3 Expected: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] Got: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] ***************************************************************** 1 items had failures: 1 of 13 in test.test_descrtut.__test__.tut3 ***Test Failed*** 1 failures. test test_descrtut failed -- 1 of 96 doctests failed ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:38 Message: Logged In: YES user_id=6656 This'll be related to the "allow unicode docstrings" issue, I'm sure. Problem probably exists on the trunk, too. Will investigate, after I've built the trunk, had lunch and given a tutorial :) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 From noreply@sourceforge.net Mon Feb 25 12:56:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 04:56:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-474836 ] Tix not included in windows distribution Message-ID: Bugs item #474836, was opened at 2001-10-25 04:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 Category: Tkinter Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Tix not included in windows distribution Initial Comment: Although there is a Tix.py available, there is no Tix support in the precomiled Python-distribution for windows. So import Tix works fine, but root = Tix.Tk() results in TclError: package not found. It is possible to circumvent this problem by installing a regular Tcl/Tk distribution (e.g. in c:\programme\tcl) and installing Tix in the regular Tcl-path (i.e. tcl\lib). Mathias ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-25 04:56 Message: Logged In: YES user_id=21627 Building Tix from sources is non-trivial, and I could not find any recent Windows binary distribution (based on Tix 8.1). So I'll attach a build of Tix 8.1.3 for Tcl/Tk 8.3, as a drop-in into the Python binary distribution. Compared to the original distribution, only tix8.1 \pkgIndex.tcl required tweaking, to tell it that tix8183.dll can be found in the DLLs subdirectory. 
Also, unless TIX_LIBRARY is set, the Tix tcl files *must* live in tcl\tix8.1, since tix8183.dll will look in TCL_LIBRARY\..\tix (among other locations). If a major Tcl release happens before Python 2.3 is released (and it is then still desirable to distribute Python with Tix), these binaries need to be regenerated. Would these instructions (unpack zip file into distribution tree) be precise enough to allow incorporation into the windows installer? ---------------------------------------------------------------------- Comment By: Mathias Palm (monos) Date: 2001-10-29 03:53 Message: Logged In: YES user_id=361926 As mentioned in the mail above (by me, Mathias), Tix is a package belonging to Tcl/Tk (to be found on sourceforge: tix.sourceforge.net, or via the Python home page - tkinter link). Everything needed can be found there, just read about it (and dont forget about the winking, eyes might be getting dry) Mathias ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-25 11:26 Message: Logged In: YES user_id=31435 I don't know anything about Tix, so if somebody wants this in the Windows installer, they're going to have to explain exactly (by which I mean exactly <0.5 wink>) what's needed. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 From noreply@sourceforge.net Mon Feb 25 12:57:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 04:57:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-474836 ] Tix not included in windows distribution Message-ID: Bugs item #474836, was opened at 2001-10-25 04:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 Category: Tkinter Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) Assigned to: Nobody/Anonymous (nobody) Summary: Tix not included in windows distribution Initial Comment: Although there is a Tix.py available, there is no Tix support in the precomiled Python-distribution for windows. So import Tix works fine, but root = Tix.Tk() results in TclError: package not found. It is possible to circumvent this problem by installing a regular Tcl/Tk distribution (e.g. in c:\programme\tcl) and installing Tix in the regular Tcl-path (i.e. tcl\lib). Mathias ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-25 04:57 Message: Logged In: YES user_id=21627 The zip file is slightly too large for SF, so it is now at http://www.informatik.hu- berlin.de/~loewis/python/tix813win.zip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-25 04:56 Message: Logged In: YES user_id=21627 Building Tix from sources is non-trivial, and I could not find any recent Windows binary distribution (based on Tix 8.1). So I'll attach a build of Tix 8.1.3 for Tcl/Tk 8.3, as a drop-in into the Python binary distribution. Compared to the original distribution, only tix8.1 \pkgIndex.tcl required tweaking, to tell it that tix8183.dll can be found in the DLLs subdirectory. Also, unless TIX_LIBRARY is set, the Tix tcl files *must* live in tcl\tix8.1, since tix8183.dll will look in TCL_LIBRARY\..\tix (among other locations). 
If a major Tcl release happens before Python 2.3 is released (and it is then still desirable to distribute Python with Tix), these binaries need to be regenerated. Would these instructions (unpack zip file into distribution tree) be precise enough to allow incorporation into the windows installer? ---------------------------------------------------------------------- Comment By: Mathias Palm (monos) Date: 2001-10-29 03:53 Message: Logged In: YES user_id=361926 As mentioned in the mail above (by me, Mathias), Tix is a package belonging to Tcl/Tk (to be found on sourceforge: tix.sourceforge.net, or via the Python home page - tkinter link). Everything needed can be found there, just read about it (and dont forget about the winking, eyes might be getting dry) Mathias ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-25 11:26 Message: Logged In: YES user_id=31435 I don't know anything about Tix, so if somebody wants this in the Windows installer, they're going to have to explain exactly (by which I mean exactly <0.5 wink>) what's needed. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:05:19 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:05:19 -0800 Subject: [Python-bugs-list] [ python-Bugs-522426 ] undocumented argument in filecmp.cmpfile Message-ID: Bugs item #522426, was opened at 2002-02-25 05:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522426&group_id=5470 Category: Documentation Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Philippe Fremy (pfremy) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: undocumented argument in filecmp.cmpfile Initial Comment: The filecmp.cmpfiles function is described like this: cmpfiles(dir1, dir2, common[, shallow[, use_statcache]]) The documentation doesn't point out what common is and I haven't been able to figure it out myself. This is on my version of python (2.1 on Windows) and in the latest version. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522426&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:13:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:13:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-474836 ] Tix not included in windows distribution Message-ID: Bugs item #474836, was opened at 2001-10-25 04:22 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 Category: Tkinter Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Nobody/Anonymous (nobody) >Assigned to: Tim Peters (tim_one) Summary: Tix not included in windows distribution Initial Comment: Although there is a Tix.py available, there is no Tix support in the precomiled Python-distribution for windows. So import Tix works fine, but root = Tix.Tk() results in TclError: package not found. It is possible to circumvent this problem by installing a regular Tcl/Tk distribution (e.g. in c:\programme\tcl) and installing Tix in the regular Tcl-path (i.e. tcl\lib). Mathias ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-25 04:57 Message: Logged In: YES user_id=21627 The zip file is slightly too large for SF, so it is now at http://www.informatik.hu- berlin.de/~loewis/python/tix813win.zip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-25 04:56 Message: Logged In: YES user_id=21627 Building Tix from sources is non-trivial, and I could not find any recent Windows binary distribution (based on Tix 8.1). So I'll attach a build of Tix 8.1.3 for Tcl/Tk 8.3, as a drop-in into the Python binary distribution. Compared to the original distribution, only tix8.1 \pkgIndex.tcl required tweaking, to tell it that tix8183.dll can be found in the DLLs subdirectory. Also, unless TIX_LIBRARY is set, the Tix tcl files *must* live in tcl\tix8.1, since tix8183.dll will look in TCL_LIBRARY\..\tix (among other locations). If a major Tcl release happens before Python 2.3 is released (and it is then still desirable to distribute Python with Tix), these binaries need to be regenerated. Would these instructions (unpack zip file into distribution tree) be precise enough to allow incorporation into the windows installer? ---------------------------------------------------------------------- Comment By: Mathias Palm (monos) Date: 2001-10-29 03:53 Message: Logged In: YES user_id=361926 As mentioned in the mail above (by me, Mathias), Tix is a package belonging to Tcl/Tk (to be found on sourceforge: tix.sourceforge.net, or via the Python home page - tkinter link). Everything needed can be found there, just read about it (and dont forget about the winking, eyes might be getting dry) Mathias ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-25 11:26 Message: Logged In: YES user_id=31435 I don't know anything about Tix, so if somebody wants this in the Windows installer, they're going to have to explain exactly (by which I mean exactly <0.5 wink>) what's needed. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=474836&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:24:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:24:01 -0800 Subject: [Python-bugs-list] [ python-Bugs-522393 ] Doesn't build on SGI Message-ID: Bugs item #522393, was opened at 2002-02-25 03:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 Category: None Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: Doesn't build on SGI Initial Comment: On the SGI I can't build the current 2.2.1 from CVS. I get an undefined error on pthread_detach in the link step for python: ld32: ERROR 33: Unresolved text symbol "pthread_detach" -- 1st referenced by libpython2.2.a(thread.o). ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 05:24 Message: Logged In: YES user_id=45365 Ouch! You are right: the trunk also doesn't build, and probably 2.2 doesn't build either. I've never checked this, because I always build --without-thread on SGI. I've found the problem: libc contains a partial implementation of pthreads, which does include pthread_create but not pthread_detach. 
For the full implementation you need to add -lpthread to your link step. But the autoconf test tests only for pthread_create(), so it thinks no extra link options are needed. I think we should reassign this to a pthread guru, but I'm not sure who qualifies. Simply adding a pthread_detach() call to the autotest may be worse, if I read thread_pthread.h correctly thread_detach() isn't defined in all flavors of pthreads. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:35 Message: Logged In: YES user_id=6656 OK, this is odd. Does the trunk build? Did 2.2 build? I can't easily find any branch changes that would account for this. I haven't looked very hard yet. Will do so later. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:25:44 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:25:44 -0800 Subject: [Python-bugs-list] [ python-Bugs-522395 ] test_descrtut fails on OSX Message-ID: Bugs item #522395, was opened at 2002-02-25 03:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_descrtut fails on OSX Initial Comment: I'm submitting to you because I have no way right now to see whether this is a general bug, something you didn't get around to or an OSX-specific problem. test_descrtut fails on OSX. I did the staring at the two outputs already: __doc__ is in the actual output but not in the expected output. test_descrtut ***************************************************************** Failure in example: pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted from line #30 of test.test_descrtut.__test__.tut3 Expected: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] Got: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] ***************************************************************** 1 items had failures: 1 of 13 in test.test_descrtut.__test__.tut3 ***Test Failed*** 1 failures. test test_descrtut failed -- 1 of 96 doctests failed ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 05:25 Message: Logged In: YES user_id=45365 This test passes on the trunk. Maybe you forgot the move a revision of the test (or test output file) to the branch? 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:38 Message: Logged In: YES user_id=6656 This'll be related to the "allow unicode docstrings" issue, I'm sure. Problem probably exists on the trunk, too. Will investigate, after I've built the trunk, had lunch and given a tutorial :) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:53:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:53:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-522395 ] test_descrtut fails on OSX Message-ID: Bugs item #522395, was opened at 2002-02-25 03:15 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_descrtut fails on OSX Initial Comment: I'm submitting to you because I have no way right now to see whether this is a general bug, something you didn't get around to or an OSX-specific problem. test_descrtut fails on OSX. I did the staring at the two outputs already: __doc__ is in the actual output but not in the expected output. test_descrtut ***************************************************************** Failure in example: pprint.pprint(dir(list)) # like list.__dict__.keys(), but sorted from line #30 of test.test_descrtut.__test__.tut3 Expected: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] Got: ['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__repr__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__str__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'] ***************************************************************** 1 items had failures: 1 of 13 in test.test_descrtut.__test__.tut3 ***Test Failed*** 1 failures. test test_descrtut failed -- 1 of 96 doctests failed ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-25 05:53 Message: Logged In: YES user_id=6656 Yup. Tim fixed this on the trunk. Just the unicode test failure to go, then. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 05:25 Message: Logged In: YES user_id=45365 This test passes on the trunk. Maybe you forgot the move a revision of the test (or test output file) to the branch? 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:38 Message: Logged In: YES user_id=6656 This'll be related to the "allow unicode docstrings" issue, I'm sure. Problem probably exists on the trunk, too. Will investigate, after I've built the trunk, had lunch and given a tutorial :) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522395&group_id=5470 From noreply@sourceforge.net Mon Feb 25 13:55:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 05:55:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-522393 ] Doesn't build on SGI Message-ID: Bugs item #522393, was opened at 2002-02-25 03:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 >Category: Build Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: Doesn't build on SGI Initial Comment: On the SGI I can't build the current 2.2.1 from CVS. I get an undefined error on pthread_detach in the link step for python: ld32: ERROR 33: Unresolved text symbol "pthread_detach" -- 1st referenced by libpython2.2.a(thread.o). ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-25 05:55 Message: Logged In: YES user_id=6656 Oh, the joy of unix. Special case the snot out of SGI in configure.in? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 05:24 Message: Logged In: YES user_id=45365 Ouch! You are right: the trunk also doesn't build, and probably 2.2 doesn't build either. I've never checked this, because I always build --without-thread on SGI. I've found the problem: libc contains a partial implementation of pthreads, which does include pthread_create but not pthread_detach. For the full implementation you need to add -lpthread to your link step. But the autoconf test tests only for pthread_create(), so it thinks no extra link options are needed. I think we should reassign this to a pthread guru, but I'm not sure who qualifies. Simply adding a pthread_detach() call to the autotest may be worse, if I read thread_pthread.h correctly thread_detach() isn't defined in all flavors of pthreads. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:35 Message: Logged In: YES user_id=6656 OK, this is odd. Does the trunk build? Did 2.2 build? I can't easily find any branch changes that would account for this. I haven't looked very hard yet. Will do so later. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 From noreply@sourceforge.net Mon Feb 25 16:16:01 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 08:16:01 -0800 Subject: [Python-bugs-list] [ python-Bugs-522396 ] test_unicodedata fails on OSX Message-ID: Bugs item #522396, was opened at 2002-02-25 03:17 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 Category: Python Interpreter Core Group: Python 2.2.1 candidate >Status: Closed Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) Assigned to: Michael Hudson (mwh) Summary: test_unicodedata fails on OSX Initial Comment: Again, I have no other working 2.2.1 platforms so I can't test whether this is an OSX specific bug or a general bug, hence I'm assigning it to the Universal 221 Scapegoat:-) test_unicodedata fails on OSX with the following report: test_unicodedata test test_unicodedata produced unexpected output: ********************************************************************** *** mismatch between line 3 of expected output and line 3 of actual output: - Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 + Methods: 11d75c5e423430d480fb8000ef1c611ecfcd44f1 ********************************************************************** ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 08:16 Message: Logged In: YES user_id=45365 Mark-Andre fixed this after some private mail. Thanks! ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-25 03:41 Message: Logged In: YES user_id=38388 FWIW, these are the correct values for 2.2.0 and 2.3 (CVS): Python 2.2.0: Methods: 84b72943b1d4320bc1e64a4888f7cdf62eea219a Functions: 41e1d4792185d6474a43c83ce4f593b1bdb01f8a Python 2.3: Methods: 6c7a7c02657b69d0fdd7a7d174f573194bba2e18 Functions: 41e1d4792185d6474a43c83ce4f593b1bdb01f8a The change was caused by the fix to the UTF-8 codec in 2.3. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522396&group_id=5470 From noreply@sourceforge.net Mon Feb 25 16:19:14 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 08:19:14 -0800 Subject: [Python-bugs-list] [ python-Bugs-508779 ] Disable flat namespace on MacOS X Message-ID: Bugs item #508779, was opened at 2002-01-25 19:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 Category: Extension Modules Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 7 Submitted By: Manoj Plakal (terabaap) Assigned to: Jack Jansen (jackjansen) Summary: Disable flat namespace on MacOS X Initial Comment: Python: v2.2 OS: MacOS X 10.1 MacOS X 10.1 introduced two forms of linking for loadable modules: flat namespace and two-level namespace. Python 2.2 is set up to use flat namespace by default on OS X for building extension modules. I believe that this is a problem since it introduces spurious run-time linking errors when loading 2 or more modules that happen to have common symbols. The Linux and Windows implementations do not allow symbols within modules to clash with each other. 
This behavior also goes against expectations of C extension module writers. As a reproducible example, consider two dummy modules foo (foomodule.c) and bar (barmodule.c) both of which are built with a common file baz.c that contains some data variables. With the current Python 2.2 on OS X 10.1, only one of foo or bar can be imported, but NOT BOTH, into the same interpreter session. The files can be picked up from these URLs: http://yumpee.org/python/foomodule.c http://yumpee.org/python/barmodule.c http://yumpee.org/python/baz.c http://yumpee.org/python/setup.py If I run "python setup.py build" with Python 2.2 (built from the 2.2 source tarball) and then import foo followed by bar, I get an ImportError: "Failure linking new module" (from Python/dynload_next.c). If I add a call to NSLinkEditError() to print a more detailed error message, I see that the problem is multiple definitions of the data variables in baz.c. The above example works fine with Python 2.1 on Red Hat Linux 7.2 and Python 2.2a4 on Win98. If I then edit /usr/local/lib/python2.2/Makefile and change LDSHARED and BLDSHARED to not use flat_namespace: $(CC) $(LDFLAGS) -bundle -bundle_loader /usr/local/bin/python2.2 -undefined error then the problem is solved and I can load both foo and bar without problems. More info and discussion is available at these URLs (also search groups.google.com for "comp.lang.python OS X import bug"): http://groups.google.com/groups?hl=en&threadm=j4sn8uu517.fsf%40informatik.hu-berlin.de&prev=/groups%3Fnum%3D25%26hl%3Den%26group%3Dcomp.lang.python%26start%3D75%26group%3Dcomp.lang.python http://mail.python.org/pipermail/pythonmac-sig/2002-January/004977.html It would be great to have this simple change be applied to Python 2.2.1. Manoj terabaap@yumpee.org ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 08:19 Message: Logged In: YES user_id=45365 This solution still suffers from the problem we discussed on the Pythonmac-SIG, that BLDSHARED (or whatever replaces it) would need to have one value for -bundle_loader when building the standard extension modules and another during "normal operation"... ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 03:21 Message: Logged In: YES user_id=45365 I usurping this bug, but I'm not sure yet whether it's a good idea to fix this for 2.2.1, as it will break other extension modules that rely on the single flat namespace. ---------------------------------------------------------------------- Comment By: Manoj Plakal (terabaap) Date: 2002-01-25 20:25 Message: Logged In: YES user_id=150105 Another idea is to provide the option for flat or 2-level namespace when building extension modules on OS X, maybe as an extra flag passed to distutils.core.Extension or somewhere else ... 
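For reference, the reproduction described in the initial comment boils down to a setup.py of roughly the following shape. This is an assumed reconstruction for illustration, not the actual file behind the URLs above: both extensions compile in baz.c, so under the flat-namespace link flags each bundle exports the same data symbols.

# Sketch of the foo/bar/baz reproduction (assumed contents; see the
# URLs in the initial comment for the real files).
from distutils.core import setup, Extension

setup(name="flatns-repro",
      version="0.1",
      ext_modules=[
          # Both modules link in baz.c and therefore define the same
          # global data symbols.
          Extension("foo", ["foomodule.c", "baz.c"]),
          Extension("bar", ["barmodule.c", "baz.c"]),
      ])

Building this with "python setup.py build" and then importing foo followed by bar triggers the "Failure linking new module" ImportError under -flat_namespace, as reported; with two-level namespace (or the modified LDSHARED shown above) both imports succeed.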
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 From noreply@sourceforge.net Mon Feb 25 17:36:45 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 09:36:45 -0800 Subject: [Python-bugs-list] [ python-Bugs-521723 ] fcntl.ioctl on openbsd Message-ID: Bugs item #521723, was opened at 2002-02-22 21:34 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 Category: Extension Modules Group: Python 2.1.1 >Status: Closed Resolution: Invalid Priority: 3 Submitted By: Jeremy Rossi (skin_pup) Assigned to: Nobody/Anonymous (nobody) Summary: fcntl.ioctl on openbsd Initial Comment: >From the OpenBSD man page ------- #include int ioctl(int d, unsigned long request, ...); -- On OpenBSD ioctl takes an unsigned long for the action to be preformed on d. The function fcntl_ioctl() in Modules/fcntlmodule.c will only accept an int for the second argument without rasing an error. On OpenBSD (maybe free/net also I have not checked) an unsigned long should be the largest allowed. >From Modules/fcntlmodule.c ------- PyArg_ParseTuple(args, "iis#:ioctl", &fd, &code, &str, &len)) -- ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-25 09:36 Message: Logged In: YES user_id=21627 On your system, 'unsigned long' is the same type as 'unsigned int'. Furthermore, passing a negative number -n is the same as passing 2**32-n, atleast on a 32-bit architecture. So even if your driver processes unsigned ints, you will be able to pass the correct value using the corresponding negative number. Furthermore, if you use the hex notation, it will work even on 64-bit ports unmodified: unsigned long will be 64 bit, but so will be the Python int type, and the hex literal then indicates a positive integer, which is well in range. As for ioctl not being Posix: It actually is, in the latest revision of Posix (IEEE Std 1003.1-2001), which is identical to Single Unix (see the top of the ioctl page). In any case, I think this is not worth fixing, as I cannot see a problem arising from it that cannot be easily worked-around, hence closing the report. ---------------------------------------------------------------------- Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-24 14:23 Message: Logged In: YES user_id=323435 Thank you Martin the 0xC0844413 does indeed work for me, but I am working on writing a thin wrapper that will accept un_signed long ints for ioctl. (Never done C before, but I guess this is as good as any to learn) But to looking forward I have done some checking and it seams to me that all the *BSD's including BSDi use unsigned longs for ioctl. I was not able to find documentation for darwin on the web, bit I think it is safe to assume that it also takes a unsigned long for ioctl. NetBSD also have been ported to 64bit systems. NetBSD: http://www.tac.eu.org/cgi-bin/man-cgi?ioctl++NetBSD-current -- BEGIN cut and paste from a BSDi systems. 
$ uname -a BSD/OS xxxx.xxxxxx.com 2.1 BSDI BSD/OS 2.1 Kernel #2: Mon Jan 27 16:12:45 MST 1997 web@xxxx.xxxxxx.com:/usr/src/sys/compile/USR i386 $ man ioctl | head IOCTL(2) BSD Programmer's Manual IOCTL(2) NAME ioctl - control device SYNOPSIS #include int ioctl(int d, unsigned long request, char *argp); --END ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2002-02-24 13:36 Message: Logged In: YES user_id=31435 Gotta love it . I don't believe ioctl is a POSIX Classic function. There's a good discussion of why the POSIX Realtime Extensions added a workalike posix_devctl() instead, in http://www.usenix.org/publications/login/standards/22.posix. html Martin, the URL you gave is actually for fcntl, not ioctl. You can s/fcntl/ioctl/ in your URL to get the Single UNIX Specification's ioctl page, though, which also says "int". I agree OpenBSD is out of line with best current practice because of that. It appears that Jeremy must be using Python 2.2, or running on a 64-bit machine, since his line x = 3229893651 raises OverflowError on 32-bit boxes before the 2.2 release. As Martin suggests, using a hex literal instead has always been the intended way to deal with cases "like this". The situation will get a lot worse if OpenBSD is ported to a 64- bit box with sizeof(long)==8, and some yahoo actually defines a ioctl arg that requires more than 32 bits. Before then, I suggest we leave this alone (by the time it may matter for real, OpenBSD should be feeling a lot more pressure to conform to the larger open standards). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-24 06:58 Message: Logged In: YES user_id=21627 This won't be easy to change: If we declare the type of ioctl to be unsigned, then we break systems where it is signed (as it should be). As a work-around, try using 0xC0844413 (i.e. the hexadecimal version) as the value for the ioctl. Python will understand this as a negative value, but your system will likely still understand it as the right ioctl command. ---------------------------------------------------------------------- Comment By: Jeremy Rossi (skin_pup) Date: 2002-02-23 19:15 Message: Logged In: YES user_id=323435 >From the current man pages of OpenBSD and FreeBSD. It stats that the second argument of ioctl is an unsigned int. http://www.openbsd.org/cgi-bin/man.cgi?query=ioctl http://www.freebsd.org/cgi-bin/man.cgi?query=ioctl Pythons fcntl.ioctl() does not allow the second argumnet to be anything other then a C int, this does not allow required operations to be preformed with ioctl on the two BSD systems. For a practical example. On the openbsd system the /dev/pf is the direct inteface to the firewall, the only things I am able to preform on this file in python are to turn the firewall on and off. This is allowed because the ioctl un_signed ints (536888321 in base 10) that prefrom this action happen to be small enough to fit in to an int. While the ioctl unsigned int (3229893651 in base 10) for reporting the status of connections is larger then a C int and python raises an exception before calling the system ioctl call. The following is the code in question. 
import fcntl import struct import os fd = os.open("/dev/pf",os.O_RDWR) null = '\0'*(struct.calcsize("LLLLIIII")) x = 3229893651 null = fcntl.ioctl(fd,x,null) print struct.unpack("LLLLIIII",null) ---output--- $ sudo python ./py-pfctl.py Traceback (most recent call last): File "./py-pfctl.py", line 8, in ? null = fcntl.ioctl(fd,x,null) OverflowError: long int too large to convert to int ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-23 15:05 Message: Logged In: YES user_id=21627 Can you give a practical example of an fcntl operation where this is a problem? For all practical purposes, a byte would be sufficient. Also, in POSIX, the argument to fcntl is of type int, see http://www.opengroup.org/onlinepubs/007904975/functions/fcntl.html So I can't see the bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521723&group_id=5470 From noreply@sourceforge.net Mon Feb 25 22:09:07 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 14:09:07 -0800 Subject: [Python-bugs-list] [ python-Bugs-522682 ] pydoc: HTML not escaped Message-ID: Bugs item #522682, was opened at 2002-02-25 14:09 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522682&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Johannes Gijsbers (jlgijsbers) Assigned to: Nobody/Anonymous (nobody) Summary: pydoc: HTML not escaped Initial Comment: In the part 'Data and non-method functions', the variables are not escaped for HTML. I discovered this while looking at pyweblib.forms[1], which uses some HTML like
form markup in its docstrings. Also, \n isn't replaced by <br>. I'm not sure what the new behavior should be, but I do know that this is quite ugly to see (and it can break all of the layout, with a nicely placed stray tag)
). [1] http://www.stroeder.com/pylib/PyWebLib/pydoc/pyweblib.f orms.html ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522682&group_id=5470 From noreply@sourceforge.net Mon Feb 25 22:41:46 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 14:41:46 -0800 Subject: [Python-bugs-list] [ python-Bugs-216289 ] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: Bugs item #216289, was opened at 2000-10-06 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 Category: Windows Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 3 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: Programs using Tkinter sometimes can't shut down (Windows) Initial Comment: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe ---------------------------------------------------------------------- Comment By: John Popplewell (johnnypops) Date: 2002-02-25 14:41 Message: Logged In: YES user_id=143340 This one has been torturing me for a while. Managed to track it down to a problem inside Tcl. For the Tcl8.3.4 source distribution the problem is in the file win/tclWinNotify.c void Tcl_FinalizeNotifier(ClientData clientData) { ThreadSpecificData *tsdPtr = (ThreadSpecificData *) clientData; /* sometimes called with tsdPtr == NULL */ if ( tsdPtr != NULL ) { DeleteCriticalSection(&tsdPtr->crit); CloseHandle(tsdPtr->event); /* * Clean up the timer and messaging * window for this thread. */ if (tsdPtr->hwnd) { KillTimer(tsdPtr->hwnd, INTERVAL_TIMER); DestroyWindow(tsdPtr->hwnd); } } /* * If this is the last thread to use the notifier, * unregister the notifier window class. */ Tcl_MutexLock(¬ifierMutex); if ( notifierCount && !--notifierCount ) { UnregisterClassA( "TclNotifier", TclWinGetTclInstance()); } Tcl_MutexUnlock(¬ifierMutex); } This bodge doesn't address the underlying problem but has stopped me from tearing all my hair out, cheers, John Popplewell. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-24 15:27 Message: Logged In: YES user_id=31435 FYI, you don't need an IDE to do this -- in Win9x, hit Ctrl+Alt+Del and kill the process directly. A saner solution is to develop under Win2K, which doesn't appear to suffer this problem (the only reports I've seen, and experienced myself, came from Win9x boxes). ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-10-24 01:52 Message: Logged In: NO For those who are still trapped in this bug's hell, I will gladly share the one thing that saved my sanity: WingIDE's 'Kill' command will shut down the program with apparent 100% certainty and no fear of lockups. WingIDE has its own issues, so its not a perfect solution, but if you are like me and Joe (above) who test in small iterations, then using 'Kill' to quit out of your app while testing is a workaround that may preserve your sanity. Perhaps the python gods and the Wing guys can get together and tell us how to replicate 'kill' into our code. For now, I'll use WingIDE to edit, and pythonw.exe for my final client's delivery. ---------------------------------------------------------------------- Comment By: Howard Lightstone (hlightstone) Date: 2001-09-05 10:43 Message: Logged In: YES user_id=66570 I sometimes get bunches of these.... A tool I use (Taskinfo2000) reports that (after killing winoldap): python.exe is blocked on a mutex named OLESCELOCKMUTEX. The reported state is "Console Terminating". There appears to be only one (os) thread running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-04-02 13:06 Message: Logged In: YES user_id=31435 No sign of progress on this presumed Tk/Tcl Windows bug in over 3 months, so closing it for lack of hope. ---------------------------------------------------------------------- Comment By: Doug Henderson (djhender) Date: 2001-02-05 21:13 Message: This was a symptom I saw while tracking down the essence of the problem reported in #131207. Using Win98SE, I would get an error dialog (GPF?) in the Kernel, and sometimes the dos prompt would not come back. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-12-12 18:00 Message: Just reproduced w/ current CVS, but didn't hang until the 8th try. http://dev.scriptics.com/software/tcltk/ says 8.3 is still the latest released version; don't know whether that URL still makes sense, though. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-12-12 12:58 Message: Tim, can you still reproduce this with the current CVS version? There's been one critical patch to _tkinter since the 2.0 release. An alternative would be to try with a newer version of Tcl (isn't 8.4 out already?). ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2000-10-15 09:47 Message: Same as I've reported earlier; it hangs in the call to Tcl_Finalize (which is called by the DLL finalization code). It's less likely to hang if I call Tcl_Finalize from the _tkinter DLL (from user code). Note that the problem isn't really Python-related -- I have stand-alone samples (based on wish) that hangs in the same way. More later. 
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-13 07:40 Message: Back to Tim since I have no clue what to do here. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-12 10:25 Message: The recent fix to _tkinter (Tcl_GetStringResult(interp) instead of interp->result) didn't fix this either. As Tim has remarked in private but not yet recorded here, a workaround is to use pythonw instead of python, so I'm lowering thepriority again. Also note that the hanging process that Tim writes about apparently prevents Win98 from shutting down properly. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-10-07 00:37 Message: More info (none good, but some worse so boosted priority): + Happens under release and debug builds. + Have not been able to provoke when starting in the debugger. + Ctrl+Alt+Del and killing Winoldap is not enough to clean everything up. There's still a Python (or Python_d) process hanging around that Ctrl+Alt+Del doesn't show. + This process makes it impossible to delete the associated Python .dll, and in particular makes it impossible to rebuild Python successfully without a reboot. + These processes cannot be killed! Wintop and Process Viewer both fail to get the job done. PrcView (a freeware process viewer) itself locks up if I try to kill them using it. Process Viewer freezes for several seconds before giving up. + Attempting to attach to the process with the MSVC debugger (in order to find out what the heck it's doing) finds the process OK, but then yields the cryptic and undocumented error msg "Cannot execute program". + The processes are not accumulating cycles. + Smells like deadlock. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 From noreply@sourceforge.net Mon Feb 25 22:49:44 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 14:49:44 -0800 Subject: [Python-bugs-list] [ python-Bugs-522699 ] Segfault evaluating '%.100f' % 2.0**100 Message-ID: Bugs item #522699, was opened at 2002-02-25 14:49 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522699&group_id=5470 Category: Python Interpreter Core Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Erwin S. Andreasen (drylock) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault evaluating '%.100f' % 2.0**100 Initial Comment: Evaluating this code: '%.100f' % 2.0**100 will crash python2.1.2. gdb on the core file shows #0 0x30303030 in ?? () Error accessing memory address 0x30303030: No such process. 
which suggests overflow of some stack variable (0x30 is ASCII character '0') The same problem also happens on Python 2.0 The problem does NOT occur on Python 2.2 nor 1.5 Program versions used: Python 2.0b1 (#18, Sep 23 2001, 21:06:34) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 Python 2.2 (#1, Jan 8 2002, 01:13:32) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 Python 1.5.2 (#0, Dec 27 2000, 13:59:38) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522699&group_id=5470 From noreply@sourceforge.net Mon Feb 25 23:32:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 15:32:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-216289 ] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: Bugs item #216289, was opened at 2000-10-06 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 Category: Windows Group: 3rd Party Status: Closed Resolution: Wont Fix Priority: 3 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: Programs using Tkinter sometimes can't shut down (Windows) Initial Comment: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe ---------------------------------------------------------------------- Comment By: Jeffrey Hobbs (hobbs) Date: 2002-02-25 15:32 Message: Logged In: YES user_id=72656 This is mostly correct, and is fixed in the 8.4 Tcl sources already (I guess we can backport this). This was mentioned in SF Tcl bug (account for chopped URL): https://sourceforge.net/tracker/? func=detail&aid=217982&group_id=10894&atid=110894 and the code comment is: /* * Only finalize the notifier if a notifier was installed in the * current thread; there is a route in which this is not * guaranteed to be true (when tclWin32Dll.c:DllMain() is called * with the flag DLL_PROCESS_DETACH by the OS, which could be * doing so from a thread that's never previously been involved * with Tcl, e.g. 
the task manager) so this check is important. * * Fixes Bug #217982 reported by Hugh Vu and Gene Leache. */ if (tsdPtr == NULL) { return; } ---------------------------------------------------------------------- Comment By: John Popplewell (johnnypops) Date: 2002-02-25 14:41 Message: Logged In: YES user_id=143340 This one has been torturing me for a while. Managed to track it down to a problem inside Tcl. For the Tcl8.3.4 source distribution the problem is in the file win/tclWinNotify.c void Tcl_FinalizeNotifier(ClientData clientData) { ThreadSpecificData *tsdPtr = (ThreadSpecificData *) clientData; /* sometimes called with tsdPtr == NULL */ if ( tsdPtr != NULL ) { DeleteCriticalSection(&tsdPtr->crit); CloseHandle(tsdPtr->event); /* * Clean up the timer and messaging * window for this thread. */ if (tsdPtr->hwnd) { KillTimer(tsdPtr->hwnd, INTERVAL_TIMER); DestroyWindow(tsdPtr->hwnd); } } /* * If this is the last thread to use the notifier, * unregister the notifier window class. */ Tcl_MutexLock(¬ifierMutex); if ( notifierCount && !--notifierCount ) { UnregisterClassA( "TclNotifier", TclWinGetTclInstance()); } Tcl_MutexUnlock(¬ifierMutex); } This bodge doesn't address the underlying problem but has stopped me from tearing all my hair out, cheers, John Popplewell. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-10-24 15:27 Message: Logged In: YES user_id=31435 FYI, you don't need an IDE to do this -- in Win9x, hit Ctrl+Alt+Del and kill the process directly. A saner solution is to develop under Win2K, which doesn't appear to suffer this problem (the only reports I've seen, and experienced myself, came from Win9x boxes). ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2001-10-24 01:52 Message: Logged In: NO For those who are still trapped in this bug's hell, I will gladly share the one thing that saved my sanity: WingIDE's 'Kill' command will shut down the program with apparent 100% certainty and no fear of lockups. WingIDE has its own issues, so its not a perfect solution, but if you are like me and Joe (above) who test in small iterations, then using 'Kill' to quit out of your app while testing is a workaround that may preserve your sanity. Perhaps the python gods and the Wing guys can get together and tell us how to replicate 'kill' into our code. For now, I'll use WingIDE to edit, and pythonw.exe for my final client's delivery. ---------------------------------------------------------------------- Comment By: Howard Lightstone (hlightstone) Date: 2001-09-05 10:43 Message: Logged In: YES user_id=66570 I sometimes get bunches of these.... A tool I use (Taskinfo2000) reports that (after killing winoldap): python.exe is blocked on a mutex named OLESCELOCKMUTEX. The reported state is "Console Terminating". There appears to be only one (os) thread running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-04-02 13:06 Message: Logged In: YES user_id=31435 No sign of progress on this presumed Tk/Tcl Windows bug in over 3 months, so closing it for lack of hope. ---------------------------------------------------------------------- Comment By: Doug Henderson (djhender) Date: 2001-02-05 21:13 Message: This was a symptom I saw while tracking down the essence of the problem reported in #131207. Using Win98SE, I would get an error dialog (GPF?) in the Kernel, and sometimes the dos prompt would not come back. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 From noreply@sourceforge.net Mon Feb 25 23:39:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 15:39:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-216289 ] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: Bugs item #216289, was opened at 2000-10-06 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 Category: Windows Group: 3rd Party >Status: Open Resolution: Wont Fix Priority: 3 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: Programs using Tkinter sometimes can't shut down (Windows) Initial Comment: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-25 15:39 Message: Logged In: YES user_id=6380 Reopened until we know what the proper action is. ---------------------------------------------------------------------- Comment By: Jeffrey Hobbs (hobbs) Date: 2002-02-25 15:32 Message: Logged In: YES user_id=72656 This is mostly correct, and is fixed in the 8.4 Tcl sources already (I guess we can backport this). This was mentioned in SF Tcl bug (account for chopped URL): https://sourceforge.net/tracker/? func=detail&aid=217982&group_id=10894&atid=110894 and the code comment is: /* * Only finalize the notifier if a notifier was installed in the * current thread; there is a route in which this is not * guaranteed to be true (when tclWin32Dll.c:DllMain() is called * with the flag DLL_PROCESS_DETACH by the OS, which could be * doing so from a thread that's never previously been involved * with Tcl, e.g. the task manager) so this check is important. * * Fixes Bug #217982 reported by Hugh Vu and Gene Leache. 
*/ if (tsdPtr == NULL) { return; }
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 From noreply@sourceforge.net Mon Feb 25 23:52:00 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 15:52:00 -0800 Subject: [Python-bugs-list] [ python-Bugs-522699 ] Segfault evaluating '%.100f' % 2.0**100 Message-ID: Bugs item #522699, was opened at 2002-02-25 14:49 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522699&group_id=5470 Category: Python Interpreter Core Group: Python 2.1.2 Status: Open Resolution: None Priority: 5 Submitted By: Erwin S. Andreasen (drylock) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault evaluating '%.100f' % 2.0**100 Initial Comment: Evaluating this code: '%.100f' % 2.0**100 will crash python2.1.2. gdb on the core file shows #0 0x30303030 in ?? () Error accessing memory address 0x30303030: No such process. which suggests overflow of some stack variable (0x30 is ASCII character '0') The same problem also happens on Python 2.0 The problem does NOT occur on Python 2.2 nor 1.5 Program versions used: Python 2.0b1 (#18, Sep 23 2001, 21:06:34) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 Python 2.1.2 (#1, Jan 18 2002, 18:05:45) [GCC 2.95.4 (Debian prerelease)] on linux2 Python 2.2 (#1, Jan 8 2002, 01:13:32) [GCC 2.95.4 20011006 (Debian prerelease)] on linux2 Python 1.5.2 (#0, Dec 27 2000, 13:59:38) [GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2 ---------------------------------------------------------------------- >Comment By: Neil Schemenauer (nascheme) Date: 2002-02-25 15:52 Message: Logged In: YES user_id=35752 I think this was fixed in floatobject.c 2.108. The patch is attached if anyone wants to backport it. 2.1 doesn't seem to have snprintf though so the port could be tricky. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522699&group_id=5470 From noreply@sourceforge.net Tue Feb 26 00:55:10 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 16:55:10 -0800 Subject: [Python-bugs-list] [ python-Bugs-216289 ] Programs using Tkinter sometimes can't shut down (Windows) Message-ID: Bugs item #216289, was opened at 2000-10-06 19:25 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 Category: Windows Group: 3rd Party Status: Open Resolution: Wont Fix Priority: 3 Submitted By: Tim Peters (tim_one) Assigned to: Tim Peters (tim_one) Summary: Programs using Tkinter sometimes can't shut down (Windows) Initial Comment: The following msg from the Tutor list is about 1.6, but I noticed the same thing several times today using 2.0b2+CVS. In my case, I was running IDLE via python ../tool/idle/idle.pyw from a DOS box in my PCbuild directory. Win98SE. *Most* of the time, shutting down IDLE via Ctrl+Q left the DOS box hanging. As with the poster, the only way to regain control was to use the Task Manager to kill off Winoldap. -----Original Message----- From: Joseph Stubenrauch Sent: Friday, October 06, 2000 9:23 PM To: tutor@python.org Subject: Re: [Tutor] Python 1.6 BUG Strange, I have been experiencing the same bug myself. 
Here's the low down for me: Python 1.6 with win95 I am running a little Tkinter program The command line I use is simply: "python foo.py" About 25-35% of the time, when I close the Tkinter window, DOS seems to "freeze" and never returns to the c:\ command prompt. I have to ctrl-alt-delete repeatedly and shut down "winoldapp" to get rid of the window and then shell back into DOS and keep working. It's a bit of a pain, since I have the habit of testing EVERYTHING in tiny little stages, so I change one little thing, test it ... freeze ... ARGH! Change one more tiny thing, test it ... freeze ... ARGH! However, sometimes it seems to behave and doesn't bother me for an entire several hour session of python work. That's my report on the problem. Cheers, Joe ---------------------------------------------------------------------- Comment By: John Popplewell (johnnypops) Date: 2002-02-25 16:55 Message: Logged In: YES user_id=143340 I knew I wasn't getting to the heart of it .... Almost a one-liner! It has been seriously spoiling my (otherwise crash free) Python experience on Win9x - hope it gets fixed soon. cheers, John. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-25 15:39 Message: Logged In: YES user_id=6380 Reopened until we know what the proper action is. ---------------------------------------------------------------------- Comment By: Jeffrey Hobbs (hobbs) Date: 2002-02-25 15:32 Message: Logged In: YES user_id=72656 This is mostly correct, and is fixed in the 8.4 Tcl sources already (I guess we can backport this). This was mentioned in SF Tcl bug (account for chopped URL): https://sourceforge.net/tracker/? func=detail&aid=217982&group_id=10894&atid=110894 and the code comment is: /* * Only finalize the notifier if a notifier was installed in the * current thread; there is a route in which this is not * guaranteed to be true (when tclWin32Dll.c:DllMain() is called * with the flag DLL_PROCESS_DETACH by the OS, which could be * doing so from a thread that's never previously been involved * with Tcl, e.g. the task manager) so this check is important. * * Fixes Bug #217982 reported by Hugh Vu and Gene Leache. */ if (tsdPtr == NULL) { return; } ---------------------------------------------------------------------- Comment By: John Popplewell (johnnypops) Date: 2002-02-25 14:41 Message: Logged In: YES user_id=143340 This one has been torturing me for a while. Managed to track it down to a problem inside Tcl. For the Tcl8.3.4 source distribution the problem is in the file win/tclWinNotify.c void Tcl_FinalizeNotifier(ClientData clientData) { ThreadSpecificData *tsdPtr = (ThreadSpecificData *) clientData; /* sometimes called with tsdPtr == NULL */ if ( tsdPtr != NULL ) { DeleteCriticalSection(&tsdPtr->crit); CloseHandle(tsdPtr->event); /* * Clean up the timer and messaging * window for this thread. */ if (tsdPtr->hwnd) { KillTimer(tsdPtr->hwnd, INTERVAL_TIMER); DestroyWindow(tsdPtr->hwnd); } } /* * If this is the last thread to use the notifier, * unregister the notifier window class. */ Tcl_MutexLock(¬ifierMutex); if ( notifierCount && !--notifierCount ) { UnregisterClassA( "TclNotifier", TclWinGetTclInstance()); } Tcl_MutexUnlock(¬ifierMutex); } This bodge doesn't address the underlying problem but has stopped me from tearing all my hair out, cheers, John Popplewell. 
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-13 07:40 Message: Back to Tim since I have no clue what to do here. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2000-10-12 10:25 Message: The recent fix to _tkinter (Tcl_GetStringResult(interp) instead of interp->result) didn't fix this either. As Tim has remarked in private but not yet recorded here, a workaround is to use pythonw instead of python, so I'm lowering thepriority again. Also note that the hanging process that Tim writes about apparently prevents Win98 from shutting down properly. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2000-10-07 00:37 Message: More info (none good, but some worse so boosted priority): + Happens under release and debug builds. + Have not been able to provoke when starting in the debugger. + Ctrl+Alt+Del and killing Winoldap is not enough to clean everything up. There's still a Python (or Python_d) process hanging around that Ctrl+Alt+Del doesn't show. + This process makes it impossible to delete the associated Python .dll, and in particular makes it impossible to rebuild Python successfully without a reboot. + These processes cannot be killed! Wintop and Process Viewer both fail to get the job done. PrcView (a freeware process viewer) itself locks up if I try to kill them using it. Process Viewer freezes for several seconds before giving up. + Attempting to attach to the process with the MSVC debugger (in order to find out what the heck it's doing) finds the process OK, but then yields the cryptic and undocumented error msg "Cannot execute program". + The processes are not accumulating cycles. + Smells like deadlock. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=216289&group_id=5470 From noreply@sourceforge.net Tue Feb 26 02:49:35 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Mon, 25 Feb 2002 18:49:35 -0800 Subject: [Python-bugs-list] [ python-Bugs-522780 ] bsddb keys corruption Message-ID: Bugs item #522780, was opened at 2002-02-25 18:49 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522780&group_id=5470 Category: Extension Modules Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Marc Conley (mconley) Assigned to: Nobody/Anonymous (nobody) Summary: bsddb keys corruption Initial Comment: I'm having a problem with either the keys() function returning invalid information or the database getting corrupted or something along those lines. This is what I keep seeing occasionally during my development: Python 2.2 (#28, Dec 21 2001, 12:21:22) [MSC 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import bsddb >>> db = bsddb.hashopen("test.db") >>> db.keys() ['192.168.0.1799'] >>> db["192.168.0.1799"] Traceback (most recent call last): File "", line 1, in ? KeyError: 192.168.0.1799 >>> The lines of importance are the return value for db.keys() and then the traceback. Note that the db.keys () returns a value that I immediately try to access and get a KeyError in so doing. This happens in a program with multiple threads but for which I am using a threading.Lock acquire() and release() around all database/bsddb accesses. 
I am also using sync()s after all write operations. The key value, by the way, should be 192.168.0.179. It is consistently, on several different occasions, getting the extra "9" appended to the end of it. This same problem has occurred 3 times during testing. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522780&group_id=5470 From noreply@sourceforge.net Tue Feb 26 10:40:53 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 02:40:53 -0800 Subject: [Python-bugs-list] [ python-Bugs-522898 ] Robotparser does not handle empty paths Message-ID: Bugs item #522898, was opened at 2002-02-26 02:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522898&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Costas Malamas (cmalamas) Assigned to: Nobody/Anonymous (nobody) Summary: Robotparser does not handle empty paths Initial Comment: The robotparser module handles incorrectly empty paths in the allow/disallow directives. According to: http://www.robotstxt.org/wc/norobots- rfc.html, the following rule should be a global *allow*: User-agent: * Disallow: My reading of the RFC is that an empty path is always a global allow (for both Allow and Disallow directives) so that the syntax is backwards compatible --there was no Allow directive in the original syntax. Suggested fix: robotparser.RuleLine.applies_to() becomes: def applies_to(self, filename): if not self.path: self.allowance = 1 return self.path=="*" or re.match(self.path, filename) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522898&group_id=5470 From noreply@sourceforge.net Tue Feb 26 15:59:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 07:59:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-504723 ] Bad exceptions from pickle Message-ID: Bugs item #504723, was opened at 2002-01-16 19:55 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504723&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Bob Alexander (bobalex) Assigned to: Nobody/Anonymous (nobody) Summary: Bad exceptions from pickle Initial Comment: Here is an annotated session that shows some incorrect exceptions raised with bad input. I'm assuming that it is intended that when attempting to load a pickle file, we should only have to check for UnpicklingError, not for several other possible exceptions. >>> import sys >>> sys.version '2.2 (#1, Dec 26 2001, 16:14:13) \n[GCC 2.96 20000731 (Mandrake Linux 8.1 2.96-0.62mdk)]' Problem 1: Attempting to load an empty file produces an EOFError exception, but should probably produce an UnpicklingError exception. This happens with both pickle and cPickle. >>> import cPickle as pickle >>> import pickle as pk >>> f=open("/dev/null") # Empty file >>> ff=StringIO("asdfasdfasdfasdfasdfasdf") # Garbage file >>> pickle.load(f) Traceback (most recent call last): File "", line 1, in ? EOFError >>> pk.load(f) Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.2/pickle.py", line 977, in load return Unpickler(file).load() File "/usr/lib/python2.2/pickle.py", line 592, in load dispatch[key](self) File "/usr/lib/python2.2/pickle.py", line 606, in load_eof raise EOFError EOFError Problem 2: With cPickle, loading a garbage file produced and IndexError, not an Unpickling error. >>> pk.load(ff) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.2/pickle.py", line 977, in load return Unpickler(file).load() File "/usr/lib/python2.2/pickle.py", line 592, in load dispatch[key](self) File "/usr/lib/python2.2/pickle.py", line 746, in load_dict k = self.marker() File "/usr/lib/python2.2/pickle.py", line 600, in marker while stack[k] is not mark: k = k-1 IndexError: list index out of range ---------------------------------------------------------------------- Comment By: Thomas W. Christopger (tchristopher) Date: 2002-02-26 07:59 Message: Logged In: YES user_id=64169 Actually, I want EOFError on the file-like object returning ''; the empty string IS an end of file indication. That's what I'm looking for when there are a varying number of objects, and yes, I use it a lot. So please DO NOT make end of file return an UnpicklingError. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-01-16 23:41 Message: Logged In: YES user_id=21627 The documentation of UnpicklingError says Note that other exceptions may also be raised during unpickling, including (but not necessarily limited to) \exception{AttributeError} and \exception{ImportError}. So I'm not so sure that there is a bug. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=504723&group_id=5470 From noreply@sourceforge.net Tue Feb 26 16:33:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 08:33:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-523020 ] pickle/cPickle Inconsistent EOF handling Message-ID: Bugs item #523020, was opened at 2002-02-26 08:33 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523020&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Thomas W. Christopger (tchristopher) Assigned to: Nobody/Anonymous (nobody) Summary: pickle/cPickle Inconsistent EOF handling Initial Comment: cPickle.Unpickler(f).load() and cPickle.load() do not handle EOF the way pickle.Unpickler(f).load(f), cPickle.loads(s) and pickle.loads(s) do. The first two give cPickle.UnpicklingError: invalid load key, ' '. on EOF. The actual message text is "invalid load key, '\000'." The remaining give EOFError (The EOFError from all of them is what I need.) Observe: >>> pickle.loads('') Traceback (most recent call last): File "", line 1, in ? File "C:\Python22\lib\pickle.py", line 981, in loads return Unpickler(file).load() File "C:\Python22\lib\pickle.py", line 592, in load dispatch[key](self) File "C:\Python22\lib\pickle.py", line 606, in load_eof raise EOFError EOFError >>> cPickle.loads('') Traceback (most recent call last): File "", line 1, in ? EOFError >>> class C: ... def read(self,n): return '' ... def readline(self): return '' ... >>> p=pickle.Unpickler(C()) >>> p.load() Traceback (most recent call last): File "", line 1, in ? 
File "C:\Python22\lib\pickle.py", line 592, in load dispatch[key](self) File "C:\Python22\lib\pickle.py", line 606, in load_eof raise EOFError EOFError >>> pickle.load(C()) Traceback (most recent call last): File "", line 1, in ? File "C:\Python22\lib\pickle.py", line 977, in load return Unpickler(file).load() File "C:\Python22\lib\pickle.py", line 592, in load dispatch[key](self) File "C:\Python22\lib\pickle.py", line 606, in load_eof raise EOFError EOFError >>> p=cPickle.Unpickler(C()) >>> p.load() Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, ' '. >>> cPickle.load(C()) Traceback (most recent call last): File "", line 1, in ? cPickle.UnpicklingError: invalid load key, ' '. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523020&group_id=5470 From noreply@sourceforge.net Tue Feb 26 17:14:31 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 09:14:31 -0800 Subject: [Python-bugs-list] [ python-Bugs-523041 ] Robotparser incorrectly applies regex Message-ID: Bugs item #523041, was opened at 2002-02-26 09:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Costas Malamas (cmalamas) Assigned to: Nobody/Anonymous (nobody) Summary: Robotparser incorrectly applies regex Initial Comment: Robotparser uses re to evaluate the Allow/Disallow directives: nowhere in the RFC is it specified that these directives can be regular expressions. As a result, directives such as the following are mis- interpreted: User-Agent: * Disallow: /. The directive (which is actually syntactically incorrect according to the RFC) denies access to the root directory, but not the entire site; it should pass robotparser but it fails (e.g. http://www.pbs.org/robots.txt) >From the draft RFC (http://www.robotstxt.org/wc/norobots.html): "The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that starts with this value will not be retrieved. For example, Disallow: /help disallows both /help.html" Also the final RFC excludes * as valid in the path directive (http://www.robotstxt.org/wc/norobots- rfc.html). Suggested fix (also fixes bug #522898): robotparser.RuleLine.applies_to becomes: def applies_to(self, filename): if not self.path: self.allowance = 1 return self.path=="*" or self.path.find (filename) == 0 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 From noreply@sourceforge.net Tue Feb 26 20:07:59 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 12:07:59 -0800 Subject: [Python-bugs-list] [ python-Bugs-523117 ] ref.ps and ref.pdf formatting Message-ID: Bugs item #523117, was opened at 2002-02-26 12:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523117&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: David R Young (youngdr) Assigned to: Fred L. Drake, Jr. 
(fdrake) Summary: ref.ps and ref.pdf formatting Initial Comment: I have noticed what seems to be a formatting problem in both ref.ps (when viewed with GSview 4.1) and ref.pdf (when viewed with Acrobat 5.0.5). One example is on page 33 in section 5.3.4. The text of the modified BNF grammar notation for a rule seems to be much too long to fit on one line. Since it is not wrapped, a lot of the rule is not printed. They are the files from pdf-letter-2.2.zip and postscript-letter-2.2.zip. They are from 2001/December/21. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523117&group_id=5470 From noreply@sourceforge.net Tue Feb 26 23:04:18 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 15:04:18 -0800 Subject: [Python-bugs-list] [ python-Bugs-522393 ] Doesn't build on SGI Message-ID: Bugs item #522393, was opened at 2002-02-25 03:07 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 Category: Build Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Jack Jansen (jackjansen) >Assigned to: Guido van Rossum (gvanrossum) Summary: Doesn't build on SGI Initial Comment: On the SGI I can't build the current 2.2.1 from CVS. I get an undefined error on pthread_detach in the link step for python: ld32: ERROR 33: Unresolved text symbol "pthread_detach" -- 1st referenced by libpython2.2.a(thread.o). ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-02-26 15:04 Message: Logged In: YES user_id=45365 Guido, I'm assigning this to you as 90% of the checkins relating to pthreads are yours. I've attached a patch to configure.in which not only tests availability of pthread_create without special options but also of pthread_detach. If you think has a good chance of being safe for other OSes too please let me know. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 05:55 Message: Logged In: YES user_id=6656 Oh, the joy of unix. Special case the snot out of SGI in configure.in? ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 05:24 Message: Logged In: YES user_id=45365 Ouch! You are right: the trunk also doesn't build, and probably 2.2 doesn't build either. I've never checked this, because I always build --without-thread on SGI. I've found the problem: libc contains a partial implementation of pthreads, which does include pthread_create but not pthread_detach. For the full implementation you need to add -lpthread to your link step. But the autoconf test tests only for pthread_create(), so it thinks no extra link options are needed. I think we should reassign this to a pthread guru, but I'm not sure who qualifies. Simply adding a pthread_detach() call to the autotest may be worse, if I read thread_pthread.h correctly thread_detach() isn't defined in all flavors of pthreads. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-25 04:35 Message: Logged In: YES user_id=6656 OK, this is odd. Does the trunk build? Did 2.2 build? I can't easily find any branch changes that would account for this. I haven't looked very hard yet. Will do so later. 
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522393&group_id=5470 From noreply@sourceforge.net Tue Feb 26 23:30:15 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 15:30:15 -0800 Subject: [Python-bugs-list] [ python-Bugs-523195 ] SMTPLIB does not support " Message-ID: Bugs item #523195, was opened at 2002-02-26 15:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523195&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Paul F. Dubois (dubois) Assigned to: Nobody/Anonymous (nobody) Summary: SMTPLIB does not support " Initial Comment: I was trying to use smtplib to send mail through a server. This script had worked before but I had to move it to a new environment. I could not connect to the machine "smtp.something" (name changed to protect the innocent). I wrote to a guru person who told me: "The name smtp.something has an MX record in our external DNS that should cause the email to go first through our incoming mail gateways... if you configured your software with smtp.something and it didn't work, that's saying that it does not use the MX record (which it should...). In that case you can point directly to the incoming mail gateway, smtp-in.something" and indeed this change made it work. I don't know anything about "MX records" but evidently smtplib doesn't support them and somebody thinks it should. I'm a fish out of water on this and don't know anything I didn't just say. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523195&group_id=5470 From noreply@sourceforge.net Wed Feb 27 01:46:33 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Tue, 26 Feb 2002 17:46:33 -0800 Subject: [Python-bugs-list] [ python-Bugs-523230 ] socket.gethostbyaddrors Message-ID: Bugs item #523230, was opened at 2002-02-26 17:46 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523230&group_id=5470 Category: Python Library Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Tim Sharpe (beaststwo) Assigned to: Nobody/Anonymous (nobody) Summary: socket.gethostbyaddrors Initial Comment: When socket.gethostbyaddr() is given an IP address for which no DNS name exists (or if DNS fails to respond for some reason), the function abends with a "socket.error". It seems to me that if the function is provided with a hostname that can't be resolved, the following behavior would be more friendly: -If the function can parse the provided hostname as a validly-formatted IP address, the function could provide the IP address for the (hostname, aliaslist, ipaddrlist) result. Then the user's program can continue on to find out if the IP address is valid or not. At least it's a chance to continue execution. -If the function is cannot be resolved and cannot be parsed and identified as an IP address, then throw an exception that specifically identifies this case vice a generic "socket.error". I haven't managed to write an exception statement that works with "socket.error", although I've successfully used other exceptions. Even if the "powers that be" feel all hosts should have a DNS address, library functions shouldn't fail just because system owners aren't willing to provide names to all hosts. Thanks... 
Tim Sharpe ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523230&group_id=5470 From noreply@sourceforge.net Wed Feb 27 09:05:54 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 01:05:54 -0800 Subject: [Python-bugs-list] [ python-Bugs-523301 ] ConfigParser.write(): linebreak handling Message-ID: Bugs item #523301, was opened at 2002-02-27 01:05 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523301&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Matthias Rahlf (rahlf) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser.write(): linebreak handling Initial Comment: ConfigParser.read() accepts line rfc822-like line continuations: [xxx] line: this line is longer than my editor likes it ConfigParser.write() does not handle these linebreaks and produces: [xxx] line: this line is longer than my editor likes it which can not be read by ConfigParser.read(). This can be fixed easily by adding a "value.replace('\n', '\n\t')" in ConfigParser.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523301&group_id=5470 From noreply@sourceforge.net Wed Feb 27 14:11:20 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 06:11:20 -0800 Subject: [Python-bugs-list] [ python-Bugs-523041 ] Robotparser incorrectly applies regex Message-ID: Bugs item #523041, was opened at 2002-02-26 09:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Submitted By: Costas Malamas (cmalamas) Assigned to: Nobody/Anonymous (nobody) Summary: Robotparser incorrectly applies regex Initial Comment: Robotparser uses re to evaluate the Allow/Disallow directives: nowhere in the RFC is it specified that these directives can be regular expressions. As a result, directives such as the following are mis- interpreted: User-Agent: * Disallow: /. The directive (which is actually syntactically incorrect according to the RFC) denies access to the root directory, but not the entire site; it should pass robotparser but it fails (e.g. http://www.pbs.org/robots.txt) >From the draft RFC (http://www.robotstxt.org/wc/norobots.html): "The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that starts with this value will not be retrieved. For example, Disallow: /help disallows both /help.html" Also the final RFC excludes * as valid in the path directive (http://www.robotstxt.org/wc/norobots- rfc.html). Suggested fix (also fixes bug #522898): robotparser.RuleLine.applies_to becomes: def applies_to(self, filename): if not self.path: self.allowance = 1 return self.path=="*" or self.path.find (filename) == 0 ---------------------------------------------------------------------- Comment By: Bastian Kleineidam (calvin) Date: 2002-02-27 06:11 Message: Logged In: YES user_id=9205 Patch is not good: >>> print RuleLine("/tmp", 0).applies_to("/") 1 >>> This would apply the filename "/" to rule "Disallow: /tmp". 
I think it should be: return self.path=="*" or filename.startswith(self.path) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 From noreply@sourceforge.net Wed Feb 27 15:00:38 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 07:00:38 -0800 Subject: [Python-bugs-list] [ python-Bugs-523421 ] shelve update fails on "large" entry Message-ID: Bugs item #523421, was opened at 2002-02-27 07:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523421&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: j vickroy (jvickroy) Assigned to: Nobody/Anonymous (nobody) Summary: shelve update fails on "large" entry Initial Comment: Attached is a Python script that demonstrates a possible bug when using the shelve module for: Python 2.2 MS Windows 2000 and 98 For 10k, shelve entries, the script works as expected, but for 15k entries, an exception is raised after "some" number of updates. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523421&group_id=5470 From noreply@sourceforge.net Wed Feb 27 15:03:48 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 07:03:48 -0800 Subject: [Python-bugs-list] [ python-Bugs-523425 ] shelve update fails on "large" entry Message-ID: Bugs item #523425, was opened at 2002-02-27 07:03 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523425&group_id=5470 Category: Python Library Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: j vickroy (jvickroy) Assigned to: Nobody/Anonymous (nobody) Summary: shelve update fails on "large" entry Initial Comment: Below is a Python script that demonstrates a possible bug when using the shelve module for: Python 2.2 MS Windows 2000 and 98 For 10k, shelve entries, the script works as expected, but for 15k entries, an exception is raised after "some" number of updates. I first tried to attach the script but received an error. # begin script """ Demonstration of possible update bug using shelve module for: Python 2.2 MS Windows 2000 and 98 """ __author__ = 'jim.vickroy@noaa.gov' import shelve def keys(): return [str(i) for i in range(100)] def archive(): return shelve.open('d:/logs/test') ##note = 'x'*10000 # this works note = 'x'*15000 # this fails with the following exception """ Traceback (most recent call last): File "C:\PYTHON22\lib\site- packages\Pythonwin\pywin\framework\scriptutils.py", line 301, in RunScript exec codeObject in __main__.__dict__ File "D:\py_trials\shelve_test.py", line 42, in ? 
File "D:\py_trials\shelve_test.py", line 23, in update db.close() File "C:\PYTHON22\lib\shelve.py", line 77, in __setitem__ self.dict[key] = f.getvalue() error: (0, 'Error') """ def update(): db = archive() for this in keys(): if db.has_key(this): entry = db[this] entry.append(note) else: entry = [note] db[this] = entry db.close() def validate(): db = archive() actual_keys = db.keys() expected_keys = keys() assert len(actual_keys) == len(expected_keys), \ 'expected %s -- got %s' % (len(expected_keys), len(actual_keys)) for this in keys(): entry = db[this] assert len(entry) == nbr_of_updates, \ 'expected %s -- got %s' % (nbr_of_updates, len(entry)) db.close() nbr_of_updates = 10 for i in range(nbr_of_updates): update() validate() # end script ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523425&group_id=5470 From noreply@sourceforge.net Wed Feb 27 16:42:40 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 08:42:40 -0800 Subject: [Python-bugs-list] [ python-Bugs-523421 ] shelve update fails on "large" entry Message-ID: Bugs item #523421, was opened at 2002-02-27 07:00 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523421&group_id=5470 Category: Python Library Group: Python 2.2 >Status: Closed >Resolution: Duplicate Priority: 5 Submitted By: j vickroy (jvickroy) Assigned to: Nobody/Anonymous (nobody) >Summary: shelve update fails on "large" entry Initial Comment: Attached is a Python script that demonstrates a possible bug when using the shelve module for: Python 2.2 MS Windows 2000 and 98 For 10k, shelve entries, the script works as expected, but for 15k entries, an exception is raised after "some" number of updates. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-27 08:42 Message: Logged In: YES user_id=21627 Duplicate of #523425 ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523421&group_id=5470 From noreply@sourceforge.net Wed Feb 27 16:49:16 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 08:49:16 -0800 Subject: [Python-bugs-list] [ python-Bugs-523195 ] SMTPLIB does not support " Message-ID: Bugs item #523195, was opened at 2002-02-26 15:30 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523195&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Invalid Priority: 5 Submitted By: Paul F. Dubois (dubois) Assigned to: Nobody/Anonymous (nobody) >Summary: SMTPLIB does not support " Initial Comment: I was trying to use smtplib to send mail through a server. This script had worked before but I had to move it to a new environment. I could not connect to the machine "smtp.something" (name changed to protect the innocent). I wrote to a guru person who told me: "The name smtp.something has an MX record in our external DNS that should cause the email to go first through our incoming mail gateways... if you configured your software with smtp.something and it didn't work, that's saying that it does not use the MX record (which it should...). In that case you can point directly to the incoming mail gateway, smtp-in.something" and indeed this change made it work. 
I don't know anything about "MX records" but evidently smtplib doesn't support them and somebody thinks it should. I'm a fish out of water on this and don't know anything I didn't just say. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-27 08:49 Message: Logged In: YES user_id=21627 No, smtplib should not support MX records; the application should. If you pass a host name to smtplib, this is treated as the name of the host you want to contact to talk smtp to. This has nothing to do with the system where the email is directed to. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523195&group_id=5470 From noreply@sourceforge.net Wed Feb 27 16:58:51 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Wed, 27 Feb 2002 08:58:51 -0800 Subject: [Python-bugs-list] [ python-Bugs-523473 ] PyModule_AddObject doesn't set exception Message-ID: Bugs item #523473, was opened at 2002-02-27 08:58 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523473&group_id=5470 Category: None Group: None Status: Open Resolution: None Priority: 5 Submitted By: Yakov Markovitch (markovitch) Assigned to: Nobody/Anonymous (nobody) Summary: PyModule_AddObject doesn't set exception Initial Comment: PyModule_AddObject tests for its first parameter to be a module and third to be non-NULL and returns -1 if these are wrong, but doesn't set any exception. This behaviour is obviously wrong (at least for the case when first parameter is not a module - this must be a PyExc_TypeError). ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523473&group_id=5470 From noreply@sourceforge.net Thu Feb 28 09:18:06 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 01:18:06 -0800 Subject: [Python-bugs-list] [ python-Bugs-521526 ] Problems when python is renamed Message-ID: Bugs item #521526, was opened at 2002-02-22 17:11 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 Category: Distutils Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: R. Lindsay Todd (rltodd) Assigned to: M.-A. Lemburg (lemburg) Summary: Problems when python is renamed Initial Comment: I use a RedHat 7.2 system where Python 2.2 in an executable /usr/bin/python2. This causes some problems with using distutils. 1) If I say "python2 setup.py bdist_rpm" it creates an RPM spec file that uses plain "python" instead of "python2". Seems to me that this should make use of the path to the interpreter that is actually running. Fortunately this fails, so I can manually hack the spec file... 2) When including scripts to be interpreted, distutils looks for the leading #! and the word "python". My scripts have the word "python2", since I want to be able to test them directly. It seems like distutils could somehow handle versioned python's, like looking for a word that begins with "python", or perhaps some other magic sequence. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-28 09:18 Message: Logged In: YES user_id=38388 2) I checked in you first RE. 
(the shebang is magic enough :-) ---------------------------------------------------------------------- Comment By: R. Lindsay Todd (rltodd) Date: 2002-02-22 19:09 Message: Logged In: YES user_id=283405 1) Thanks. I missed that in the documentation (still do, after grepping it). I see it displayed with --help, though. Still, I found this behaviour a little surprising (that the default was not to use the python executable used to invoke setup.py. 2) r'^#!.*python[0-9.]*(\s+.*)?$' would be an improvement, and handle my case. Possibly even r'^#!.*python\S*(\s+.*)?$' Maybe there should instead be a magic comment of some sort to indicate that this is a python script that should have line 1 rewritten? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 17:38 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2002-02-22 17:37 Message: Logged In: YES user_id=38388 1) use "python setup.py bdist_rpm --python python2"; not a bug. 2) this would require extending the RE in build_scripts.py; however, I'm not sure what magic you have in mind here ? ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=521526&group_id=5470 From noreply@sourceforge.net Thu Feb 28 10:12:13 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 02:12:13 -0800 Subject: [Python-bugs-list] [ python-Bugs-523230 ] socket.gethostbyaddrors Message-ID: Bugs item #523230, was opened at 2002-02-27 02:46 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523230&group_id=5470 Category: Python Library Group: Python 2.1.1 >Status: Closed >Resolution: Rejected Priority: 5 Submitted By: Tim Sharpe (beaststwo) Assigned to: Nobody/Anonymous (nobody) Summary: socket.gethostbyaddrors Initial Comment: When socket.gethostbyaddr() is given an IP address for which no DNS name exists (or if DNS fails to respond for some reason), the function abends with a "socket.error". It seems to me that if the function is provided with a hostname that can't be resolved, the following behavior would be more friendly: -If the function can parse the provided hostname as a validly-formatted IP address, the function could provide the IP address for the (hostname, aliaslist, ipaddrlist) result. Then the user's program can continue on to find out if the IP address is valid or not. At least it's a chance to continue execution. -If the function is cannot be resolved and cannot be parsed and identified as an IP address, then throw an exception that specifically identifies this case vice a generic "socket.error". I haven't managed to write an exception statement that works with "socket.error", although I've successfully used other exceptions. Even if the "powers that be" feel all hosts should have a DNS address, library functions shouldn't fail just because system owners aren't willing to provide names to all hosts. Thanks... Tim Sharpe ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-28 11:12 Message: Logged In: YES user_id=21627 The library function you are calling is specifically designed to return the hostname. If it cannot do what it is designed to do, it must fail; failure is indicated in Python with an exception. If the application specifically asks for the host name, it should expect that to fail. If there is meaningful processing possible in case of failure, the application can catch the exception, and perform that processing. I cannot see the flaw in the function, and your proposed change is backwards-incompatible - so this change request (what it really is) is rejected. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523230&group_id=5470 From noreply@sourceforge.net Thu Feb 28 12:20:11 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 04:20:11 -0800 Subject: [Python-bugs-list] [ python-Bugs-523825 ] python-mode.el: honor-comment-indent bug Message-ID: Bugs item #523825, was opened at 2002-02-28 12:20 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523825&group_id=5470 Category: Demos and Tools Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 5 Submitted By: Christian Stork (cst) Assigned to: Nobody/Anonymous (nobody) Summary: python-mode.el: honor-comment-indent bug Initial Comment: Minor bug in the Python Emacs mode (simple patch provided): Choosing neither t nor nil for the custom variable py-honor-comment-indentation prevents proper indention after unindented code. Also it wrongly honors block-comments wrt indention. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523825&group_id=5470 From noreply@sourceforge.net Thu Feb 28 13:02:56 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 05:02:56 -0800 Subject: [Python-bugs-list] [ python-Bugs-523833 ] Inaccuracy in PyErr_SetFromErrno()'s doc Message-ID: Bugs item #523833, was opened at 2002-02-28 13:02 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523833&group_id=5470 Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Submitted By: Florent Rougon (frougon) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Inaccuracy in PyErr_SetFromErrno()'s doc Initial Comment: Python 2.1 and 2.2 documentations (file api/exceptionHandling.html) about PyErr_SetFromErrno say: [...] a wrapper function around a system call can write "return PyErr_SetFromErrno();" when the system call returns an error. but this function's prototype is: PyObject* PyErr_SetFromErrno(PyObject *type) therefore, I would prefer something like: [...] a wrapper function around a system call can write "return PyErr_SetFromErrno(type);" when the system call returns an error. 
(or PyErr_SetFromErrno(exc_type) or whatever you want provided the function is called with its argument) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523833&group_id=5470 From noreply@sourceforge.net Thu Feb 28 14:45:43 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 06:45:43 -0800 Subject: [Python-bugs-list] [ python-Bugs-523859 ] unexpected endless loop Message-ID: Bugs item #523859, was opened at 2002-02-28 16:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 Category: Parser/Compiler Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Anatoly Artamonov (arthem) Assigned to: Nobody/Anonymous (nobody) Summary: unexpected endless loop Initial Comment: This is the simplified example of code that causes endless loop. Save it in file (i.e. bad.py) and run "python bad.py" Tested on 2 pc with FreeBDS and Python 2.1.1 on Win32 with py 1.5.2 compiler says: SyntaxError: 'continue' not properly in loop (line 7) ################################################## k = 0 print "Start" while 1: k=k+1 if k>2: break try: if k>1: continue except: pass ########################EOF####################### removing of try/except makes execution normal. Though in example usage of try/except is unnecessary - this was taken from live application where it was necessary. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 From noreply@sourceforge.net Thu Feb 28 15:25:23 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 07:25:23 -0800 Subject: [Python-bugs-list] [ python-Bugs-523041 ] Robotparser incorrectly applies regex Message-ID: Bugs item #523041, was opened at 2002-02-26 18:14 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Costas Malamas (cmalamas) Assigned to: Nobody/Anonymous (nobody) Summary: Robotparser incorrectly applies regex Initial Comment: Robotparser uses re to evaluate the Allow/Disallow directives: nowhere in the RFC is it specified that these directives can be regular expressions. As a result, directives such as the following are mis- interpreted: User-Agent: * Disallow: /. The directive (which is actually syntactically incorrect according to the RFC) denies access to the root directory, but not the entire site; it should pass robotparser but it fails (e.g. http://www.pbs.org/robots.txt) >From the draft RFC (http://www.robotstxt.org/wc/norobots.html): "The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that starts with this value will not be retrieved. For example, Disallow: /help disallows both /help.html" Also the final RFC excludes * as valid in the path directive (http://www.robotstxt.org/wc/norobots- rfc.html). Suggested fix (also fixes bug #522898): robotparser.RuleLine.applies_to becomes: def applies_to(self, filename): if not self.path: self.allowance = 1 return self.path=="*" or self.path.find (filename) == 0 ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2002-02-28 16:25 Message: Logged In: YES user_id=21627 This has been fixed in robotparser.py 1.11. ---------------------------------------------------------------------- Comment By: Bastian Kleineidam (calvin) Date: 2002-02-27 15:11 Message: Logged In: YES user_id=9205 Patch is not good: >>> print RuleLine("/tmp", 0).applies_to("/") 1 >>> This would apply the filename "/" to rule "Disallow: /tmp". I think it should be: return self.path=="*" or filename.startswith(self.path) ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523041&group_id=5470 From noreply@sourceforge.net Thu Feb 28 15:30:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 07:30:04 -0800 Subject: [Python-bugs-list] [ python-Bugs-523859 ] unexpected endless loop Message-ID: Bugs item #523859, was opened at 2002-02-28 15:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 Category: Parser/Compiler Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Anatoly Artamonov (arthem) Assigned to: Nobody/Anonymous (nobody) Summary: unexpected endless loop Initial Comment: This is the simplified example of code that causes endless loop. Save it in file (i.e. bad.py) and run "python bad.py" Tested on 2 pc with FreeBDS and Python 2.1.1 on Win32 with py 1.5.2 compiler says: SyntaxError: 'continue' not properly in loop (line 7) ################################################## k = 0 print "Start" while 1: k=k+1 if k>2: break try: if k>1: continue except: pass ########################EOF####################### removing of try/except makes execution normal. Though in example usage of try/except is unnecessary - this was taken from live application where it was necessary. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-28 16:30 Message: Logged In: YES user_id=21627 I cannot reproduce this problem. Can anybody else? For easier extraction from the report, I attach the script in question as a.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 From noreply@sourceforge.net Thu Feb 28 15:32:04 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 07:32:04 -0800 Subject: [Python-bugs-list] [ python-Bugs-522898 ] Robotparser does not handle empty paths Message-ID: Bugs item #522898, was opened at 2002-02-26 11:40 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522898&group_id=5470 Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Costas Malamas (cmalamas) Assigned to: Nobody/Anonymous (nobody) Summary: Robotparser does not handle empty paths Initial Comment: The robotparser module handles incorrectly empty paths in the allow/disallow directives. According to: http://www.robotstxt.org/wc/norobots- rfc.html, the following rule should be a global *allow*: User-agent: * Disallow: My reading of the RFC is that an empty path is always a global allow (for both Allow and Disallow directives) so that the syntax is backwards compatible --there was no Allow directive in the original syntax. 
Suggested fix: robotparser.RuleLine.applies_to() becomes: def applies_to(self, filename): if not self.path: self.allowance = 1 return self.path=="*" or re.match(self.path, filename) ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-28 16:32 Message: Logged In: YES user_id=21627 This is fixed in robotparser.py 1.11. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=522898&group_id=5470 From noreply@sourceforge.net Thu Feb 28 15:40:02 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 07:40:02 -0800 Subject: [Python-bugs-list] [ python-Bugs-523859 ] unexpected endless loop Message-ID: Bugs item #523859, was opened at 2002-02-28 14:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 Category: Parser/Compiler Group: Python 2.1.1 Status: Open Resolution: None Priority: 5 Submitted By: Anatoly Artamonov (arthem) Assigned to: Nobody/Anonymous (nobody) Summary: unexpected endless loop Initial Comment: This is the simplified example of code that causes endless loop. Save it in file (i.e. bad.py) and run "python bad.py" Tested on 2 pc with FreeBDS and Python 2.1.1 on Win32 with py 1.5.2 compiler says: SyntaxError: 'continue' not properly in loop (line 7) ################################################## k = 0 print "Start" while 1: k=k+1 if k>2: break try: if k>1: continue except: pass ########################EOF####################### removing of try/except makes execution normal. Though in example usage of try/except is unnecessary - this was taken from live application where it was necessary. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2002-02-28 15:40 Message: Logged In: YES user_id=6656 Isn't this the problem that was fixed in version 2.277 of ceval.c? IOW, get 2.1.2 or 2.2. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-28 15:30 Message: Logged In: YES user_id=21627 I cannot reproduce this problem. Can anybody else? For easier extraction from the report, I attach the script in question as a.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 From noreply@sourceforge.net Thu Feb 28 16:15:13 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 08:15:13 -0800 Subject: [Python-bugs-list] [ python-Bugs-523859 ] unexpected endless loop Message-ID: Bugs item #523859, was opened at 2002-02-28 15:45 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 Category: Parser/Compiler Group: Python 2.1.1 >Status: Closed >Resolution: Fixed Priority: 5 Submitted By: Anatoly Artamonov (arthem) Assigned to: Nobody/Anonymous (nobody) Summary: unexpected endless loop Initial Comment: This is the simplified example of code that causes endless loop. Save it in file (i.e. 
bad.py) and run "python bad.py" Tested on 2 pc with FreeBDS and Python 2.1.1 on Win32 with py 1.5.2 compiler says: SyntaxError: 'continue' not properly in loop (line 7) ################################################## k = 0 print "Start" while 1: k=k+1 if k>2: break try: if k>1: continue except: pass ########################EOF####################### removing of try/except makes execution normal. Though in example usage of try/except is unnecessary - this was taken from live application where it was necessary. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2002-02-28 17:15 Message: Logged In: YES user_id=21627 That could well be, which would explain why I cannot reproduce it. Closing as fixed. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2002-02-28 16:40 Message: Logged In: YES user_id=6656 Isn't this the problem that was fixed in version 2.277 of ceval.c? IOW, get 2.1.2 or 2.2. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-02-28 16:30 Message: Logged In: YES user_id=21627 I cannot reproduce this problem. Can anybody else? For easier extraction from the report, I attach the script in question as a.py. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523859&group_id=5470 From noreply@sourceforge.net Thu Feb 28 18:59:20 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 10:59:20 -0800 Subject: [Python-bugs-list] [ python-Bugs-523995 ] PDB single steps list comprehensions Message-ID: Bugs item #523995, was opened at 2002-02-28 13:59 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523995&group_id=5470 Category: Demos and Tools Group: Python 2.2 Status: Open Resolution: None Priority: 5 Submitted By: Tom Emerson (tree) Assigned to: Nobody/Anonymous (nobody) Summary: PDB single steps list comprehensions Initial Comment: Within PDB you cannot 'n'ext over a list comprehension: instead you step through each iteration. In some cases this is quite painful, since the comprehension may have several hundred elements. For example, def doit(): foo = [ 2 * x for x in range(100) ] print foo requires you to either step through all 100 iterations of the comprehension, or set a temporary breakpoint on the line after the comprehension. My expectation would be that 'n'ext would execute the comprehension and move on to the next line. If this isn't a bug, and is working by design, then I'd like to suggest a command that allows you to fully execute comprehensions. I've seen this with versions 2.0 -- 2.2 on several platforms. 
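For illustration, a minimal sketch of the temporary-breakpoint workaround described above. The file name example.py and the line number refer only to this sketch, and "tbreak" is assumed to be available in this version of pdb (a plain "break" on the same line serves the same purpose for a one-off run):

import pdb

def doit():
    foo = [ 2 * x for x in range(100) ]   # 'n'ext currently stops on every iteration
    print foo                             # line 5 of this sketch

# At the (Pdb) prompt, instead of typing 'n' a hundred times:
#     tbreak example.py:5      one-shot breakpoint on the line after the comprehension
#     continue                 runs the whole comprehension, then stops once
pdb.run('doit()')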
---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=523995&group_id=5470 From noreply@sourceforge.net Thu Feb 28 19:29:49 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 11:29:49 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 21:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- >Comment By: Samuele Pedroni (pedronis) Date: 2002-02-28 20:29 Message: Logged In: YES user_id=61408 [Guido on python-dev] In particular, the fact that instances of classes with __slots__ appear picklable but lose all their slot values is a bug -- these should either not be picklable unless you add a __reduce__ method, or they should be pickled properly. ... I haven't made up my mind on how to fix this -- it would be nice if __slots__ would automatically be pickled, but it's tricky (although I think it's doable -- without ever referencing the __slots__ variable :-). [pedronis - my 2cts] unless you plan some low-level (non-python-level) solution, I think a main question is whether member and properties are distinguishable and maybe whether among members basic type members (file.softspace etc) and __slots__ members are distinguishable It would be somehow strange and redundant if properties value would be automatically pickled (I see them as computed value) In java (bean) properties are not pickled and even fields (= slots) can be marked as transient to avoid their serialization. In your picture it seems that all those things are not to be dinstinguished, so probably no automatic serialization, if there are members and given that actually files for example cannot be pickled, would be a reasonable solution. Otherwise (distinguishable case) other automatic approaches can make sense too. Just some - I hope valuable - input and my opinion. ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 16:03 Message: Logged In: YES user_id=459565 Oops. Please ignore the last paragraph of point #5. Samuele's __allslots__ is fine with regard to the example I presented. 
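For reference, a minimal workaround sketch for the test case above, using the pickling machinery as it exists in 2.2 (this is not the fix being discussed here, and the default reduction behaviour may well change): a class that uses __slots__ can hand its slot values to pickle explicitly via __getstate__/__setstate__, which the default reduce/build steps already consult.

import pickle

class Test(object):
    __slots__ = ['x']
    def __init__(self):
        self.x = 66666
    # Capture and restore the slot values by hand, since the default
    # reduction only records an instance __dict__, which __slots__ removes.
    def __getstate__(self):
        return {'x': self.x}
    def __setstate__(self, state):
        for name, value in state.items():
            setattr(self, name, value)

test = Test()
copy = pickle.loads(pickle.dumps(test))
print copy.x    # 66666, instead of the AttributeError reported above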
---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 15:52 Message: Logged In: YES user_id=459565 Samuele's sltattr.py is an interesting approach, though I am not entirely sure it is necessary or feasible sufficiently address the significant problems with slots via proxying __dict__ (see #5 below). Here is a mostly complete list of smaller changes that are somewhat orthogonal to how we address accesses to __dict__: 1) Flatten slot lists: Change obj.__class__.__slots__ to return an immutable list of all slot descriptors in the object (including all those of base classes). The motivation for this is similar in spirit to storing a flattened __mro__. The advantages of this change are: a) allows for fast and explicit object reflection that correctly finds all dict attributes, all slot attributes. b) allows reflection implementations (like vars (object) and pickle) to treat dict and slot attrs differently if we choose not to proxy __dict__. This has several advantages, as explained in change #2. Also importantly, this way it is not possible to "lose" descriptors permanently by deleting them from obj.__class__.__dict__. 2) Update reflection API even if we do not choose to proxy __dict__: Alter vars(object) to return a dictionary of all attributes, including both the contents of the non-proxied __dict__ and the valid attributes that result from iterating over __slots__ and evaluating the descriptors. The details of how this is best implemented depend on how we wish to define the behavior of modifying the resulting dictionary. It could be either: a) explicitly immutable, which involves creating proxy objects b) mutable, which involves copying c) undefined, which means implicitly immutable Aside from the questions over the nature of the return type, this implementation (coupled with #1) has distinct advantages. Specifically the native object.__dict__ has a very natural internal representation that pairs attribute names directly with values. In contrast, a fair amount of additional work is needed to extract the slots that store values and create a dictionary of their names and values. Other implementations will require a great deal more work since they would have to traverse though base classes to collecting slot descriptors. 3) Flatten slot inheritance: Update the new-style object inheritance mechanism to re-use slots of the same name, rather than creating a new slot and hiding the old. This makes the inheritance semantics of slots equivalent to those of normal instance attributes and avoids introducing an ad-hoc and obscure method of data hiding. 4) Update standard library to use new reflection API (and make them robust to properies at the same time) if we choose not to proxy __dict__. Virtually all of the changes are simple and involve updating these constructs: a) obj.__dict__ b) obj.__dict__[blah] c) obj.__dict__[blah] = x (What these will become depends on other factors, including the context and semantics of vars(obj).) Here is a fairly complete list of Python 2.2 modules that will need to be updated: copy, copy_reg, inspect, pickle, pydoc, cPickle, Bastion, codeop, dis, doctest, gettext, ihooks, imputil, knee, pdb, profile, rexec, rlcompleter, tempfile, unittest, xmllib, xmlrpclib 5) (NB: potentially controversial and not required) We could alter the descriptor protocol to make slots (and properties) more transparent when the values they reference do not exist. 
Here is an example to illustrate this: class A(object): foo = 1 class B(A): __slots__ = ('foo',) b = B() print b.foo > 1 or AttributeError? Currently an AttributeError is raised. However, it is a fairly easy change to make AttributeErrors signal that attribute resolution is to continue until either a valid descriptor is evaluated, an instance-attribute is found, or until the resolution fails after search the meta-type, the type and the instance dictionary. The problem illustrated by the above code also occurs when trying to create proxies for __dict__, if the proxy worked on the basis of the collected slot descriptors (__allslots__ in Samuele's example). I am prepared to submit patches to address each of these issues. However, I do want feedback beforehand, so that I do not waste time implementing something that will never be accepted. ---------------------------------------------------------------------- Comment By: Samuele Pedroni (pedronis) Date: 2002-02-22 02:33 Message: Logged In: YES user_id=61408 some slots more like attrs illustrative python code ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 18:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Thu Feb 28 19:36:36 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 11:36:36 -0800 Subject: [Python-bugs-list] [ python-Bugs-520644 ] __slots__ are not pickled Message-ID: Bugs item #520644, was opened at 2002-02-20 15:50 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Samuele Pedroni (pedronis) Assigned to: Nobody/Anonymous (nobody) Summary: __slots__ are not pickled Initial Comment: [Posted on behalf of Kevin Jacobs] I have been hacking on ways to make lighter-weight Python objects using the __slots__ mechanism that came with Python 2.2 new- style class. Everything has gone swimmingly until I noticed that slots do not get pickled/cPickled at all! 
Here is a simple test case: import pickle,cPickle class Test(object): __slots__ = ['x'] def __init__(self): self.x = 66666 test = Test() pickle_str = pickle.dumps( test ) cpickle_str = cPickle.dumps( test ) untest = pickle.loads( pickle_str ) untestc = cPickle.loads( cpickle_str ) print untest.x # raises AttributeError print untextc.x # raises AttributeError ... see http://aspn.activestate.com/ASPN/Mail/Message/python- dev/1031499 ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2002-02-28 14:36 Message: Logged In: YES user_id=6380 Good point. So maybe it should be up to the class to define how to pickle slots. An alternative idea could look at the type of descriptors; slots use a different type than properties. ---------------------------------------------------------------------- Comment By: Samuele Pedroni (pedronis) Date: 2002-02-28 14:29 Message: Logged In: YES user_id=61408 [Guido on python-dev] In particular, the fact that instances of classes with __slots__ appear picklable but lose all their slot values is a bug -- these should either not be picklable unless you add a __reduce__ method, or they should be pickled properly. ... I haven't made up my mind on how to fix this -- it would be nice if __slots__ would automatically be pickled, but it's tricky (although I think it's doable -- without ever referencing the __slots__ variable :-). [pedronis - my 2cts] unless you plan some low-level (non-python-level) solution, I think a main question is whether member and properties are distinguishable and maybe whether among members basic type members (file.softspace etc) and __slots__ members are distinguishable It would be somehow strange and redundant if properties value would be automatically pickled (I see them as computed value) In java (bean) properties are not pickled and even fields (= slots) can be marked as transient to avoid their serialization. In your picture it seems that all those things are not to be dinstinguished, so probably no automatic serialization, if there are members and given that actually files for example cannot be pickled, would be a reasonable solution. Otherwise (distinguishable case) other automatic approaches can make sense too. Just some - I hope valuable - input and my opinion. ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 10:03 Message: Logged In: YES user_id=459565 Oops. Please ignore the last paragraph of point #5. Samuele's __allslots__ is fine with regard to the example I presented. ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-22 09:52 Message: Logged In: YES user_id=459565 Samuele's sltattr.py is an interesting approach, though I am not entirely sure it is necessary or feasible sufficiently address the significant problems with slots via proxying __dict__ (see #5 below). Here is a mostly complete list of smaller changes that are somewhat orthogonal to how we address accesses to __dict__: 1) Flatten slot lists: Change obj.__class__.__slots__ to return an immutable list of all slot descriptors in the object (including all those of base classes). The motivation for this is similar in spirit to storing a flattened __mro__. The advantages of this change are: a) allows for fast and explicit object reflection that correctly finds all dict attributes, all slot attributes. 
b) allows reflection implementations (like vars(object) and pickle) to treat dict and slot attrs differently if we choose not to proxy __dict__. This has several advantages, as explained in change #2. Also importantly, this way it is not possible to "lose" descriptors permanently by deleting them from obj.__class__.__dict__.

2) Update reflection API even if we do not choose to proxy __dict__: Alter vars(object) to return a dictionary of all attributes, including both the contents of the non-proxied __dict__ and the valid attributes that result from iterating over __slots__ and evaluating the descriptors. The details of how this is best implemented depend on how we wish to define the behavior of modifying the resulting dictionary. It could be either:

a) explicitly immutable, which involves creating proxy objects
b) mutable, which involves copying
c) undefined, which means implicitly immutable

Aside from the questions over the nature of the return type, this implementation (coupled with #1) has distinct advantages. Specifically, the native object.__dict__ has a very natural internal representation that pairs attribute names directly with values. In contrast, a fair amount of additional work is needed to extract the slots that store values and create a dictionary of their names and values. Other implementations will require a great deal more work since they would have to traverse through base classes to collect slot descriptors.

3) Flatten slot inheritance: Update the new-style object inheritance mechanism to re-use slots of the same name, rather than creating a new slot and hiding the old. This makes the inheritance semantics of slots equivalent to those of normal instance attributes and avoids introducing an ad-hoc and obscure method of data hiding.

4) Update standard library to use the new reflection API (and make the modules robust to properties at the same time) if we choose not to proxy __dict__. Virtually all of the changes are simple and involve updating these constructs:

a) obj.__dict__
b) obj.__dict__[blah]
c) obj.__dict__[blah] = x

(What these will become depends on other factors, including the context and semantics of vars(obj).) Here is a fairly complete list of Python 2.2 modules that will need to be updated: copy, copy_reg, inspect, pickle, pydoc, cPickle, Bastion, codeop, dis, doctest, gettext, ihooks, imputil, knee, pdb, profile, rexec, rlcompleter, tempfile, unittest, xmllib, xmlrpclib

5) (NB: potentially controversial and not required) We could alter the descriptor protocol to make slots (and properties) more transparent when the values they reference do not exist. Here is an example to illustrate this:

class A(object): foo = 1
class B(A): __slots__ = ('foo',)
b = B()
print b.foo
> 1 or AttributeError?

Currently an AttributeError is raised. However, it is a fairly easy change to make AttributeErrors signal that attribute resolution is to continue until either a valid descriptor is evaluated, an instance-attribute is found, or the resolution fails after searching the meta-type, the type and the instance dictionary. The problem illustrated by the above code also occurs when trying to create proxies for __dict__, if the proxy worked on the basis of the collected slot descriptors (__allslots__ in Samuele's example). I am prepared to submit patches to address each of these issues. However, I do want feedback beforehand, so that I do not waste time implementing something that will never be accepted.
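[Illustrative aside, not part of Kevin's proposal] The per-class workaround mentioned in the comments above (hooking into the pickling machinery yourself rather than relying on automatic slot pickling) can be sketched as follows. It assumes Python 2.2's copy_reg-based __reduce__ support, which consults __getstate__/__setstate__ when a class defines them; the slot name 'x' and its value are simply taken from the test case above.

import pickle

class Test(object):
    __slots__ = ['x']
    def __init__(self):
        self.x = 66666
    # Gather the slot values by hand so pickle has some state to save.
    def __getstate__(self):
        return {'x': self.x}
    # Push the saved state back into the slots when unpickling.
    def __setstate__(self, state):
        for name, value in state.items():
            setattr(self, name, value)

test = Test()
restored = pickle.loads(pickle.dumps(test))
print restored.x    # 66666, instead of raising AttributeError

A more general __getstate__ could walk obj.__class__.__mro__ and collect every slot descriptor it finds, which is essentially the flattened slot list proposed in point 1; the hand-written dictionary keeps this sketch short.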
---------------------------------------------------------------------- Comment By: Samuele Pedroni (pedronis) Date: 2002-02-21 20:33 Message: Logged In: YES user_id=61408 some slots more like attrs illustrative python code ---------------------------------------------------------------------- Comment By: Kevin Jacobs (jacobs99) Date: 2002-02-21 12:51 Message: Logged In: YES user_id=459565 This bug raises questions about what a slot really is. After a fair amount of discussion on Python-dev, we have come up with basically two answers: 1) a slot is a struct-member that is part of the private implementation of an object. Slots should have their own semantics and not be expected to act like Python instance attributes. 2) slots should be treated just like dict instance attributes except they are allocated statically within the object itself, and require slightly different reflection methods. Under (1), this bug isn't really a bug. The class should implement a __reduce__ function or otherwise hook into the copy_reg system. Under (2), this bug is just the tip of the iceberg. There are about 8 other problems with the current slot implementation that need to be resolved before slots act almost identically to normal instance attributes. Thankfully, I am fairly confident that I can supply patches that can achieve this, though I am waiting for Guido to comment on this issue when he returns from his trip. ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=520644&group_id=5470 From noreply@sourceforge.net Thu Feb 28 21:53:42 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 13:53:42 -0800 Subject: [Python-bugs-list] [ python-Bugs-524062 ] USE_CACHE_ALIGNED still helpful? Message-ID: Bugs item #524062, was opened at 2002-02-28 16:53 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=524062&group_id=5470 Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Submitted By: Tim Peters (tim_one) Assigned to: Jack Jansen (jackjansen) Summary: USE_CACHE_ALIGNED still helpful? Initial Comment: Jack asked for this report: """ MacPython uses it. At the time it was put in it caused a 15% increase in Pystones because dictionary entries were aligned in cache lines. But: this was in the PPC 601 and 604 era, I must say that I've never tested whether it made any difference on G3 and G4. Put in a bug report in my name, and one day I'll get around to testing whether it still makes a difference on current hardware and rip it out if it doesn't. """ ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=524062&group_id=5470 From noreply@sourceforge.net Thu Feb 28 22:06:22 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 14:06:22 -0800 Subject: [Python-bugs-list] [ python-Bugs-524066 ] Override sys.stdout.write newstyle class Message-ID: Bugs item #524066, was opened at 2002-02-28 16:06 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=524066&group_id=5470 Category: Type/class unification Group: None Status: Open Resolution: None Priority: 5 Submitted By: Matthew Cowles (mdcowles) Assigned to: Nobody/Anonymous (nobody) Summary: Override sys.stdout.write newstyle class Initial Comment: Posted to python-help. 
Using Python 2.2, I'm trying to create a file-like class that writes to a file and to standard output. But I'm having a problem, since the 'print' statement doesn't seem to call the write method when I assign an object inheriting from 'file' to 'sys.stdout'. The following code shows the problem:

>>> import sys
>>> class test (file):
...     def write (_, s):
...         sys.__stdout__.write (s)
...
>>> log = test ('log', 'r')
>>> log.write ('hello\n')
hello
>>> sys.stdout = log
>>> print 'hello'
Traceback (most recent call last):
  File "", line 1, in ?
IOError: [Errno 9] Bad file descriptor

As you can see, I'm getting an error, since Python tries to write to a file opened in read-only mode, and so doesn't call my redefined 'write' method ... On the contrary, when using a standard class that only defines the 'write' method, I get the desired behaviour.

>>> import sys
>>> class test:
...     def write (_, s):
...         sys.__stdout__.write (s)
...
>>> log = test ()
>>> log.write ('hello\n')
hello
>>> sys.stdout = log
>>> print 'hello'
hello

---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=524066&group_id=5470 From noreply@sourceforge.net Thu Feb 28 23:25:12 2002 From: noreply@sourceforge.net (noreply@sourceforge.net) Date: Thu, 28 Feb 2002 15:25:12 -0800 Subject: [Python-bugs-list] [ python-Bugs-508779 ] Disable flat namespace on MacOS X Message-ID: Bugs item #508779, was opened at 2002-01-26 04:44 You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470 Category: Extension Modules Group: Python 2.2.1 candidate Status: Open Resolution: None Priority: 7 Submitted By: Manoj Plakal (terabaap) >Assigned to: Michael Hudson (mwh) Summary: Disable flat namespace on MacOS X Initial Comment: Python: v2.2, OS: MacOS X 10.1. MacOS X 10.1 introduced two forms of linking for loadable modules: flat namespace and two-level namespace. Python 2.2 is set up to use flat namespace by default on OS X for building extension modules. I believe that this is a problem since it introduces spurious run-time linking errors when loading 2 or more modules that happen to have common symbols. The Linux and Windows implementations do not allow symbols within modules to clash with each other. This behavior also goes against the expectations of C extension module writers. As a reproducible example, consider two dummy modules foo (foomodule.c) and bar (barmodule.c), both of which are built with a common file baz.c that contains some data variables. With the current Python 2.2 on OS X 10.1, only one of foo or bar can be imported, but NOT BOTH, into the same interpreter session. The files can be picked up from these URLs:

http://yumpee.org/python/foomodule.c
http://yumpee.org/python/barmodule.c
http://yumpee.org/python/baz.c
http://yumpee.org/python/setup.py

If I run "python setup.py build" with Python 2.2 (built from the 2.2 source tarball) and then import foo followed by bar, I get an ImportError: "Failure linking new module" (from Python/dynload_next.c). If I add a call to NSLinkEditError() to print a more detailed error message, I see that the problem is multiple definitions of the data variables in baz.c. The above example works fine with Python 2.1 on Red Hat Linux 7.2 and Python 2.2a4 on Win98.
If I then edit /usr/local/lib/python2.2/Makefile and change LDSHARED and BLDSHARED to not use flat_namespace:

$(CC) $(LDFLAGS) -bundle -bundle_loader /usr/local/bin/python2.2 -undefined error

then the problem is solved and I can load both foo and bar without problems. More info and discussion are available at these URLs (also search groups.google.com for "comp.lang.python OS X import bug"):

http://groups.google.com/groups?hl=en&threadm=j4sn8uu517.fsf%40informatik.hu-berlin.de&prev=/groups%3Fnum%3D25%26hl%3Den%26group%3Dcomp.lang.python%26start%3D75%26group%3Dcomp.lang.python
http://mail.python.org/pipermail/pythonmac-sig/2002-January/004977.html

It would be great to have this simple change applied to Python 2.2.1. Manoj terabaap@yumpee.org ---------------------------------------------------------------------- >Comment By: Jack Jansen (jackjansen) Date: 2002-03-01 00:25 Message: Logged In: YES user_id=45365 It turns out I was mistaken: BLDSHARED is used during the build, LDSHARED by distutils once Python is installed. Attached is a patch (relative to release22-maint) that uses two-level namespaces. It has no adverse effects on the core (i.e. make test still works fine). Manoj: if you could test that this not only has no adverse effects but also fixes your problem, that would be great. Please check out the release22-maint branch and apply this patch. Michael: I'm assigning this to you, feel free to check it in immediately or wait for feedback from Manoj (or ignore it completely if you don't like it:-). ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 17:19 Message: Logged In: YES user_id=45365 This solution still suffers from the problem we discussed on the Pythonmac-SIG, that BLDSHARED (or whatever replaces it) would need to have one value for -bundle_loader when building the standard extension modules and another during "normal operation"... ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2002-02-25 12:21 Message: Logged In: YES user_id=45365 I'm usurping this bug, but I'm not sure yet whether it's a good idea to fix this for 2.2.1, as it will break other extension modules that rely on the single flat namespace. ---------------------------------------------------------------------- Comment By: Manoj Plakal (terabaap) Date: 2002-01-26 05:25 Message: Logged In: YES user_id=150105 Another idea is to provide the option for flat or 2-level namespace when building extension modules on OS X, maybe as an extra flag passed to distutils.core.Extension or somewhere else ... ---------------------------------------------------------------------- You can respond by visiting: http://sourceforge.net/tracker/?func=detail&atid=105470&aid=508779&group_id=5470
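[Illustrative aside, not part of the thread] Manoj's closing suggestion already has a natural hook in distutils: the Extension class accepts an extra_link_args list of per-extension linker flags. The sketch below reuses the dummy foo module from the report and is only a sketch of where such an option would plug in; extra_link_args can append flags but cannot remove the -flat_namespace baked into the default LDSHARED, so the Makefile edit quoted in the initial report (or Jack's patch) is still what actually switches the build to a two-level namespace.

# setup.py -- sketch only: module name, sources and flags come from the dummy
# example in the report, and the flags are only meaningful once
# -flat_namespace has been removed from LDSHARED/BLDSHARED in the Makefile.
from distutils.core import setup, Extension

foo = Extension('foo',
                sources=['foomodule.c', 'baz.c'],
                # Ask the MacOS X linker to report undefined symbols as
                # errors, matching the "-undefined error" Makefile change.
                extra_link_args=['-undefined', 'error'])

setup(name='foo', version='1.0', ext_modules=[foo])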