From noreply at sourceforge.net Mon Jan 1 01:22:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 31 Dec 2006 16:22:20 -0800 Subject: [ python-Bugs-1625509 ] 'imp' documentation does not mention that lock is re-entrant Message-ID: Bugs item #1625509, was opened at 2006-12-31 18:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625509&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Dustin J. Mitchell (djmitche) Assigned to: Nobody/Anonymous (nobody) Summary: 'imp' documentation does not mention that lock is re-entrant Initial Comment: My reading of import.c shows that imp.{acquire,release}_lock operate in the fashion of a threading.RLock, rather than a threading.Lock. Of course, this makes sense for the use to which it's put, but it would be great to have that mentioned explicitly in the documentation. Suggestion (stolen from threading documentation): acquire_lock() Acquires the interpreter's import lock for the current thread. This lock should be used by import hooks to ensure thread-safety when importing modules. Once a thread has acquired the import lock, the same thread may acquire it again without blocking; the thread must release it once for each time it has acquired it. On platforms without threads, this function does nothing. New in version 2.3. 
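[Editor's illustration] The re-entrant semantics described in the report match threading.RLock rather than threading.Lock; since the import lock itself has no convenient Python-level demonstration here, the following sketch shows the same acquire/release contract using an RLock directly:

```python
import threading

lock = threading.RLock()

# Once a thread has acquired an RLock it may acquire it again without
# blocking; it must release once for each acquire.
lock.acquire()
lock.acquire()              # re-entrant: same thread, no deadlock
lock.release()

# After a single release the lock is still held, so a second thread's
# non-blocking acquire fails.
results = []
other = threading.Thread(target=lambda: results.append(lock.acquire(False)))
other.start()
other.join()
assert results == [False]

lock.release()              # the final release actually frees the lock
assert lock.acquire(False)  # now any thread can take it
lock.release()
```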
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625509&group_id=5470 From noreply at sourceforge.net Mon Jan 1 08:19:56 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 31 Dec 2006 23:19:56 -0800 Subject: [ python-Bugs-1625576 ] add ability to specify name to os.fdopen Message-ID: Bugs item #1625576, was opened at 2007-01-01 07:19 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625576&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Private: No Submitted By: Mark Diekhans (diekhans) Assigned to: Nobody/Anonymous (nobody) Summary: add ability to specify name to os.fdopen Initial Comment: Please add an optional argument to os.fdopen() to specify the name field in the resulting file object. 
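[Editor's illustration] Pending such an argument, the requested behaviour can be approximated with a small wrapper; NamedFile and the sample name below are hypothetical, shown only to illustrate the proposal:

```python
import os

class NamedFile:
    """Hypothetical wrapper: behaves like the file object returned by
    os.fdopen() but reports a caller-supplied name instead of the default."""
    def __init__(self, fd, mode, name):
        self._file = os.fdopen(fd, mode)
        self.name = name                    # caller-chosen, more useful name
    def __getattr__(self, attr):
        return getattr(self._file, attr)    # delegate everything else

r, w = os.pipe()
f = NamedFile(r, "rb", "<pipe from worker>")
assert f.name == "<pipe from worker>"
f.close()
os.close(w)
```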
This would allow for a more useful name than: '...> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625576&group_id=5470 From noreply at sourceforge.net Tue Jan 2 04:20:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 1 Jan 2007 19:20:07 -0800 Subject: [ python-Bugs-1465643 ] test_logging hangs on cygwin Message-ID: <200701020320.l023K75F031518@sc8-sf-db2-new-b.sourceforge.net> Bugs item #1465643, was opened at 2006-04-06 03:44 Message generated for change (Comment added) made by sf-robot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1465643&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed Resolution: None Priority: 5 Private: No Submitted By: Miki Tebeka (tebeka) Assigned to: Nobody/Anonymous (nobody) Summary: test_logging hangs on cygwin Initial Comment: Python 2.5a1, CYGWIN make test test_logging hangs ---------------------------------------------------------------------- >Comment By: SourceForge Robot (sf-robot) Date: 2007-01-01 19:20 Message: Logged In: YES user_id=1312539 Originator: NO This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker). 
---------------------------------------------------------------------- Comment By: Miki Tebeka (tebeka) Date: 2006-12-18 08:02 Message: Logged In: YES user_id=358087 Originator: YES (Should have been "moved to") ---------------------------------------------------------------------- Comment By: Miki Tebeka (tebeka) Date: 2006-12-18 08:01 Message: Logged In: YES user_id=358087 Originator: YES I have no idea, move to Linux/Mac environment. ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-12-18 01:57 Message: Logged In: YES user_id=308438 Originator: NO Is this still a problem, now that 2.5 is out? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1465643&group_id=5470 From noreply at sourceforge.net Tue Jan 2 11:22:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 02:22:06 -0800 Subject: [ python-Bugs-1568240 ] Tix is not included in 2.5 for Windows Message-ID: Bugs item #1568240, was opened at 2006-09-30 12:19 Message generated for change (Comment added) made by tzot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: Christos Georgiou (tzot) Assigned to: Martin v. L?wis (loewis) Summary: Tix is not included in 2.5 for Windows Initial Comment: (I hope "Build" is more precise than "Extension Modules" and "Tkinter" for this specific bug.) 
At least the following files are missing from 2.5 for Windows: DLLs\tix8184.dll tcl\tix8184.lib tcl\tix8.1\* ---------------------------------------------------------------------- >Comment By: Christos Georgiou (tzot) Date: 2007-01-02 12:22 Message: Logged In: YES user_id=539787 Originator: YES Neal's message is this: http://mail.python.org/pipermail/python-dev/2006-December/070406.html and it refers to the 2.5.1 release, not prior to it. As you see, I refrained from both increasing the priority and assigning it to Neal, and actually just added a comment to the case with a related question, since I know you are the one responsible for the windows build and you already had assigned the bug to you. My adding this comment to the bug was nothing more or less than the action that felt appropriate, and still does feel appropriate to me (ie I didn't overstep any limits). The "we" was just all parties interested, and in this case, the ones I know are at least you (responsible for the windows build) and I (a user of Tix on windows). Happy new year, Martin! ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-12-30 00:26 Message: Logged In: YES user_id=21627 Originator: NO I haven't read Neal's message yet, but I wonder what he could do about it. I plan to fix this with 2.5.1, there is absolutely no way to fix this earlier. I'm not sure who "we" is who would like to bump the bug, and what precisely this bumping would do; tzot, please refrain from changing the priority to higher than 7. These priorities are reserved to the release manager. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2006-12-27 19:46 Message: Logged In: YES user_id=539787 Originator: YES Should we bump the bug up and/or assign it to Neal Norwitz as he requested on Python-Dev? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 From noreply at sourceforge.net Tue Jan 2 17:32:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 08:32:33 -0800 Subject: [ python-Bugs-1626300 ] 'Installing Python Modules' does not work for Windows Message-ID: Bugs item #1626300, was opened at 2007-01-02 11:32 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626300&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Christopher Lambacher (tautology) Assigned to: Nobody/Anonymous (nobody) Summary: 'Installing Python Modules' does not work for Windows Initial Comment: The instructions for installing 3rd party modules will not work in a default Windows install. The documentation (http://docs.python.org/inst/standard-install.html) says: """ As described in section 1.2, building and installing a module distribution using the Distutils is usually one simple command: python setup.py install On Unix, you'd run this command from a shell prompt; on Windows, you have to open a command prompt window (``DOS box'') and do it there; on Mac OS X, you open a Terminal window to get a shell prompt. """ Unfortunately the command 'python setup.py install' does not work because the python executable is not in the path in the default install. 'setup.py install' will work since .py files are associated with python.exe. 
A suggestion for new wording: """ As described in section 1.2, building and installing a module distribution using the Distutils is usually one simple command: python setup.py install On Unix, you'd run this command from a shell prompt; on Mac OS X, you open a Terminal window to get a shell prompt. On Windows, you have to open a command prompt window (``DOS box'') and modify the command to: setup.py install """ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626300&group_id=5470 From noreply at sourceforge.net Tue Jan 2 17:36:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 08:36:35 -0800 Subject: [ python-Bugs-1626300 ] 'Installing Python Modules' does not work for Windows Message-ID: Bugs item #1626300, was opened at 2007-01-02 11:32 Message generated for change (Comment added) made by tautology You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626300&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Christopher Lambacher (tautology) Assigned to: Nobody/Anonymous (nobody) Summary: 'Installing Python Modules' does not work for Windows Initial Comment: The instructions for installing 3rd party modules will not work in a default Windows install. 
The documentation (http://docs.python.org/inst/standard-install.html) says: """ As described in section 1.2, building and installing a module distribution using the Distutils is usually one simple command: python setup.py install On Unix, you'd run this command from a shell prompt; on Windows, you have to open a command prompt window (``DOS box'') and do it there; on Mac OS X, you open a Terminal window to get a shell prompt. """ Unfortunately the command 'python setup.py install' does not work because the python executable is not in the path in the default install. 'setup.py install' will work since .py files are associated with python.exe. A suggestion for new wording: """ As described in section 1.2, building and installing a module distribution using the Distutils is usually one simple command: python setup.py install On Unix, you'd run this command from a shell prompt; on Mac OS X, you open a Terminal window to get a shell prompt. On Windows, you have to open a command prompt window (``DOS box'') and modify the command to: setup.py install """ ---------------------------------------------------------------------- >Comment By: Christopher Lambacher (tautology) Date: 2007-01-02 11:36 Message: Logged In: YES user_id=122679 Originator: YES Might as well also deal with section 1.2 as well. http://docs.python.org/inst/trivial-install.html#new-standard says: """ Additionally, the distribution will contain a setup script setup.py, and a file named README.txt or possibly just README, which should explain that building and installing the module distribution is a simple matter of running python setup.py install If all these things are true, then you already know how to build and install the modules you've just downloaded: Run the command above. Unless you need to install things in a non-standard way or customize the build process, you don't really need this manual. Or rather, the above command is everything you need to get out of this manual. 
""" Could be changed to: """ Additionally, the distribution will contain a setup script setup.py, and a file named README.txt or possibly just README, which should explain that building and installing the module distribution is a simple matter of running python setup.py install On Windows the above command should be modified to: setup.py install If all these things are true, then you already know how to build and install the modules you've just downloaded: Run the command above. Unless you need to install things in a non-standard way or customize the build process, you don't really need this manual. Or rather, the above command is everything you need to get out of this manual. """ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626300&group_id=5470 From noreply at sourceforge.net Wed Jan 3 01:03:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 16:03:00 -0800 Subject: [ python-Bugs-1626545 ] Would you mind renaming object.h to pyobject.h? Message-ID: Bugs item #1626545, was opened at 2007-01-02 16:03 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anton Tropashko (atropashko) Assigned to: Nobody/Anonymous (nobody) Summary: Would you mind renaming object.h to pyobject.h? Initial Comment: Would be nice if you could change object.h to pyobject.h or something like that. object.h is a common name found in kjs and Qt :-( Thank you! 
The patch is against 2.4

--- Makefile.pre.in	2 Jan 2007 20:03:09 -0000	1.3
+++ Makefile.pre.in	2 Jan 2007 23:52:47 -0000
@@ -522,7 +522,7 @@
 Include/methodobject.h \
 Include/modsupport.h \
 Include/moduleobject.h \
- Include/object.h \
+ Include/pyobject.h \
 Include/objimpl.h \
 Include/patchlevel.h \
 Include/pydebug.h \
Index: configure
===================================================================
RCS file: /cvsroot/faultline/python/configure,v
retrieving revision 1.2
diff -d -u -r1.2 configure
--- configure	30 Dec 2006 02:55:53 -0000	1.2
+++ configure	2 Jan 2007 23:52:49 -0000
@@ -1,5 +1,5 @@
 #! /bin/sh
-# From configure.in Revision: 1.1.1.1 .
+# From configure.in Revision: 1.2 .
 # Guess values for system-dependent variables and create Makefiles.
 # Generated by GNU Autoconf 2.59 for python 2.4.
 #
@@ -274,7 +274,7 @@
 PACKAGE_STRING='python 2.4'
 PACKAGE_BUGREPORT='http://www.python.org/python-bugs'
-ac_unique_file="Include/object.h"
+ac_unique_file="Include/pyobject.h"
 # Factoring default headers for most tests.
 ac_includes_default="\
 #include
Index: configure.in
===================================================================
RCS file: /cvsroot/faultline/python/configure.in,v
retrieving revision 1.2
diff -d -u -r1.2 configure.in
--- configure.in	30 Dec 2006 02:55:53 -0000	1.2
+++ configure.in	2 Jan 2007 23:52:49 -0000
@@ -6,7 +6,7 @@
 AC_REVISION($Revision: 1.2 $)
 AC_PREREQ(2.53)
 AC_INIT(python, PYTHON_VERSION, http://www.python.org/python-bugs)
-AC_CONFIG_SRCDIR([Include/object.h])
+AC_CONFIG_SRCDIR([Include/pyobject.h])
 AC_CONFIG_HEADER(pyconfig.h)
 dnl This is for stuff that absolutely must end up in pyconfig.h.
Index: Include/Python.h
===================================================================
RCS file: /cvsroot/faultline/python/Include/Python.h,v
retrieving revision 1.1.1.1
diff -d -u -r1.1.1.1 Python.h
--- Include/Python.h	28 Dec 2006 18:35:20 -0000	1.1.1.1
+++ Include/Python.h	2 Jan 2007 23:52:51 -0000
@@ -73,7 +73,7 @@
 #endif
 #include "pymem.h"
-#include "object.h"
+#include "pyobject.h"
 #include "objimpl.h"
 #include "pydebug.h"
Index: Parser/tokenizer.h
===================================================================
RCS file: /cvsroot/faultline/python/Parser/tokenizer.h,v
retrieving revision 1.1.1.1
diff -d -u -r1.1.1.1 tokenizer.h
--- Parser/tokenizer.h	28 Dec 2006 18:35:31 -0000	1.1.1.1
+++ Parser/tokenizer.h	2 Jan 2007 23:52:54 -0000
@@ -4,7 +4,7 @@
 extern "C" {
 #endif
-#include "object.h"
+#include "pyobject.h"
 /* Tokenizer interface */

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470 From noreply at sourceforge.net Wed Jan 3 05:30:45 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 20:30:45 -0800 Subject: [ python-Feature Requests-415692 ] smarter temporary file object Message-ID: Feature Requests item #415692, was opened at 2001-04-12 10:37 Message generated for change (Comment added) made by djmitche You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: None Status: Closed Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: smarter temporary file object Initial Comment: Jim Fulton suggested the following: I wonder if it would be a good idea to have a new kind of temporary file that stored data in memory unless: - The data exceeds some size, or - Somebody asks for a fileno. Then the cgi module (and other apps) could use this thing in a uniform way. ---------------------------------------------------------------------- Comment By: Dustin J. Mitchell (djmitche) Date: 2007-01-02 22:30 Message: Logged In: YES user_id=7446 Originator: NO I have a potential implementation for this, intended to be included in Lib/tempfile.py. Because the issue is closed, I can't attach it. Let's see if posting to the issue will open that option up. Dustin ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-08-09 11:51 Message: Logged In: YES user_id=6380 Thank you. I've moved this feature request to PEP 42, "Feature Requests". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 From noreply at sourceforge.net Wed Jan 3 05:52:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 02 Jan 2007 20:52:10 -0800 Subject: [ python-Feature Requests-415692 ] smarter temporary file object Message-ID: Feature Requests item #415692, was opened at 2001-04-12 11:37 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: None >Status: Open Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: smarter temporary file object Initial Comment: Jim Fulton suggested the following: I wonder if it would be a good idea to have a new kind of temporary file that stored data in memory unless: - The data exceeds some size, or - Somebody asks for a fileno. Then the cgi module (and other apps) could use this thing in a uniform way. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-02 23:52 Message: Logged In: YES user_id=6380 Originator: YES I've reopened the issue for you. Do try to interest some other core developer in reviewing your code, or it will take a long time... Thanks for remembering! ---------------------------------------------------------------------- Comment By: Dustin J. Mitchell (djmitche) Date: 2007-01-02 23:30 Message: Logged In: YES user_id=7446 Originator: NO I have a potential implementation for this, intended to be included in Lib/tempfile.py. Because the issue is closed, I can't attach it. Let's see if posting to the issue will open that option up. Dustin ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-08-09 12:51 Message: Logged In: YES user_id=6380 Thank you. I've moved this feature request to PEP 42, "Feature Requests". 
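[Editor's illustration] The proposed behaviour can be sketched roughly as follows. This is a simplified illustration of the idea, not the patch attached to the tracker; the standard library later gained tempfile.SpooledTemporaryFile along these lines:

```python
import io
import tempfile

class SpooledFile:
    """Sketch of the idea: buffer data in memory until it exceeds
    max_size, or until someone asks for a real file descriptor."""
    def __init__(self, max_size=1024):
        self._max_size = max_size
        self._file = io.BytesIO()
        self.rolled = False

    def _rollover(self):
        # Copy the in-memory data into a real temporary file.
        if self.rolled:
            return
        real = tempfile.TemporaryFile()
        real.write(self._file.getvalue())
        real.seek(self._file.tell())
        self._file = real
        self.rolled = True

    def write(self, data):
        self._file.write(data)
        if not self.rolled and self._file.tell() > self._max_size:
            self._rollover()

    def fileno(self):
        self._rollover()        # a real fd is required, so spill to disk
        return self._file.fileno()

    def close(self):
        self._file.close()

f = SpooledFile(max_size=8)
f.write(b"tiny")
assert not f.rolled             # still held in memory
f.write(b" plus enough bytes to spill")
assert f.rolled                 # size limit exceeded: now a real file
f.close()
```

Asking for fileno() likewise forces the rollover, which is what lets consumers such as the cgi module treat the object uniformly.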
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 From noreply at sourceforge.net Wed Jan 3 11:47:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 02:47:10 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 12:47 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Nobody/Anonymous (nobody) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Wed Jan 3 12:44:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 03:44:03 -0800 Subject: [ python-Bugs-1622010 ] Tcl/Tk auto-expanding window Message-ID: Bugs item #1622010, was opened at 2006-12-25 16:10 Message generated for change (Settings changed) made by fmareyen You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1622010&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Tkinter Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Fabian_M (fmareyen) >Assigned to: Nobody/Anonymous (nobody) Summary: Tcl/Tk auto-expanding window Initial Comment: I've experienced an auto-expanding Tcl/Tk window (Windows NT):

import Tkinter
tk = Tkinter.Tk()
tk.state("zoomed") # Windows only
tk.resizable(False, False)
tk.mainloop()

As you take the window by cursor and move it slowly to the left, it expands automatically to the right. This effect doesn't exist vertically. When you use tk.state("zoomed") you needn't set tk.resizable, but this call remained in my program and caused the problem.

System information:
------------------
Windows NT
sys.api_version = 1012 #0x3f4
sys.dllhandle = 503316480 #0x1e0000
sys.getwindowsversion() -> (4, 0, 1381, 2, "Service Pack 1")
sys.hexversion = 33817328 #0x20402f0
sys.platform = "win32"
sys.version = "2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (I
sys.version_info = (2, 4, 2, 'final', 0)
sys.winver = "2.4"
_tkinter.TCL_VERSION = 8.4
_tkinter.TK_VERSION = 8.4

Thanks. 
Fabian Mareyen ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1622010&group_id=5470 From noreply at sourceforge.net Wed Jan 3 13:12:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 04:12:09 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 12:47 Message generated for change (Comment added) made by ygale You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Nobody/Anonymous (nobody) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). ---------------------------------------------------------------------- >Comment By: Yitz Gale (ygale) Date: 2007-01-03 14:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Wed Jan 3 15:59:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 06:59:47 -0800 Subject: [ python-Bugs-1568240 ] Tix is not included in 2.5 for Windows Message-ID: Bugs item #1568240, was opened at 2006-09-30 11:19 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: Christos Georgiou (tzot) Assigned to: Martin v. L?wis (loewis) Summary: Tix is not included in 2.5 for Windows Initial Comment: (I hope "Build" is more precise than "Extension Modules" and "Tkinter" for this specific bug.) At least the following files are missing from 2.5 for Windows: DLLs\tix8184.dll tcl\tix8184.lib tcl\tix8.1\* ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-03 15:59 Message: Logged In: YES user_id=21627 Originator: NO Ah, ok. No, assigning this report to Neal or bumping its priority should not be done. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2007-01-02 11:22 Message: Logged In: YES user_id=539787 Originator: YES Neal's message is this: http://mail.python.org/pipermail/python-dev/2006-December/070406.html and it refers to the 2.5.1 release, not prior to it. 
As you see, I refrained from both increasing the priority and assigning it to Neal, and actually just added a comment to the case with a related question, since I know you are the one responsible for the windows build and you already had assigned the bug to you. My adding this comment to the bug was nothing more or less than the action that felt appropriate, and still does feel appropriate to me (ie I didn't overstep any limits). The "we" was just all parties interested, and in this case, the ones I know are at least you (responsible for the windows build) and I (a user of Tix on windows). Happy new year, Martin! ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-12-29 23:26 Message: Logged In: YES user_id=21627 Originator: NO I haven't read Neal's message yet, but I wonder what he could do about it. I plan to fix this with 2.5.1, there is absolutely no way to fix this earlier. I'm not sure who "we" is who would like to bump the bug, and what precisely this bumping would do; tzot, please refrain from changing the priority to higher than 7. These priorities are reserved to the release manager. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2006-12-27 18:46 Message: Logged In: YES user_id=539787 Originator: YES Should we bump the bug up and/or assign it to Neal Norwitz as he requested on Python-Dev? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 From noreply at sourceforge.net Wed Jan 3 16:01:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 07:01:22 -0800 Subject: [ python-Bugs-1627036 ] website issue reporter down Message-ID: Bugs item #1627036, was opened at 2007-01-03 10:01 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627036&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: website issue reporter down Initial Comment: To request an update for python.org, the procedure seems to be to create a ticket via: http://wiki.python.org/moin/PythonWebsiteCreatingNewTickets which says that self registration is disabled, but sends you to: http://pydotorg.python.org/pydotorg/newticket which says that admin privs are required to create a new ticket. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627036&group_id=5470 From noreply at sourceforge.net Wed Jan 3 16:06:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 07:06:09 -0800 Subject: [ python-Bugs-1627039 ] mention side-lists from python-dev description Message-ID: Bugs item #1627039, was opened at 2007-01-03 10:06 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627039&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: mention side-lists from python-dev description Initial Comment: http://www.python.org/community/lists/ describes mailing lists for python, including python-dev. Change: """ Note: python-dev is for work on developing Python (fixing bugs and adding new features to Python itself); if you're having problems writing a Python program, please post to comp.lang.python. """ to """ Note: python-dev is for work on developing Python (fixing bugs and adding new features to Python itself); if you're having problems writing a Python program, please post to comp.lang.python. If you want to discuss larger changes, please use python-ideas instead. 
http://mail.python.org/mailman/listinfo/python-ideas """ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627039&group_id=5470 From noreply at sourceforge.net Wed Jan 3 17:01:39 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 08:01:39 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 11:47 Message generated for change (Settings changed) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None >Priority: 7 Private: No Submitted By: Yitz Gale (ygale) >Assigned to: Martin v. Löwis (loewis) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-03 13:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Wed Jan 3 17:06:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 08:06:31 -0800 Subject: [ python-Bugs-1627096 ] xml.dom.minidom parse bug Message-ID: Bugs item #1627096, was opened at 2007-01-03 17:06 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627096&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story.) The offending part is the one that goes: xmpPLUS='....' It triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". I dug a little further through a pdb session, but the bug seems to be located in C code. This is the very first time I report a bug; chances are I provide too much or too little information... To whoever it may concern, here is the invoking code: from xml.dom import minidom ... 
class xmp(dict): def __init__(self, inStream): xmldoc = minidom.parse(inStream) .... x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending c statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS' Its obvious the scanning went beyond the attribute. 
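The unpack failure reported above comes from the way expat (configured with a space as the namespace separator) joins the namespace URI, local name, and prefix into one space-separated string, which xml.dom.expatbuilder._parse_ns_name then splits apart. A simplified sketch of that splitting logic (the real function also interns the strings) shows why a name whose URI part contains literal spaces blows up:

```python
# Simplified sketch of xml.dom.expatbuilder._parse_ns_name: expat reports a
# namespaced name as "uri localname" or "uri localname prefix", so the
# splitting assumes the URI itself contains no spaces.
def parse_ns_name(name):
    parts = name.split(" ")
    if len(parts) == 3:
        uri, localname, prefix = parts
    else:
        # Raises "ValueError: too many values to unpack" when the
        # URI part of the joined name contained spaces.
        uri, localname = parts
        prefix = None
    return uri, localname, prefix

# A well-formed namespaced attribute unpacks cleanly:
print(parse_ns_name("http://ns.adobe.com/xap/1.0/PLUS/ CreditLineReq xmpPLUS"))

# The name expat returned in the report above, with spaces inside the URI part:
bad = ("XMP Photographic Licensing Universal System "
       "(xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS")
try:
    parse_ns_name(bad)
except ValueError as exc:
    print("ValueError:", exc)
```

A namespace name containing spaces is not a valid URI, so if the attached file really declares a space-free xmlns value, the reporter's diagnosis — that the scan ran past the closing quote — would explain the extra tokens.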
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627096&group_id=5470 From noreply at sourceforge.net Wed Jan 3 17:08:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 08:08:55 -0800 Subject: [ python-Bugs-1619130 ] 64-bit Universal Binary build broken Message-ID: Bugs item #1619130, was opened at 2006-12-20 00:22 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619130&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Macintosh Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Thomas Treadway (treadway) Assigned to: Jack Jansen (jackjansen) Summary: 64-bit Universal Binary build broken Initial Comment: Hi, I'm running into a problem building a 4-way universal binary of python. The following has cropped up on both python2.5 and python2.4.2. The configure goes OK, but the make bombs. [2244]$ ./configure --prefix=$VISITPATH/python OPT="-fast -Wall \ -Wstrict-prototypes -fno-common -fPIC \ -isysroot /Developer/SDKs/MacOSX10.4u.sdk \ -arch ppc -arch i386 -arch ppc64 -arch x86_64" \ LDFLAGS="-Wl,-syslibroot,/Developer/SDKs/MacOSX10.4u.sdk,\ -headerpad_max_install_names -arch ppc -arch i386 \ -arch ppc64 -arch x86_64" . . . [2245]$ make gcc -c -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -fast -Wall -Wstrict-prototypes -fno-common -fPIC -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -arch ppc64 -arch x86_64 -I. 
-I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c In file included from ./Include/Python.h:57In file included from ./Include/Python.h:57, from ./Modules/python.c:3: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." , from ./Modules/python.c:3: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." lipo: can't figure out the architecture type of: /var/tmp//ccL3Ewl4.out make: *** [Modules/python.o] Error 1 Commenting out the "#error" statement in pyport.h gets me a little further before getting: gcc -c -fno-strict-aliasing -Wno-long-double -no-cpp-precomp -mno-fused-madd -DNDEBUG -fast -Wall -Wstrict-prototypes -fno-common -fPIC -isysroot /Developer/SDKs/MacOSX10.4u.sdk -arch ppc -arch i386 -arch ppc64 -arch x86_64 -I. -I./Include -DPy_BUILD_CORE -o Python/mactoolboxglue.o Python/mactoolboxglue.c In file included from /Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/DriverServices.h:32, from /Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/CarbonCore.h:125, . . . from /Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/Carbon.framework/Headers/Carbon.h:20, from ./Include/pymactoolbox.h:10, from Python/mactoolboxglue.c:27: /Developer/SDKs/MacOSX10.4u.sdk/System/Library/Frameworks/CoreServices.framework/Frameworks/CarbonCore.framework/Headers/fp.h:1338: error: 'SIGDIGLEN' undeclared here (not in a function) lipo: can't figure out the architecture type of: /var/tmp//ccEYbpTz.out make: *** [Python/mactoolboxglue.o] Error 1 Seems Carbon doesn't support 64 bits! Is there a solution? trt -- Thomas R. 
Treadway Computer Scientist Lawrence Livermore Nat'l Lab 7000 East Avenue, L-159 Livermore, CA 94550-0611 ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-03 17:08 Message: Logged In: YES user_id=21627 Originator: NO You are right: four-way universal builds are not supported currently. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619130&group_id=5470 From noreply at sourceforge.net Wed Jan 3 17:33:38 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 08:33:38 -0800 Subject: [ python-Bugs-1519816 ] urllib2 proxy does not work in 2.4.3 Message-ID: Bugs item #1519816, was opened at 2006-07-10 04:29 Message generated for change (Comment added) made by lecaros You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michal Niklas (mniklas) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 proxy does not work in 2.4.3 Initial Comment: My python app had to retrieve some web pages and while our network environment requires proxy it uses urllib2 opener (source is in attachment). It worked very well on older Python interpreters: ActivePython 2.4.2 Build 248 (ActiveState Corp.) 
based on Python 2.4.2 (#67, Oct 30 2005, 16:11:18) [MSC v.1310 32 bit (Intel)] on win32 It works on linux with 2.3 and 2.4.1: Python 2.4.1 (#2, May 5 2005, 11:32:06) [GCC 3.3.5 (Debian 1:3.3.5-12)] on linux2 But it does not work with newest 2.4.3 on Linux: Python 2.4.3 (#1, Jul 10 2006, 09:57:52) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Desired result: isof-mark:~# python2.3 proxy_bug.py trying http://www.python.org ... OK. We have reply from http://www.python.org. Size: 13757 [b] design by pollenation Copyright © 1990-2006, Python Software Foundation
Legal Statements isof-mark:~# /usr/local/bin/python proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in ? get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "/usr/local/lib/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.4/urllib2.py", line 364, in open response = meth(req, response) File "/usr/local/lib/python2.4/urllib2.py", line 471, in http_response response = self.parent.error( File "/usr/local/lib/python2.4/urllib2.py", line 402, in error return self._call_chain(*args) File "/usr/local/lib/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/local/lib/python2.4/urllib2.py", line 480, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required I have reported it on the ActiveState bug list (http://bugs.activestate.com/show_bug.cgi?id=47018) since I first spotted this bug on their distribution, but it seems that the bug is in other distributions too. Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: José 
Lecaros Cisterna (lecaros) Date: 2007-01-03 13:33 Message: Logged In: YES user_id=1410109 Originator: NO I have the same issue on Windows XP, Python 2.4.3, but using the DOMAIN\username format ---------------------------------------------------------------------- Comment By: JerryKhan (jerrykhan) Date: 2006-11-28 12:17 Message: Logged In: YES user_id=867168 Originator: NO Hello, In my view, in a general manner there is something wrong in the urllib2 HTTP code, but this may depend on the environment (I am not an expert in urllib). Here are my tests, using Python 2.4.2 on Windows XP. These simple snippets failed with a 407 HTTP error: Example E1: import urllib2 as URL a=URL.urlopen("http://lan_apache_url") print a.read() OR example E2: import urllib2 as URL r=URL.Request("http://lan_apache_url") a=URL.urlopen(r) print a.read() But this succeeds with urllib, example E3: import urllib a=urllib.urlopen("http://lan_apache_url") print a.read() Notice that the code differences are minimal; E1 and E3 are close. Notice also that I'm trying to access a LAN Apache server which is not behind a proxy, and I don't want to go through any proxy (like the exclusion string in IExplorer). But I also found that if I try to access a protected link with HTTPS on the LAN, there is no problem. The issue is really in the HTTP interpreter or in the configuration of the URL opener. At the same time, some of my programs are able to access Internet servers using the current proxy server without any problem. For that, I use: import urllib2 as URL URL.install_opener(URL.build_opener( s.https_handler, s.proxy_auth_handler, s.cookie_handler)) Well, I developed a workaround in my programs to use urllib instead of urllib2 in the case where I try to access the LAN (in fact, when I don't want to configure the proxy server, or when the URL matches my own proxy exclusion list). I expect this will help Python urllib2 experts to find the issue. 
Jérôme Vacher, alias jerrykhan, the foolish dracomorpheus of the emerald dragon dynasty. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-21 03:59 Message: Logged In: YES user_id=226518 I have just installed a new virtual machine with Python 2.5b2 and my program works. It seems that only 2.4.3 is broken. Regards, Michal ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2006-07-20 14:09 Message: Logged In: YES user_id=261020 You're sure you didn't copy over the urllib2.py from 2.5b2 also? That might make the bug appear to go away, when really it's still there. The way to be sure is to try it on a different machine. Thanks for your report. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-19 07:12 Message: Logged In: YES user_id=226518 I have checked that the last version my script works with is 2.4.2 and copied that version's urllib2.py to the 2.4.3 Lib directory. It works again. The only change in urllib2.py is in retry_http_basic_auth(), line 723: 2.4.2 user,pw = self.passwd.find_user_password(realm, host) 2.4.3 user, pw = self.passwd.find_user_password(realm, req.get_full_url()) So "host" is replaced by "req.get_full_url()". Checked again with 2.5b2 and it works! Probably I destroyed my test environment and tested it the wrong way :( So the problem is only with 2.4.3. Previous versions and 2.5b work well. 
Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-13 06:09 Message: Logged In: YES user_id=226518 2.5b2 does not work any better: Python 2.5b2 (r25b2:50512, Jul 11 2006, 10:16:14) [MSC v.1310 32 bit (Intel)] on win32 Result is the same as in 2.5b1 :( ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-11 02:27 Message: Logged In: YES user_id=226518 Tried it with 2.5 beta 1 and it is not better :( c:\tools\pyscripts\scripts>c:\python25\python2.5 Python 2.5b1 (r25b1:47027, Jun 20 2006, 09:31:33) [MSC v.1310 32 bit (Intel)] on win32 c:\tools\pyscripts\scripts>c:\python25\python2.5 proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "c:\python25\lib\urllib2.py", line 121, in urlopen return _opener.open(url, data) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 412, in error result = self._call_chain(*args) File "c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 831, in http_error_407 authority, req, headers) File "c:\python25\lib\urllib2.py", line 795, in http_error_auth_reqed return self.retry_http_basic_auth(host, req, realm) File "c:\python25\lib\urllib2.py", line 805, in retry_http_basic_auth return self.parent.open(req) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 418, in error return self._call_chain(*args) File 
"c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 499, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-07-10 08:49 Message: Logged In: YES user_id=849994 Can you please try with 2.5b1? A lot of urllib2 related bugs have been fixed before this release. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-10 04:41 Message: Logged In: YES user_id=226518 Cannot add attachment via upload so I put it here: #!/usr/bin/python # -*- coding: cp1250 -*- import urllib import urllib2 def get_page(): url = 'http://www.python.org' print "trying %s ..." % (url) # Setup proxy & authentication proxy = "poczta.heuthes:8080" usr1 = "USER" pass1 = "PASSWD" proxy_handler = urllib2.ProxyHandler({"http" : "http:/ /" + proxy}) pass_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() pass_mgr.add_password(None, "http://" + proxy, usr1, pass1) pass_mgr.add_password(None, proxy, usr1, pass1) auth_handler = urllib2.HTTPBasicAuthHandler(pass_mgr) proxy_auth_handler = urllib2.ProxyBasicAuthHandler(pass_mgr) # Now build a new URL opener and install it opener = urllib2.build_opener(proxy_handler, proxy_auth_handler, auth_handler, urllib2.HTTPHandler) urllib2.install_opener(opener) request = urllib2.Request(url) f = urllib2.urlopen(request) data = f.read() print "OK. We have reply from %s.\nSize: %d [b]" % (url, len(data)) if len(data) < 400: print data else: print data[:200] print "..." 
print data[-200:] get_page() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 From noreply at sourceforge.net Wed Jan 3 19:04:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 10:04:00 -0800 Subject: [ python-Bugs-1627244 ] xml.dom.minidom parse bug Message-ID: Bugs item #1627244, was opened at 2007-01-03 19:04 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story.) The offending part is the one that goes: xmpPLUS='....' It triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". I dug a little further through a pdb session, but the bug seems to be located in C code. This is the very first time I report a bug; chances are I provide too much or too little information... To whoever it may concern, here is the invoking code: from xml.dom import minidom ... 
class xmp(dict): def __init__(self, inStream): xmldoc = minidom.parse(inStream) .... x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending c statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS' Its obvious the scanning went beyond the attribute. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470 From noreply at sourceforge.net Wed Jan 3 19:46:01 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 10:46:01 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 13:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Nobody/Anonymous (nobody) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code: --------------opttest.py---------- from optparse import OptionParser def process_options(): global options, args, parser parser = OptionParser() parser.add_option("--test", action="store_true") parser.add_option("-m", metavar="COMMENT", dest="comment", default=None) (options, args) = parser.parse_args() return process_options() print "comment (%r)" % options.comment --------------------- $ ./opttest.py -m --test comment ('--test') I was expecting this to give an error as "--test" is an option. But it looks like even C library's getopt() behaves similarly. It will be nice if optparse can report error in this case. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Wed Jan 3 20:37:42 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 11:37:42 -0800 Subject: [ python-Bugs-1519816 ] urllib2 proxy does not work in 2.4.3 Message-ID: Bugs item #1519816, was opened at 2006-07-10 09:29 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michal Niklas (mniklas) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 proxy does not work in 2.4.3 Initial Comment: My python app had to retrieve some web pages and while our network environment requires proxy it uses urllib2 opener (source is in attachment). It worked very well on older Python interpreters: ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#67, Oct 30 2005, 16:11:18) [MSC v.1310 32 bit (Intel)] on win32 It works on linux with 2.3 and 2.4.1: Python 2.4.1 (#2, May 5 2005, 11:32:06) [GCC 3.3.5 (Debian 1:3.3.5-12)] on linux2 But it does not work with newest 2.4.3 on Linux: Python 2.4.3 (#1, Jul 10 2006, 09:57:52) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Desired result: isof-mark:~# python2.3 proxy_bug.py trying http://www.python.org ... OK. We have reply from http://www.python.org. Size: 13757 [b] design by pollenation Copyright © 1990-2006, Python Software Foundation
Legal Statements isof-mark:~# /usr/local/bin/python proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in ? get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "/usr/local/lib/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.4/urllib2.py", line 364, in open response = meth(req, response) File "/usr/local/lib/python2.4/urllib2.py", line 471, in http_response response = self.parent.error( File "/usr/local/lib/python2.4/urllib2.py", line 402, in error return self._call_chain(*args) File "/usr/local/lib/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/local/lib/python2.4/urllib2.py", line 480, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required I have reported it on the ActiveState bug list (http://bugs.activestate.com/show_bug.cgi?id=47018) since I first spotted this bug on their distribution, but it seems that the bug is in other distributions too. Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2007-01-03 19:37 Message: Logged In: YES user_id=261020 Originator: NO lecaros and jerrykhan: Do you guys by any chance have a registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyOverride? I'm guessing this setting is causing urllib to avoid using your default proxy for hosts on your local network, thereby saving you the 407 (the 407 means your proxy is complaining that you've not succeeded in authenticating). If so, the difference between urllib and urllib2's behaviours does not imply a bug, but just that urllib2 is missing support for getting proxy overrides from the Windows registry. This could easily be added. ---------------------------------------------------------------------- Comment By: José 
Lecaros Cisterna (lecaros) Date: 2007-01-03 16:33 Message: Logged In: YES user_id=1410109 Originator: NO I have the same issue on Windows XP, Python 2.4.3, but using the DOMAIN\username format ---------------------------------------------------------------------- Comment By: JerryKhan (jerrykhan) Date: 2006-11-28 15:17 Message: Logged In: YES user_id=867168 Originator: NO Hello, In my view, in a general manner there is something wrong in the urllib2 HTTP code, but this may depend on the environment (I am not an expert in urllib). Here are my tests, using Python 2.4.2 on Windows XP. These simple snippets failed with a 407 HTTP error: Example E1: import urllib2 as URL a=URL.urlopen("http://lan_apache_url") print a.read() OR example E2: import urllib2 as URL r=URL.Request("http://lan_apache_url") a=URL.urlopen(r) print a.read() But this succeeds with urllib, example E3: import urllib a=urllib.urlopen("http://lan_apache_url") print a.read() Notice that the code differences are minimal; E1 and E3 are close. Notice also that I'm trying to access a LAN Apache server which is not behind a proxy, and I don't want to go through any proxy (like the exclusion string in IExplorer). But I also found that if I try to access a protected link with HTTPS on the LAN, there is no problem. The issue is really in the HTTP interpreter or in the configuration of the URL opener. At the same time, some of my programs are able to access Internet servers using the current proxy server without any problem. For that, I use: import urllib2 as URL URL.install_opener(URL.build_opener( s.https_handler, s.proxy_auth_handler, s.cookie_handler)) Well, I developed a workaround in my programs to use urllib instead of urllib2 in the case where I try to access the LAN (in fact, when I don't want to configure the proxy server, or when the URL matches my own proxy exclusion list). I expect this will help Python urllib2 experts to find the issue. 
Jérôme Vacher, alias jerrykhan, the foolish dracomorpheus of the emerald dragon dynasty. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-21 08:59 Message: Logged In: YES user_id=226518 I have just installed a new virtual machine with Python 2.5b2 and my program works. It seems that only 2.4.3 is broken. Regards, Michal ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2006-07-20 19:09 Message: Logged In: YES user_id=261020 You're sure you didn't copy over the urllib2.py from 2.5b2 also? That might make the bug appear to go away, when really it's still there. The way to be sure is to try it on a different machine. Thanks for your report. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-19 12:12 Message: Logged In: YES user_id=226518 I have checked that the last version my script works with is 2.4.2 and copied that version's urllib2.py to the 2.4.3 Lib directory. It works again. The only change in urllib2.py is in retry_http_basic_auth(), line 723: 2.4.2 user,pw = self.passwd.find_user_password(realm, host) 2.4.3 user, pw = self.passwd.find_user_password(realm, req.get_full_url()) So "host" is replaced by "req.get_full_url()". Checked again with 2.5b2 and it works! Probably I destroyed my test environment and tested it the wrong way :( So the problem is only with 2.4.3. Previous versions and 2.5b work well. 
Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-13 11:09 Message: Logged In: YES user_id=226518 2.5b2 does not work any better: Python 2.5b2 (r25b2:50512, Jul 11 2006, 10:16:14) [MSC v.1310 32 bit (Intel)] on win32 Result is the same as in 2.5b1 :( ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-11 07:27 Message: Logged In: YES user_id=226518 Tried it with 2.5 beta 1 and it is not better :( c:\tools\pyscripts\scripts>c:\python25\python2.5 Python 2.5b1 (r25b1:47027, Jun 20 2006, 09:31:33) [MSC v.1310 32 bit (Intel)] on win32 c:\tools\pyscripts\scripts>c:\python25\python2.5 proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "c:\python25\lib\urllib2.py", line 121, in urlopen return _opener.open(url, data) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 412, in error result = self._call_chain(*args) File "c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 831, in http_error_407 authority, req, headers) File "c:\python25\lib\urllib2.py", line 795, in http_error_auth_reqed return self.retry_http_basic_auth(host, req, realm) File "c:\python25\lib\urllib2.py", line 805, in retry_http_basic_auth return self.parent.open(req) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 418, in error return self._call_chain(*args) File 
"c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 499, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-07-10 13:49 Message: Logged In: YES user_id=849994 Can you please try with 2.5b1? A lot of urllib2 related bugs have been fixed before this release. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-10 09:41 Message: Logged In: YES user_id=226518 Cannot add attachment via upload so I put it here: #!/usr/bin/python # -*- coding: cp1250 -*- import urllib import urllib2 def get_page(): url = 'http://www.python.org' print "trying %s ..." % (url) # Setup proxy & authentication proxy = "poczta.heuthes:8080" usr1 = "USER" pass1 = "PASSWD" proxy_handler = urllib2.ProxyHandler({"http" : "http:/ /" + proxy}) pass_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() pass_mgr.add_password(None, "http://" + proxy, usr1, pass1) pass_mgr.add_password(None, proxy, usr1, pass1) auth_handler = urllib2.HTTPBasicAuthHandler(pass_mgr) proxy_auth_handler = urllib2.ProxyBasicAuthHandler(pass_mgr) # Now build a new URL opener and install it opener = urllib2.build_opener(proxy_handler, proxy_auth_handler, auth_handler, urllib2.HTTPHandler) urllib2.install_opener(opener) request = urllib2.Request(url) f = urllib2.urlopen(request) data = f.read() print "OK. We have reply from %s.\nSize: %d [b]" % (url, len(data)) if len(data) < 400: print data else: print data[:200] print "..." 
print data[-200:] get_page() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 From noreply at sourceforge.net Wed Jan 3 21:02:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 12:02:27 -0800 Subject: [ python-Bugs-1519816 ] urllib2 proxy does not work in 2.4.3 Message-ID: Bugs item #1519816, was opened at 2006-07-10 04:29 Message generated for change (Comment added) made by lecaros You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michal Niklas (mniklas) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 proxy does not work in 2.4.3 Initial Comment: My Python app has to retrieve some web pages, and since our network environment requires a proxy it uses a urllib2 opener (source is in attachment). It worked very well on older Python interpreters: ActivePython 2.4.2 Build 248 (ActiveState Corp.) based on Python 2.4.2 (#67, Oct 30 2005, 16:11:18) [MSC v.1310 32 bit (Intel)] on win32 It works on Linux with 2.3 and 2.4.1: Python 2.4.1 (#2, May 5 2005, 11:32:06) [GCC 3.3.5 (Debian 1:3.3.5-12)] on linux2 But it does not work with the newest 2.4.3 on Linux: Python 2.4.3 (#1, Jul 10 2006, 09:57:52) [GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2 Desired result: isof-mark:~# python2.3 proxy_bug.py trying http://www.python.org ... OK. We have reply from http://www.python.org. Size: 13757 [b] design by pollenation Copyright © 1990-2006, Python Software Foundation
Legal Statements isof-mark:~# /usr/local/bin/python proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in ? get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "/usr/local/lib/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/local/lib/python2.4/urllib2.py", line 364, in open response = meth(req, response) File "/usr/local/lib/python2.4/urllib2.py", line 471, in http_response response = self.parent.error( File "/usr/local/lib/python2.4/urllib2.py", line 402, in error return self._call_chain(*args) File "/usr/local/lib/python2.4/urllib2.py", line 337, in _call_chain result = func(*args) File "/usr/local/lib/python2.4/urllib2.py", line 480, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required I have reported it on the ActiveState bug list (http://bugs.activestate.com/show_bug.cgi?id=47018) since I first spotted this bug in their distribution, but it seems that the bug is in other distributions too. Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: José Lecaros Cisterna (lecaros) Date: 2007-01-03 17:02 Message: Logged In: YES user_id=1410109 Originator: NO I have that key set to , but I don't know what this means :) Use for local OR don't use for local. ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2007-01-03 16:37 Message: Logged In: YES user_id=261020 Originator: NO lecaros and jerrykhan: Do you guys by any chance have a registry key HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ProxyOverride? I'm guessing this setting is causing urllib to avoid using your default proxy for hosts on your local network, thereby saving you the 407 (the 407 means your proxy is complaining that you've not succeeded in authenticating). 
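A sketch of how checking that registry key could look. The helper below is illustrative only (it is not part of urllib or urllib2); it reads the per-user ProxyOverride value that Internet Explorer uses, via the standard winreg module, and returns an empty list when the key is absent or the interpreter is not on Windows:

```python
def get_proxy_override():
    """Return the list of hosts IE excludes from the proxy, or [] if unknown."""
    try:
        import winreg  # named _winreg on Python 2
    except ImportError:
        return []  # not running on Windows
    try:
        key = winreg.OpenKey(
            winreg.HKEY_CURRENT_USER,
            r"Software\Microsoft\Windows\CurrentVersion\Internet Settings")
        value, _type = winreg.QueryValueEx(key, "ProxyOverride")
    except OSError:
        return []  # key or value absent
    # Entries are semicolon-separated; the special entry "<local>" means
    # "bypass the proxy for all plain host names without a dot".
    return [h.strip() for h in value.split(";") if h.strip()]
```

An opener could consult such a list before routing a request through the configured proxy, which is the support jjlee notes urllib2 is missing.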
If so, the difference between urllib and urllib2's behaviours does not imply a bug, but just that urllib2 is missing support for getting proxy overrides from the Windows registry. This could easily be added. ---------------------------------------------------------------------- Comment By: José Lecaros Cisterna (lecaros) Date: 2007-01-03 13:33 Message: Logged In: YES user_id=1410109 Originator: NO I have the same issue on Windows XP, Python 2.4.3, but using the DOMAIN\username format ---------------------------------------------------------------------- Comment By: JerryKhan (jerrykhan) Date: 2006-11-28 12:17 Message: Logged In: YES user_id=867168 Originator: NO Hello, In my view there is, generally speaking, something wrong in the urllib2 HTTP code, though this may depend on the environment (I am not an expert in urllib). Here are my tests, using Python 2.4.2 on Windows XP. These simple snippets failed with a 407 HTTP error: Example E1: import urllib2 as URL a=URL.urlopen("http://lan_apache_url") print a.read() OR example E2: import urllib2 as URL r=URL.Request("http://lan_apache_url") a=URL.urlopen(r) print a.read() But they succeed with urllib, example E3: import urllib a=urllib.urlopen("http://lan_apache_url") print a.read() Notice that the code is minimal and that E1 and E3 are close. Notice also that I'm trying to access a LAN Apache server which is not behind a proxy, and I don't want to go through any proxy (like the exclusion string in IExplorer). But I also found that if I try to access a protected link with HTTPS on the LAN, there is no problem. The issue is really in the HTTP handling or in the configuration of the URL opener. At the same time, some of my programs are able to access Internet servers using the current proxy server without any problem. For that, I use: import urllib2 as URL URL.install_opener(URL.build_opener( s.https_handler, s.proxy_auth_handler, s.cookie_handler)) Well, I developed a workaround in my programs ... 
to use urllib instead of urllib2 when I access the LAN (in fact, when I don't want to go through the proxy server, or when the URL matches my own proxy exclusion list). I expect this will help the python urllib2 experts find the issue. Jérôme Vacher alias jerrykhan the foolish dracomorpheus of the emerald dragon dynasty. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-21 03:59 Message: Logged In: YES user_id=226518 I have just installed a new virtual machine with Python 2.5b2 and my program works. It seems that only 2.4.3 is broken. Regards, Michal ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2006-07-20 14:09 Message: Logged In: YES user_id=261020 You're sure you didn't copy over the urllib2.py from 2.5b2 also? That might make the bug appear to go away, when really it's still there. The way to be sure is to try it on a different machine. Thanks for your report. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-19 07:12 Message: Logged In: YES user_id=226518 I have checked that the last version my script works with is 2.4.2, and copied that version's urllib2.py to the 2.4.3 Lib directory. It works again. The only change in urllib2.py is in retry_http_basic_auth(), line 723: 2.4.2: user,pw = self.passwd.find_user_password(realm, host) 2.4.3: user, pw = self.passwd.find_user_password(realm, req.get_full_url()) So "host" is replaced by "req.get_full_url()". Checked again with 2.5b2 and it works! Probably I broke my test environment and tested it the wrong way :( So the problem is only with 2.4.3. Previous versions and 2.5b work well. 
Regards, Michal Niklas ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-13 06:09 Message: Logged In: YES user_id=226518 2.5b2 does not work any better: Python 2.5b2 (r25b2:50512, Jul 11 2006, 10:16:14) [MSC v.1310 32 bit (Intel)] on win32 Result is the same as in 2.5b1 :( ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-11 02:27 Message: Logged In: YES user_id=226518 Tried it with 2.5 beta 1 and it is not better :( c:\tools\pyscripts\scripts>c:\python25\python2.5 Python 2.5b1 (r25b1:47027, Jun 20 2006, 09:31:33) [MSC v.1310 32 bit (Intel)] on win32 c:\tools\pyscripts\scripts>c:\python25\python2.5 proxy_bug.py trying http://www.python.org ... Traceback (most recent call last): File "proxy_bug.py", line 37, in get_page() File "proxy_bug.py", line 27, in get_page f = urllib2.urlopen(request) File "c:\python25\lib\urllib2.py", line 121, in urlopen return _opener.open(url, data) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 412, in error result = self._call_chain(*args) File "c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 831, in http_error_407 authority, req, headers) File "c:\python25\lib\urllib2.py", line 795, in http_error_auth_reqed return self.retry_http_basic_auth(host, req, realm) File "c:\python25\lib\urllib2.py", line 805, in retry_http_basic_auth return self.parent.open(req) File "c:\python25\lib\urllib2.py", line 380, in open response = meth(req, response) File "c:\python25\lib\urllib2.py", line 491, in http_response 'http', request, response, code, msg, hdrs) File "c:\python25\lib\urllib2.py", line 418, in error return self._call_chain(*args) File 
"c:\python25\lib\urllib2.py", line 353, in _call_chain result = func(*args) File "c:\python25\lib\urllib2.py", line 499, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 407: Proxy Authentication Required ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-07-10 08:49 Message: Logged In: YES user_id=849994 Can you please try with 2.5b1? A lot of urllib2 related bugs have been fixed before this release. ---------------------------------------------------------------------- Comment By: Michal Niklas (mniklas) Date: 2006-07-10 04:41 Message: Logged In: YES user_id=226518 Cannot add attachment via upload so I put it here: #!/usr/bin/python # -*- coding: cp1250 -*- import urllib import urllib2 def get_page(): url = 'http://www.python.org' print "trying %s ..." % (url) # Setup proxy & authentication proxy = "poczta.heuthes:8080" usr1 = "USER" pass1 = "PASSWD" proxy_handler = urllib2.ProxyHandler({"http" : "http:/ /" + proxy}) pass_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm() pass_mgr.add_password(None, "http://" + proxy, usr1, pass1) pass_mgr.add_password(None, proxy, usr1, pass1) auth_handler = urllib2.HTTPBasicAuthHandler(pass_mgr) proxy_auth_handler = urllib2.ProxyBasicAuthHandler(pass_mgr) # Now build a new URL opener and install it opener = urllib2.build_opener(proxy_handler, proxy_auth_handler, auth_handler, urllib2.HTTPHandler) urllib2.install_opener(opener) request = urllib2.Request(url) f = urllib2.urlopen(request) data = f.read() print "OK. We have reply from %s.\nSize: %d [b]" % (url, len(data)) if len(data) < 400: print data else: print data[:200] print "..." 
print data[-200:] get_page() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1519816&group_id=5470 From noreply at sourceforge.net Wed Jan 3 21:26:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 12:26:29 -0800 Subject: [ python-Bugs-1627316 ] an extra comma in condition command crashes pdb Message-ID: Bugs item #1627316, was opened at 2007-01-03 12:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627316&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ilya Sandler (isandler) Assigned to: Nobody/Anonymous (nobody) Summary: an extra comma in condition command crashes pdb Initial Comment: if instead of condition one enters (note the extra comma): condition , pdb throws an exception and aborts execution of a program Relevant parts of stacktrace: File "/usr/lib/python2.4/bdb.py", line 48, in trace_dispatch return self.dispatch_line(frame) File "/usr/lib/python2.4/bdb.py", line 66, in dispatch_line self.user_line(frame) File "/usr/lib/python2.4/pdb.py", line 135, in user_line self.interaction(frame, None) File "/usr/lib/python2.4/pdb.py", line 158, in interaction self.cmdloop() File "/usr/lib/python2.4/cmd.py", line 142, in cmdloop stop = self.onecmd(line) File "/usr/lib/python2.4/cmd.py", line 219, in onecmd return func(arg) File "/usr/lib/python2.4/pdb.py", line 390, in do_condition bpnum = int(args[0].strip()) ValueError: invalid literal for int(): 2, Uncaught exception. 
Entering post mortem debugging Running 'cont' or 'step' will restart the program > /site/tools/pse/lib/python2.4/pdb.py(390)do_condition() -> bpnum = int(args[0].strip()) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627316&group_id=5470 From noreply at sourceforge.net Wed Jan 3 23:20:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 14:20:57 -0800 Subject: [ python-Bugs-1627373 ] Typo in module index for Carbon.CarbonEvt Message-ID: Bugs item #1627373, was opened at 2007-01-03 14:20 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627373&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brett Cannon (bcannon) Assigned to: Nobody/Anonymous (nobody) Summary: Typo in module index for Carbon.CarbonEvt Initial Comment: The module index lists the name at 'CaronEvt' (notice the missing 'b'). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627373&group_id=5470 From noreply at sourceforge.net Thu Jan 4 00:54:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 15:54:26 -0800 Subject: [ python-Bugs-1601399 ] urllib2 does not close sockets properly Message-ID: Bugs item #1601399, was opened at 2006-11-22 21:04 Message generated for change (Comment added) made by jjlee You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brendan Jurd (direvus) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 does not close sockets properly Initial Comment: Python 2.5 (release25-maint, Oct 29 2006, 12:44:11) [GCC 4.1.2 20061026 (prerelease) (Debian 4.1.1-18)] on linux2 I first noticed this when a program of mine (which makes a brief HTTPS connection every 20 seconds) started having some weird crashes. It turned out that the process had a massive number of file descriptors open. I did some debugging, and it became clear that the program was opening two file descriptors for every HTTPS connection it made with urllib2, and it wasn't closing them, even though I was reading all data from the response objects and then explicitly calling close() on them. I found I could easily reproduce the behaviour using the interactive console. Try this while keeping an eye on the file descriptors held open by the python process: To begin with, the process will have the usual FDs 0, 1 and 2 open for std(in|out|err), plus one other. 
>>> import urllib2 >>> f = urllib2.urlopen("http://www.google.com") Now at this point the process has opened two more sockets. >>> f.read() [... HTML ensues ...] >>> f.close() The two extra sockets are still open. >>> del f The two extra sockets are STILL open. >>> f = urllib2.urlopen("http://www.python.org") >>> f.read() [...] >>> f.close() And now we have a total of four abandoned sockets open. It's not until you terminate the process entirely, or the OS (eventually) closes the socket on idle timeout, that they are closed. Note that if you do the same thing with httplib, the sockets are properly closed: >>> import httplib >>> c = httplib.HTTPConnection("www.google.com", 80) >>> c.connect() A socket has been opened. >>> c.putrequest("GET", "/") >>> c.endheaders() >>> r = c.getresponse() >>> r.read() [...] >>> r.close() And the socket has been closed. ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2007-01-03 23:54 Message: Logged In: YES user_id=261020 Originator: NO Confirmed. The cause is the (ab)use of socket._fileobject by urllib2.AbstractHTTPHandler to provide .readline() and .readlines() methods. _fileobject simply does not close the socket on _fileobject.close() (since in the original intended use of _fileobject, _socketobject "owns" the socket, and _fileobject only has a reference to it). The bug was introduced with the upgrade to HTTP/1.1 in revision 36871. 
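The ownership rule jjlee describes can be seen with the modern socket.makefile(), a rough analogue of _fileobject. This is a sketch of the general behaviour, not the 2.5 urllib2 code path: a file object made from a socket only borrows the descriptor, so closing the file object alone does not close the socket underneath it.

```python
import socket

a, b = socket.socketpair()
f = a.makefile("rb")

f.close()                 # closes only the file wrapper
assert a.fileno() != -1   # the socket's descriptor is still open

a.close()                 # the socket object owns the descriptor...
assert a.fileno() == -1   # ...so this is what finally releases it
b.close()
```

In the bug, urllib2 hands the caller only the file-like wrapper, so nothing ever closes the socket that owns the descriptor.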
The patch here fixes it: http://python.org/sf/1627441 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470 From noreply at sourceforge.net Thu Jan 4 05:49:01 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 20:49:01 -0800 Subject: [ python-Bugs-1627543 ] Status bar on OSX garbled Message-ID: Bugs item #1627543, was opened at 2007-01-03 23:49 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627543&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: IDLE Group: Platform-specific Status: Open Resolution: None Priority: 5 Private: No Submitted By: sigzero (sigzero) Assigned to: Nobody/Anonymous (nobody) Summary: Status bar on OSX garbled Initial Comment: The way OSX windows work, there is always a resizing handle in the lower right hand corner of a window. The way that IDLE currently does the statusbar is: |Ln: 13|Col: 4 This causes the Col number to be placed over the resizer. Something along the lines of: |Ln: 13|Col: 4| would probably ensure that the resizer is not overlaid. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627543&group_id=5470 From noreply at sourceforge.net Thu Jan 4 05:49:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 20:49:58 -0800 Subject: [ python-Bugs-1627543 ] Status bar on OSX garbled Message-ID: Bugs item #1627543, was opened at 2007-01-03 23:49 Message generated for change (Comment added) made by sigzero You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627543&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: IDLE Group: Platform-specific Status: Open Resolution: None Priority: 5 Private: No Submitted By: sigzero (sigzero) Assigned to: Nobody/Anonymous (nobody) Summary: Status bar on OSX garbled Initial Comment: The way OSX windows work, there is always a resizing handle in the lower right hand corner of a window. The way that IDLE currently does the statusbar is: |Ln: 13|Col: 4 This causes the Col number to be placed over the resizer. Something along the lines of: |Ln: 13|Col: 4| would probably ensure that the resizer is not overlaid. ---------------------------------------------------------------------- >Comment By: sigzero (sigzero) Date: 2007-01-03 23:49 Message: Logged In: YES user_id=1339209 Originator: YES This is for IDLE 1.1.4 and I am using Python 2.4.4 on OSX Tiger. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627543&group_id=5470 From noreply at sourceforge.net Thu Jan 4 07:08:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 22:08:15 -0800 Subject: [ python-Bugs-1627575 ] RotatingFileHandler cannot recover from failed doRollover() Message-ID: Bugs item #1627575, was opened at 2007-01-03 22:08 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Forest Wilkinson (forest) Assigned to: Nobody/Anonymous (nobody) Summary: RotatingFileHandler cannot recover from failed doRollover() Initial Comment: When RotatingFileHandler.doRollover() raises an exception, it puts the handler object in a permanently failing state, with no way to recover using RotatingFileHandler methods. From that point on, the handler object raises an exception every time a message is logged, which renders logging in an application practically useless. Furthermore, a handleError() method has no good way of correcting the problem, because the API does not expose any way to re-open the file after doRollover() has closed it. Unfortunately, this is a common occurrence on Windows, because doRollover() will fail if someone is running tail -f on the log file. Suggestions: - Make doRollover() always leave the handler object in a usable state, even if the rollover fails. - Add a reOpen() method to FileHandler, which an error handler could use to recover from problems like this. 
(It would also be useful for applications that want to re-open log files in response to a SIGHUP.) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 From noreply at sourceforge.net Thu Jan 4 07:26:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 22:26:31 -0800 Subject: [ python-Bugs-1627373 ] Typo in module index for Carbon.CarbonEvt Message-ID: Bugs item #1627373, was opened at 2007-01-03 14:20 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627373&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brett Cannon (bcannon) >Assigned to: Neal Norwitz (nnorwitz) Summary: Typo in module index for Carbon.CarbonEvt Initial Comment: The module index lists the name at 'CaronEvt' (notice the missing 'b'). ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-03 22:26 Message: Logged In: YES user_id=33168 Originator: NO Committed revision 53235. Committed revision 53236. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627373&group_id=5470 From noreply at sourceforge.net Thu Jan 4 07:27:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 22:27:44 -0800 Subject: [ python-Bugs-1627575 ] RotatingFileHandler cannot recover from failed doRollover() Message-ID: Bugs item #1627575, was opened at 2007-01-03 22:08 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Forest Wilkinson (forest) >Assigned to: Vinay Sajip (vsajip) Summary: RotatingFileHandler cannot recover from failed doRollover() Initial Comment: When RotatingFileHandler.doRollover() raises an exception, it puts the handler object in a permanently failing state, with no way to recover using RotatingFileHandler methods. From that point on, the handler object raises an exception every time a message is logged, which renders logging in an application practically useless. Furthermore, a handleError() method has no good way of correcting the problem, because the API does not expose any way to re-open the file after doRollover() has closed it. Unfortunately, this is a common occurrence on Windows, because doRollover() will fail if someone is running tail -f on the log file. Suggestions: - Make doRollover() always leave the handler object in a usable state, even if the rollover fails. - Add a reOpen() method to FileHandler, which an error handler could use to recover from problems like this. 
(It would also be useful for applications that want to re-open log files in response to a SIGHUP.) ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-03 22:27 Message: Logged In: YES user_id=33168 Originator: NO Vinay, was this addressed? I thought there was a similar issue. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 From noreply at sourceforge.net Thu Jan 4 07:32:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 03 Jan 2007 22:32:41 -0800 Subject: [ python-Bugs-1627244 ] xml.dom.minidom parse bug Message-ID: Bugs item #1627244, was opened at 2007-01-03 10:04 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story.) The offending part is the one that goes xmpPLUS='....'; it triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". 
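The unpack failure can be reproduced in isolation. Roughly (an assumption from the traceback, not the exact stdlib code), expatbuilder._parse_ns_name splits the name expat hands back on spaces and expects two or three fields (uri, localname, and optionally prefix); because the mis-scanned "namespace URI" in this report itself contains spaces, the split yields far more fields than the unpack allows:

```python
# The name string from the report: the bad scan pulled surrounding text
# into the "URI", so it contains spaces of its own.
name = ("XMP Photographic Licensing Universal System "
        "(xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS")
parts = name.split(' ')
print(len(parts))  # 9

try:
    uri, localname = parts  # roughly the unpack at expatbuilder line 127
except ValueError as exc:
    print("unpack failed:", exc)
```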
I dug a little further through a pdb session, but the bug seems to be located in C code. That's the very first time I report a bug; chances are I provide too much or too little information... To whoever it may concern, here is the invoking code: from xml.dom import minidom ... class xmp(dict): def __init__(self, inStream): xmldoc = minidom.parse(inStream) .... x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending C statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: 
u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS'

It's obvious the scanning went beyond the attribute.

----------------------------------------------------------------------

>Comment By: Neal Norwitz (nnorwitz)
Date: 2007-01-03 22:32

Message:
Logged In: YES
user_id=33168
Originator: NO

Dupe of 1627096

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470

From noreply at sourceforge.net  Thu Jan  4 10:35:45 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu, 04 Jan 2007 01:35:45 -0800
Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions
Message-ID: 

Bugs item #1579370, was opened at 2006-10-18 02:23
Message generated for change (Comment added) made by awaters
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: Mike Klaas (mklaas)
Assigned to: Nobody/Anonymous (nobody)
Summary: Segfault provoked by generators and exceptions

Initial Comment:
A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party C extensions running in the process, so I'm fairly confident that it is a problem in the core.

The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop.
The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault.

----------------------------------------------------------------------

Comment By: Andrew Waters (awaters)
Date: 2007-01-04 09:35

Message:
Logged In: YES
user_id=1418249
Originator: NO

This fixes the segfault problem that I was able to reliably reproduce on Linux. We need to get this applied (assuming it is the correct fix) to the source to make Python 2.5 usable for me in production code.

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2006-11-27 18:41

Message:
Logged In: YES
user_id=1611720
Originator: YES

The following patch resets the thread state of the generator when it is resumed, which prevents the segfault for me:

Index: Objects/genobject.c
===================================================================
--- Objects/genobject.c (revision 52849)
+++ Objects/genobject.c (working copy)
@@ -77,6 +77,7 @@
 	Py_XINCREF(tstate->frame);
 	assert(f->f_back == NULL);
 	f->f_back = tstate->frame;
+	f->f_tstate = tstate;
 	gen->gi_running = 1;
 	result = PyEval_EvalFrameEx(f, exc);

----------------------------------------------------------------------

Comment By: Eric Noyau (eric_noyau)
Date: 2006-11-27 18:07

Message:
Logged In: YES
user_id=1388768
Originator: NO

We are experiencing the same segfault in our application, reliably. Running our unit test suite just segfaults every time on both Linux and Mac OS X. Applying Martin's patch fixes the segfault, and makes everything fine and dandy, at the cost of some memory leaks if I understand properly. This particular bug prevents us from upgrading to Python 2.5 in production.
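To make the lifetime hazard under discussion concrete: the pattern is a generator created and left suspended in a worker thread, then finalized from the main thread after the worker (and its PyThreadState) is gone. Below is a minimal sketch of that pattern in current-Python syntax, not code from this tracker item; on an interpreter with the fix applied it runs cleanly:

```python
import threading

def idle():
    # simplified stand-in for the heavily-nested generators in the report
    while True:
        yield None

holder = {}

def worker():
    # Create and start a generator inside a short-lived worker thread;
    # in Python 2.5 the generator's frame recorded this thread's state
    # in f_tstate.
    holder['g'] = idle()
    next(holder['g'])

t = threading.Thread(target=worker)
t.start()
t.join()

# The worker thread has exited and its thread state is gone; dropping
# the last reference now finalizes the still-suspended generator from
# the main thread -- the situation the comments above are debugging.
del holder['g']
print('generator finalized after its thread exited')
```

On 2.5 the dangling pointer left in f_tstate to the freed thread state is what Mike's patch (resetting f_tstate when the generator is resumed) works around.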
----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2006-10-28 05:18

Message:
Logged In: YES
user_id=31435

> I tried Tim's hope.py on Linux x86_64 and
> Mac OS X 10.4 with debug builds and neither
> one crashed. Tim's guess looks pretty damn
> good too.

Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory.

----------------------------------------------------------------------

Comment By: Neal Norwitz (nnorwitz)
Date: 2006-10-28 04:56

Message:
Logged In: YES
user_id=33168

Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too.
Here's the result of valgrind:

Invalid read of size 8
   at 0x4CEBFE: PyTraceBack_Here (traceback.c:117)
   by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515)
   by 0x4F615D: gen_send_ex (genobject.c:82)
   by 0x4F6326: gen_close (genobject.c:128)
   by 0x4F645E: gen_del (genobject.c:163)
   by 0x4F5F00: gen_dealloc (genobject.c:31)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x44534E: dict_dealloc (dictobject.c:801)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x4664FF: subtype_dealloc (typeobject.c:686)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x42325D: instancemethod_dealloc (classobject.c:2287)
 Address 0x56550C0 is 88 bytes inside a block of size 152 free'd
   at 0x4A1A828: free (vg_replace_malloc.c:233)
   by 0x4C3899: tstate_delete_common (pystate.c:256)
   by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282)
   by 0x4D4043: t_bootstrap (threadmodule.c:448)
   by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so)

The only way I can think to fix this is to keep a set of active generators in the PyThreadState and call gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap.

----------------------------------------------------------------------

Comment By: Michael Hudson (mwh)
Date: 2006-10-19 07:58

Message:
Logged In: YES
user_id=6656

> and for some reason Python uses the system malloc directly
> to obtain memory for thread states.

This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2006-10-19 00:38

Message:
Logged In: YES
user_id=31435

I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints:

exiting generator
joined thread

at most twice before crapping out.
At the time, the `next` argument to newtracebackobject() is 0xdddddddd, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes.

Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed.

Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f_tstate is trash. Note that a PyThreadState (a frame's f_tstate) is /not/ a Python object -- it's just a raw C struct, and its lifetime isn't controlled by refcounts.

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2006-10-19 00:12

Message:
Logged In: YES
user_id=1611720

Despite Tim's reassurance, I'm afraid that Martin's patch does in fact prevent the segfault. Sounds like it also introduces a memleak.

----------------------------------------------------------------------

Comment By: Tim Peters (tim_one)
Date: 2006-10-18 21:57

Message:
Logged In: YES
user_id=31435

> Can anybody tell why gi_frame *isn't* incref'ed when
> the generator is created?

As documented (in concrete.tex), PyGen_New(f) steals a reference to the frame passed to it. Its only call site (well, in the core) is in ceval.c, which returns immediately after PyGen_New takes over ownership of the frame the caller created:

"""
	/* Create a new generator that owns the ready to run frame
	 * and return that as the value. */
	return PyGen_New(f);
"""

In short, that PyGen_New() doesn't incref the frame passed to it is intentional. It's possible that the intent is flawed ;-), but offhand I don't see how.
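Tim's description of PyGen_New() "stealing" the frame reference can be observed from Python: the generator holds exactly that one frame reference, exposed as gi_frame, and releases it when it is finalized. A small sketch of current CPython behaviour (not code from this thread):

```python
def gen():
    yield 1

g = gen()
# While the generator is alive it owns the frame PyGen_New() took over;
# the frame is visible (read-only) as gi_frame.
assert g.gi_frame is not None
next(g)        # advance to the first yield
g.close()      # finalize the generator
# On finalization the generator drops its frame reference.
assert g.gi_frame is None
print('frame released on close')
```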
----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2006-10-18 21:05

Message:
Logged In: YES
user_id=21627

Can you please review/try attached patch? Can anybody tell why gi_frame *isn't* incref'ed when the generator is created?

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2006-10-18 19:47

Message:
Logged In: YES
user_id=1611720

I cannot yet produce a pure-Python script which reproduces the problem, but I can give an overview. There is a generator running in one thread, an exception being raised in another thread, and as a consequence, the generator in the first thread is garbage-collected (triggering an exception due to the new generator cleanup). The problem is extremely sensitive to timing--often the insertion/removal of print statements, or reordering the code, causes the problem to vanish, which is confounding my ability to create a simple test script.

def getdocs():
    def f():
        while True:
            f()
    yield None

# -----------------------------------------------------------------------------
class B(object):
    def __init__(self,):
        pass

    def doit(self):
        # must be an instance var to trigger segfault
        self.docIter = getdocs()
        print self.docIter  # this is the generator referred-to in the traceback
        for i, item in enumerate(self.docIter):
            if i > 9:
                break
        print 'exiting generator'

class A(object):
    """ Process entry point / main thread """
    def __init__(self):
        while True:
            try:
                self.func()
            except Exception, e:
                print 'right after raise'

    def func(self):
        b = B()
        thread = threading.Thread(target=b.doit)
        thread.start()
        start_t = time.time()
        while True:
            try:
                if time.time() - start_t > 1:
                    raise Exception
            except Exception:
                print 'right before raise'  # SIGSEGV here. If this is changed
                                            # to 'break', no segfault occurs
                raise

if __name__ == '__main__':
    A()

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2006-10-18 19:37

Message:
Logged In: YES
user_id=1611720

I've produced a simplified traceback with a single generator. Note the frame being used in the traceback (#0) is the same frame being dealloc'd (#11). The relevant call in traceback.c is:

PyTraceBack_Here(PyFrameObject *frame)
{
	PyThreadState *tstate = frame->f_tstate;
	PyTracebackObject *oldtb = (PyTracebackObject *) tstate->curexc_traceback;
	PyTracebackObject *tb = newtracebackobject(oldtb, frame);

and I can verify that oldtb contains garbage:

(gdb) print frame
$1 = (PyFrameObject *) 0x8964d94
(gdb) print frame->f_tstate
$2 = (PyThreadState *) 0x895b178
(gdb) print $2->curexc_traceback
$3 = (PyObject *) 0x66

#0  0x080e4296 in PyTraceBack_Here (frame=0x8964d94) at Python/traceback.c:94
#1  0x080b9ab7 in PyEval_EvalFrameEx (f=0x8964d94, throwflag=1) at Python/ceval.c:2459
#2  0x08101a40 in gen_send_ex (gen=0xb7cca4ac, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#3  0x08101c0f in gen_close (gen=0xb7cca4ac, args=0x0) at Objects/genobject.c:128
#4  0x08101cde in gen_del (self=0xb7cca4ac) at Objects/genobject.c:163
#5  0x0810195b in gen_dealloc (gen=0xb7cca4ac) at Objects/genobject.c:31
#6  0x080815b9 in dict_dealloc (mp=0xb7cc913c) at Objects/dictobject.c:801
#7  0x080927b2 in subtype_dealloc (self=0xb7cca76c) at Objects/typeobject.c:686
#8  0x0806028d in instancemethod_dealloc (im=0xb7d07f04) at Objects/classobject.c:2285
#9  0x080815b9 in dict_dealloc (mp=0xb7cc90b4) at Objects/dictobject.c:801
#10 0x080927b2 in subtype_dealloc (self=0xb7cca86c) at Objects/typeobject.c:686
#11 0x081028c5 in frame_dealloc (f=0x8964a94) at Objects/frameobject.c:416
#12 0x080e41b1 in tb_dealloc (tb=0xb7cc1fcc) at Python/traceback.c:34
#13 0x080e41c2 in tb_dealloc (tb=0xb7cc1f7c) at Python/traceback.c:33
#14 0x08080dca in insertdict
(mp=0xb7f99824, key=0xb7ccd020, hash=1492466088, value=0xb7ccd054) at Objects/dictobject.c:394
#15 0x080811a4 in PyDict_SetItem (op=0xb7f99824, key=0xb7ccd020, value=0xb7ccd054) at Objects/dictobject.c:619
#16 0x08082dc6 in PyDict_SetItemString (v=0xb7f99824, key=0x8129284 "exc_traceback", item=0xb7ccd054) at Objects/dictobject.c:2103
#17 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb7ccd054) at Python/sysmodule.c:82
#18 0x080bc9e5 in PyEval_EvalFrameEx (f=0x895f934, throwflag=0) at Python/ceval.c:2954
---Type <return> to continue, or q <return> to quit---
#19 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f6ade8, globals=0xb7fafa44, locals=0x0, args=0xb7cc5ff8, argcount=1, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#20 0x08104083 in function_call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/funcobject.c:517
#21 0x0805a660 in PyObject_Call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/abstract.c:1860

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2006-10-18 02:23

Message:
Logged In: YES
user_id=1611720

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208400192 (LWP 26235)]
0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94
94          if ((next != NULL && !PyTraceBack_Check(next)) ||
(gdb) bt
#0  0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94
#1  0x080b9ab7 in PyEval_EvalFrameEx (f=0x9c2d7b4, throwflag=1) at Python/ceval.c:2459
#2  0x08101a40 in gen_send_ex (gen=0xb64f880c, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#3  0x08101c0f in gen_close (gen=0xb64f880c, args=0x0) at Objects/genobject.c:128
#4  0x08101cde in gen_del (self=0xb64f880c) at Objects/genobject.c:163
#5  0x0810195b in gen_dealloc (gen=0xb64f880c) at Objects/genobject.c:31
#6  0x080b9912 in PyEval_EvalFrameEx (f=0x9c2802c, throwflag=1) at Python/ceval.c:2491
#7  0x08101a40 in gen_send_ex (gen=0xb64f362c, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#8  0x08101c0f in gen_close (gen=0xb64f362c, args=0x0) at Objects/genobject.c:128
#9  0x08101cde in gen_del (self=0xb64f362c) at Objects/genobject.c:163
#10 0x0810195b in gen_dealloc (gen=0xb64f362c) at Objects/genobject.c:31
#11 0x080815b9 in dict_dealloc (mp=0xb64f4a44) at Objects/dictobject.c:801
#12 0x080927b2 in subtype_dealloc (self=0xb64f340c) at Objects/typeobject.c:686
#13 0x0806028d in instancemethod_dealloc (im=0xb796a0cc) at Objects/classobject.c:2285
#14 0x080815b9 in dict_dealloc (mp=0xb64f78ac) at Objects/dictobject.c:801
#15 0x080927b2 in subtype_dealloc (self=0xb64f810c) at Objects/typeobject.c:686
#16 0x081028c5 in frame_dealloc (f=0x9c272bc) at Objects/frameobject.c:416
#17 0x080e41b1 in tb_dealloc (tb=0xb799166c) at Python/traceback.c:34
#18 0x080e41c2 in tb_dealloc (tb=0xb4071284) at Python/traceback.c:33
#19 0x080e41c2 in tb_dealloc (tb=0xb7991824) at Python/traceback.c:33
#20 0x08080dca in insertdict (mp=0xb7f56824, key=0xb3fb9930, hash=1492466088, value=0xb3fb9914) at Objects/dictobject.c:394
#21 0x080811a4 in PyDict_SetItem (op=0xb7f56824, key=0xb3fb9930, value=0xb3fb9914) at Objects/dictobject.c:619
#22 0x08082dc6 in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103
#23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82
#24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954
#25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833
#26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662
#27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517
#29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860
#30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846
#31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833
#32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662
#33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652
#34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833
#35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662
#36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#37 0x080bd62a in PyEval_EvalFrameEx (f=0x9908584, throwflag=0) at Python/ceval.c:3662
#38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
---Type <return> to continue, or q <return> to quit---
#39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494
#40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264
#41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available.) at Python/pythonrun.c:870
#42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496
#43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6
#44 0x08056051 in _start ()

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470

From noreply at sourceforge.net  Thu Jan  4 11:06:52 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu, 04 Jan 2007 02:06:52 -0800
Subject: [ python-Bugs-1627690 ] documentation error for "startswith" string method
Message-ID: 

Bugs item #1627690, was opened at 2007-01-04 10:06
Message generated for change (Tracker Item Submitted) made by Item Submitter
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627690&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Documentation
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Keith Briggs (kbriggs)
Assigned to: Nobody/Anonymous (nobody)
Summary: documentation error for "startswith" string method

Initial Comment:
At http://docs.python.org/lib/string-methods.html#l2h-241, I think

    prefix can also be a tuple of suffixes to look for.

should be

    prefix can also be a tuple of prefixes to look for.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627690&group_id=5470

From noreply at sourceforge.net  Thu Jan  4 11:42:02 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Thu, 04 Jan 2007 02:42:02 -0800
Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions
Message-ID: 

Bugs item #1579370, was opened at 2006-10-18 04:23
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Interpreter Core
Group: Python 2.5
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: Mike Klaas (mklaas)
Assigned to: Nobody/Anonymous (nobody)
Summary: Segfault provoked by generators and exceptions

Initial Comment:
A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party C extensions running in the process, so I'm fairly confident that it is a problem in the core.

The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop.
The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-04 11:42

Message:
Logged In: YES
user_id=21627
Originator: NO

Why do frame objects have a thread state in the first place? In particular, why does PyTraceBack_Here get the thread state from the frame, instead of using the current thread? Introduction of f_tstate goes back to r7882, but it is not clear why it was done that way.

----------------------------------------------------------------------
in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103 #23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82 #24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954 #25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833 #26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662 #27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517 #29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860 #30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846 #31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833 #32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662 #33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652 #34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833 #35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662 #36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #37 0x080bd62a in PyEval_EvalFrameEx 
(f=0x9908584, throwflag=0) at Python/ceval.c:3662 #38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 ---Type to continue, or q to quit--- #39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494 #40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264 #41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available. ) at Python/pythonrun.c:870 #42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496 #43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6 #44 0x08056051 in _start () ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 From noreply at sourceforge.net Thu Jan 4 12:18:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 03:18:18 -0800 Subject: [ python-Bugs-1627096 ] xml.dom.minidom parse bug Message-ID: Bugs item #1627096, was opened at 2007-01-03 17:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627096&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an XML file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an XMP file, the XMP standard being built on XML. Well, that's the short story.) The offending part is the one that goes: xmpPLUS='....' It triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". I dug a little further through a pdb session, but the bug seems to be located in C code. This is the very first time I report a bug; chances are I provide too much or too little information... To whoever it may concern, here is the invoking code:

from xml.dom import minidom
...
class xmp(dict):
    def __init__(self, inStream):
        xmldoc = minidom.parse(inStream)
        ....
x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending c statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS' Its obvious the scanning went beyond the attribute. ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2007-01-04 12:18 Message: Logged In: YES user_id=21627 Originator: NO This is not a bug in Python, but a bug in the XML document. According to section 2.1 of http://www.w3.org/TR/2006/REC-xml-names-20060816/ an XML namespace must be a URI reference; according to RFC 3986, the string "XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/)" is not a URI reference, as it contains spaces. Closing this report as invalid. If you want to work around this, you can parse the file in non-namespace mode, using

    xml.dom.expatbuilder.parse("/tmp/x.xmp", namespaces=False)

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627096&group_id=5470 From noreply at sourceforge.net Thu Jan 4 17:18:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 08:18:41 -0800 Subject: [ python-Bugs-1627952 ] plat-mac videoreader.py audio format info Message-ID: Bugs item #1627952, was opened at 2007-01-04 09:18 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627952&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ryan Owen (ryaowe) Assigned to: Nobody/Anonymous (nobody) Summary: plat-mac videoreader.py audio format info Initial Comment: videoreader.py in the plat-mac modules has a small bug that breaks reader.GetAudioFormat():

--- videoreader.py	Thu Jan 04 09:05:16 2007
+++ videoreader_fixed.py	Thu Jan 04 09:05:11 2007
@@ -13,7 +13,7 @@
 from Carbon import Qdoffs
 from Carbon import QDOffscreen
 from Carbon import Res
 try:
-    import MediaDescr
+    from Carbon import MediaDescr
 except ImportError:
     def _audiodescr(data): return None

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627952&group_id=5470 From noreply at sourceforge.net Thu Jan 4 17:21:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 08:21:03 -0800 Subject: [ python-Bugs-1627956 ] documentation error for "startswith" string method Message-ID: Bugs item #1627956, was opened at 2007-01-04 16:21 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627956&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: documentation error for "startswith" string method Initial Comment: At http://docs.python.org/lib/string-methods.html#l2h-241, I think

    prefix can also be a tuple of suffixes to look for.

should be

    prefix can also be a tuple of prefixes to look for.
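[Editor's note: the corrected sentence describes behaviour that is easy to verify; since Python 2.5, startswith() and endswith() accept a tuple of candidate strings to try. A minimal illustration, not from the tracker:]

```python
# startswith()/endswith() try each element of the tuple and return True
# if any candidate matches.
s = "python-bugs-list"

assert s.startswith(("python", "ruby"))    # tuple of prefixes
assert s.endswith(("list", "digest"))      # tuple of suffixes
assert not s.startswith(("perl", "ruby"))  # no candidate matches
```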
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627956&group_id=5470 From noreply at sourceforge.net Thu Jan 4 19:20:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 10:20:48 -0800 Subject: [ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck Message-ID: Bugs item #1598181, was opened at 2006-11-16 22:40 Message generated for change (Comment added) made by mklaas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ralf W. Grosse-Kunstleve (rwgk) Assigned to: Peter Åstrand (astrand) Summary: subprocess.py: O(N**2) bottleneck Initial Comment: subprocess.py (Python 2.5, current SVN, probably all versions) contains this O(N**2) code:

    bytes_written = os.write(self.stdin.fileno(), input[:512])
    input = input[bytes_written:]

For large but reasonable "input" the second line is rate limiting. Luckily, it is very easy to remove this bottleneck. I'll upload a simple patch. Below is a small script that demonstrates the huge speed difference. The output on my machine is:

    creating input
    0.888417959213
    slow slicing input
    61.1553330421
    creating input
    0.863168954849
    fast slicing input
    0.0163860321045
    done

The numbers are times in seconds.
This is the source:

import time
import sys

size = 1000000

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "slow slicing input"
n_out_slow = 0
while True:
    out = input[:512]
    n_out_slow += 1
    input = input[512:]
    if not input:
        break
print time.time()-t0

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "fast slicing input"
n_out_fast = 0
input_done = 0
while True:
    out = input[input_done:input_done+512]
    n_out_fast += 1
    input_done += 512
    if input_done >= len(input):
        break
print time.time()-t0

assert n_out_fast == n_out_slow
print "done"

---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-04 10:20 Message: Logged In: YES user_id=1611720 Originator: NO I reviewed the patch--the proposed fix looks good. Minor comments:
- "input_done" sounds like a flag, not a count of written bytes
- buffer() could be used to avoid the 512-byte copy created by the slice
---------------------------------------------------------------------- Comment By: Ralf W. Grosse-Kunstleve (rwgk) Date: 2006-11-16 22:43 Message: Logged In: YES user_id=71407 Originator: YES Sorry, I didn't know the tracker would destroy the indentation. I'm uploading the demo source as a separate file.
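[Editor's note: the quadratic behaviour Ralf describes, and the index-based fix, can be sketched in a few lines. This is an illustrative Python sketch with made-up function names, not the actual subprocess.py patch:]

```python
def chunks_slow(data, size=512):
    """Quadratic: each iteration rebuilds (copies) the entire remainder."""
    out = []
    while data:
        out.append(data[:size])
        data = data[size:]   # copies len(data) - size bytes every time
    return out

def chunks_fast(data, size=512):
    """Linear: only the size-byte chunk is copied per iteration."""
    out = []
    pos = 0
    while pos < len(data):
        out.append(data[pos:pos + size])
        pos += size
    return out
```

Both functions produce identical chunk lists; the only difference is that the fast version advances an offset instead of re-slicing the remainder, which is what removes the O(N**2) cost.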
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 From noreply at sourceforge.net Thu Jan 4 22:08:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 13:08:49 -0800 Subject: [ python-Bugs-1566280 ] Logging problem on Windows XP Message-ID: Bugs item #1566280, was opened at 2006-09-27 13:49 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1566280&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Fixed Priority: 7 Private: No Submitted By: Pavel Krupets (pavel_krupets) Assigned to: Martin v. L?wis (loewis) Summary: Logging problem on Windows XP Initial Comment: Traceback (most recent call last): File "C:\Python\Lib\logging\handlers.py", line 73, in emit if self.shouldRollover(record): File "C:\Python\Lib\logging\handlers.py", line 147, in shouldRollover self.stream.seek(0, 2) #due to non-posix-compliant Windows feature ValueError: I/O operation on closed file not sure why this file is closed. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-04 22:08 Message: Logged In: YES user_id=21627 Originator: NO Thanks again for the report. This is now fixed in r53249 and r53250. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-12-18 18:26 Message: Logged In: YES user_id=21627 Originator: NO I cannot reproduce the crash with the example given, neither with the released binaries, nor with any of the trunk or release25-maint subversion branches. 
Therefore, I declare that this report is only about the ValueError; if anybody has a way to provoke a crash in a reproducible way, please submit it as a separate report, along with precise instructions on how to provoke the crash. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-12-06 19:58 Message: Logged In: YES user_id=21627 Originator: NO eloff: It may be that there are different problems that all show the symptom; *this* problem reported here can only occur if you are using multiple threads (at least for the ValueError; I haven't looked into the crash at all). Yes, you can run multiple threads, and yes, you can use logging freely. However, you should not let the main thread just "run off". Instead, you should end your main thread with an explicit .join() operation on all threads it has created; those threads themselves should perform explicit .join() operations on all threads they create. That way, you can guarantee orderly shutdown. threading.py tries to do the joining if you don't, but fails (and the approach it uses is inherently error-prone). ---------------------------------------------------------------------- Comment By: Daniel Eloff (eloff) Date: 2006-12-06 19:43 Message: Logged In: YES user_id=730918 Originator: NO Thanks Martin, I applied the patch. The problem I was having was the I/O error, sorry for being vague. The part I don't understand is that I should not have had other threads running (and definitely should not have had the logger being used outside the main thread). Can the problem occur with just one thread? I was running under the debugger in Wing; I don't know if that might cause this problem. Anyway, if I find out anything else I'll let you know. If you don't hear from me then everything is working great.
---------------------------------------------------------------------- Comment By: Mike Powers (mikepowers48) Date: 2006-12-06 16:22 Message: Logged In: YES user_id=1614975 Originator: NO I'm seeing the I/O error and crash a lot on Windows and the I/O error on Linux. Any help would be greatly appreciated. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-12-06 08:29 Message: Logged In: YES user_id=21627 Originator: NO Ok, so tsample.zip is a test case for the original problem, right? I can reproduce the problem on Linux also. I can't make it crash (on Linux); what do I have to do to make it crash? If I access localhost:8080, I get log messages saying

    2006-12-06 07:21:06,999 INFO servlet::__init__:1091 code 404, message File not found

eloff: this report actually reports two problems (the I/O error and the crash). Which of these are you having and have found lots of people having? As for the traceback problem: this is due to the main thread terminating, and therefore the logging atexit handler getting invoked, which closes the file. Only then is the threading atexit handler invoked, which waits until all non-daemon threads terminate. As a work-around, add httpServer.join() at the end of your script. I'll attach a patch that fixes this problem in the Python library. File Added: threading.diff ---------------------------------------------------------------------- Comment By: Daniel Eloff (eloff) Date: 2006-12-06 04:05 Message: Logged In: YES user_id=730918 Originator: NO I have this problem, I'm googling this and finding lots of people having the same problem. I'm running Python 2.5 on Windows XP and using the rotating file handler. I've disabled the logger in my application so I can continue development.
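[Editor's note: Martin's advice (join every thread you create before the main thread exits, so worker threads finish while the logging handlers are still open) can be sketched as follows. This is a hypothetical minimal example, not code from the tracker:]

```python
import logging
import threading

log = logging.getLogger("app")

def run_worker():
    """Start a worker that logs, and wait for it before returning."""
    t = threading.Thread(target=lambda: log.info("working"))
    t.start()
    t.join()   # ensure the worker finishes before interpreter shutdown
               # can invoke logging's atexit handler and close handlers
    return t

worker = run_worker()
```

Without the explicit join(), the main thread can reach interpreter shutdown while the worker is still logging, which is the ordering problem described above.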
---------------------------------------------------------------------- Comment By: Pavel Krupets (pavel_krupets) Date: 2006-09-29 15:52 Message: Logged In: YES user_id=1007725 to start application please use: src/py/run.bat to get closed handler error (if you manage to start it) please open your web browser and try to visit: http://localhost:8080 You can change http settings in src/conf/development/robot.conf sorry code is quite raw just started. :) ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-09-28 05:29 Message: Logged In: YES user_id=21627 Can you provide a test case for either problem? ---------------------------------------------------------------------- Comment By: Pavel Krupets (pavel_krupets) Date: 2006-09-27 14:01 Message: Logged In: YES user_id=1007725 And I think python crashes on Windows if I try to use logger from several threads. Unhandled exception at 0x7c901010 in python.exe: 0xC0000005: Access violation reading location 0x00000034. 
> ntdll.dll!7c901010() [Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll] msvcr71.dll!7c34f639() msvcr71.dll!7c36b3b1() python25.dll!1e06c6c0() python25.dll!1e08dc97() python25.dll!1e03ac12() python25.dll!1e03c735() python25.dll!1e03cc5f() python25.dll!1e04026b() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e04026b() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e04026b() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e04026b() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e04026b() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e03db7d() python25.dll!1e0715df() python25.dll!1e0268ec() python25.dll!1e040a04() python25.dll!1e039a8c() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e0622d3() python25.dll!1e062660() python25.dll!1e061028() python25.dll!1e0db1bd() python25.dll!1e062676() python25.dll!1e03e8c1() python25.dll!1e041475() python25.dll!1e0414c3() python25.dll!1e094093() python25.dll!1e062676() python25.dll!1e0268ec() python25.dll!1e03987a() python25.dll!1e033edc() python25.dll!1e08dc97() python25.dll!1e03ac12() python25.dll!1e03cc5f() python25.dll!1e07041e() python25.dll!1e070385() python25.dll!1e03db7d() python25.dll!1e039a8c() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e07041e() python25.dll!1e039a2e() python25.dll!1e03ac82() python25.dll!1e03cc5f() python25.dll!1e07041e() python25.dll!1e03db7d() python25.dll!1e0715df() python25.dll!1e0268ec() python25.dll!1e040a04() ntdll.dll!7c90d625() ntdll.dll!7c90eacf() python25.dll!1e0258d2() ntdll.dll!7c9105c8() ntdll.dll!7c910551() ntdll.dll!7c91056d() kernel32.dll!7c80261a() kernel32.dll!7c8025f0() kernel32.dll!7c8025f0() kernel32.dll!7c802532() python25.dll!1e0268ec() python25.dll!1e03987a() python25.dll!1e0cdf07() python25.dll!1e0cd899() 
msvcr71.dll!7c34940f() kernel32.dll!7c80b683() ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1566280&group_id=5470 From noreply at sourceforge.net Thu Jan 4 22:09:45 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 13:09:45 -0800 Subject: [ python-Bugs-1627690 ] documentation error for "startswith" string method Message-ID: Bugs item #1627690, was opened at 2007-01-04 11:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627690&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: documentation error for "startswith" string method Initial Comment: At http://docs.python.org/lib/string-methods.html#l2h-241, I think prefix can also be a tuple of suffixes to look for. should be prefix can also be a tuple of prefixes to look for. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-04 22:09 Message: Logged In: YES user_id=21627 Originator: NO This is a duplicate of 1627956. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627690&group_id=5470 From noreply at sourceforge.net Thu Jan 4 22:13:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 13:13:57 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 11:47 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None >Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Martin v. L?wis (loewis) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-04 22:13 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. Only a single crypto context is allocated, and it is used all the time, i.e. until the Python interpreter finishes, at which time it is automatically released by the operating system. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-03 13:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Thu Jan 4 22:46:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 13:46:13 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 12:47 Message generated for change (Comment added) made by ygale You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Martin v. Löwis (loewis) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). ---------------------------------------------------------------------- >Comment By: Yitz Gale (ygale) Date: 2007-01-04 23:46 Message: Logged In: YES user_id=1033539 Originator: YES How do you know that "it is automatically released by the operating system"? The documentation for CryptAcquireContext states: "When you have finished using the CSP, release the handle by calling the CryptReleaseContext function." In the example code provided, the wording in the comments is even stronger: "When the handle is no longer needed, it must be released." The example code then explicitly calls CryptReleaseContext. Do you know absolutely for certain that we are not leaking resources if we violate this clear API requirement?
Reference: http://msdn2.microsoft.com/en-us/library/aa379886.aspx ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-04 23:13 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. Only a single crypto context is allocated, and it is used all the time, i.e. until the Python interpreter finishes, at which time it is automatically released by the operating system. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-03 14:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Fri Jan 5 01:46:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 04 Jan 2007 16:46:18 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 11:47 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Martin v. L?wis (loewis) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). 
---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-05 01:46 Message: Logged In: YES user_id=21627 Originator: NO Yes, I'm absolutely certain that terminating a process releases all handles, on Windows NT+. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-04 22:46 Message: Logged In: YES user_id=1033539 Originator: YES How do you know that "it is automatically released by the operating system"? The documentation for CryptAcquireContext states: "When you have finished using the CSP, release the handle by calling the CryptReleaseContext function." In the example code provided, the wording in the comments is even stronger: "When the handle is no longer needed, it must be released." The example code then explicitly calls CryptReleaseContext. Do you know absolutely for certain that we are not leaking resources if we violate this clear API requirement? Reference: http://msdn2.microsoft.com/en-us/library/aa379886.aspx ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 22:13 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. Only a single crypto context is allocated, and it is used all the time, i.e. until the Python interpreter finishes, at which time it is automatically released by the operating system. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-03 13:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. 
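For context, win32_urandom() is the backend of os.urandom() on the Windows builds discussed here; the crypto context handle under debate is internal to the interpreter and invisible at the Python level. A minimal user-level check of the function it serves:

```python
import os

# os.urandom draws from the platform CSPRNG (CryptGenRandom via the
# context discussed here on Windows, /dev/urandom elsewhere).
data = os.urandom(16)
assert isinstance(data, bytes) and len(data) == 16

# Repeated calls reuse the same process-wide context internally; a
# collision here is astronomically unlikely.
assert os.urandom(16) != data
```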
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Fri Jan 5 09:45:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 00:45:18 -0800 Subject: [ python-Bugs-1628484 ] Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Message-ID: Bugs item #1628484, was opened at 2007-01-05 00:45 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Bob Atkins (bobatkins) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Initial Comment: This looks like a recurring and somewhat sore topic for those of us who have been fighting the dreaded: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." when performing a 64 bit compile. I believe I have identified the problems, all of which are directly related to the Makefile(s) that are generated as part of the configure script. There does not seem to be anything wrong with the configure script or anything else; once all of the Makefiles are corrected, Python will build 64 bit. Although it is possible to pass the following environment variables to configure, as is typical on most open source software:

CC        C compiler command
CFLAGS    C compiler flags
LDFLAGS   linker flags, e.g. -L if you have libraries in a nonstandard directory
CPPFLAGS  C/C++ preprocessor flags, e.g.
-I if you have headers in a nonstandard directory
CPP       C preprocessor

These flags are *not* being processed through to the generated Makefiles. This is where the problem is. configure is doing everything right and generating all of the necessary stuff for a 64 bit compile, but when the compile is actually performed the necessary CFLAGS are missing and a 32 bit compile is initiated. Taking a close look at the first failure I found the following:

gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -fPIC -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c

Where are my CFLAGS??? I ran configure with:

CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \
CXXFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \
LDFLAGS="-m64 -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \
./configure --prefix=/opt \
  --enable-shared \
  --libdir=/opt/lib/sparcv9

Checking the config.log and config.status it was clear that the flags were used properly as the configure script ran; the failure, however, is that the various Makefiles never actually reference the CFLAGS and LDFLAGS. LDFLAGS is simply not included in any of the link stages in the Makefiles, and CFLAGS is overridden by BASECFLAGS, OPT and EXTRA_CFLAGS! Ah!

EXTRA_CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \
make

actually got the core parts to compile for the library, and then failed to build the library because LDFLAGS was missing from the Makefile for the library link stage :-( Close examination suggests that the OPT environment variable could be used to pass the necessary flags through from configure, but this still did not help the link stage problems. The fixes are pretty minimal to ensure that the configure variables are passed into the Makefile. My patch to the Makefile.pre.in is attached to this bug report. 
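The submitter's workaround, condensed into one sequence (a sketch only: the flags, -mcpu value, and /opt paths are specific to this report's Solaris/UltraSPARC setup, not a general recipe):

```shell
# 1. Configure with 64-bit flags; configure itself honors them.
CFLAGS="-O3 -m64 -mcpu=ultrasparc" \
LDFLAGS="-m64 -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \
./configure --prefix=/opt --enable-shared --libdir=/opt/lib/sparcv9

# 2. The generated Makefile drops CFLAGS in favor of
#    BASECFLAGS/OPT/EXTRA_CFLAGS, so repeat the compiler flags via
#    EXTRA_CFLAGS at build time (link stages may still lack LDFLAGS
#    without the attached Makefile.pre.in patch).
EXTRA_CFLAGS="-O3 -m64 -mcpu=ultrasparc" make
```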
Once these changes are made Python will build properly for both 32 and 64 bit platforms with the correct CFLAGS and LDFLAGS passed into the configure script. BTW, while this bug is reported under a Solaris/gcc build the patches to Makefile.pre.in should fix similar build issues on all platforms. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 From noreply at sourceforge.net Fri Jan 5 11:01:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 02:01:48 -0800 Subject: [ python-Bugs-1626801 ] posixmodule.c leaks crypto context on Windows Message-ID: Bugs item #1626801, was opened at 2007-01-03 12:47 Message generated for change (Comment added) made by ygale You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Yitz Gale (ygale) Assigned to: Martin v. L?wis (loewis) Summary: posixmodule.c leaks crypto context on Windows Initial Comment: The Win API docs for CryptAcquireContext require that the context be released after use by calling CryptReleaseContext, but posixmodule.c fails to do so in win32_urandom(). ---------------------------------------------------------------------- >Comment By: Yitz Gale (ygale) Date: 2007-01-05 12:01 Message: Logged In: YES user_id=1033539 Originator: YES OK, then, fine. You might want to just add a comment there so that people like me won't keep filing bugs against this. :) ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2007-01-05 02:46 Message: Logged In: YES user_id=21627 Originator: NO Yes, I'm absolutely certain that terminating a process releases all handles, on Windows NT+. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-04 23:46 Message: Logged In: YES user_id=1033539 Originator: YES How do you know that "it is automatically released by the operating system"? The documentation for CryptAcquireContext states: "When you have finished using the CSP, release the handle by calling the CryptReleaseContext function." In the example code provided, the wording in the comments is even stronger: "When the handle is no longer needed, it must be released." The example code then explicitly calls CryptReleaseContext. Do you know absolutely for certain that we are not leaking resources if we violate this clear API requirement? Reference: http://msdn2.microsoft.com/en-us/library/aa379886.aspx ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 23:13 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. Only a single crypto context is allocated, and it is used all the time, i.e. until the Python interpreter finishes, at which time it is automatically released by the operating system. ---------------------------------------------------------------------- Comment By: Yitz Gale (ygale) Date: 2007-01-03 14:12 Message: Logged In: YES user_id=1033539 Originator: YES You might consider backporting this to 2.5 and 2.4. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626801&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:15:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:15:14 -0800 Subject: [ python-Bugs-1627956 ] documentation error for "startswith" string method Message-ID: Bugs item #1627956, was opened at 2007-01-04 11:21 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627956&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) >Assigned to: A.M. Kuchling (akuchling) Summary: documentation error for "startswith" string method Initial Comment: At http://docs.python.org/lib/string-methods.html#l2h-241, I think prefix can also be a tuple of suffixes to look for. should be prefix can also be a tuple of prefixes to look for. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:15 Message: Logged In: YES user_id=11375 Originator: NO Fixed; thanks! 
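The corrected wording is easy to verify: since Python 2.5, str.startswith() accepts a tuple of prefixes (and endswith() a tuple of suffixes):

```python
# A tuple of prefixes: True if the string starts with any of them.
assert "configure.in".startswith(("configure", "Makefile"))
assert not "setup.py".startswith(("configure", "Makefile"))

# The endswith() counterpart takes a tuple of suffixes.
assert "pyport.h".endswith((".h", ".c"))
```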
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627956&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:16:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:16:52 -0800 Subject: [ python-Bugs-1625205 ] sqlite3 documentation omits: close(), commit(), autocommit Message-ID: Bugs item #1625205, was opened at 2006-12-30 23:34 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625205&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: kitbyaydemir (kitbyaydemir) >Assigned to: Gerhard Häring (ghaering) Summary: sqlite3 documentation omits: close(), commit(), autocommit Initial Comment: The Python 2.5 Library documentation (HTML format), Section 13.13 (sqlite3) fails to mention several important methods of Connection objects. Specifically, the close() and commit() methods. Considering that autocommit mode is not the default, I'm not sure how a user is supposed to figure out that they need to call these methods to ensure that changes are reflected on disk. (The only reason I discovered these was from http://initd.org/tracker/pysqlite/wiki/basicintro .) Furthermore, Section 13.13.5 mentions the existence of "autocommit mode", but fails to describe what that mode is and why it might be useful. 
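A minimal illustration of the behavior the missing documentation should cover (shown in modern Python syntax; the file name is a throwaway temp path):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")

# Default mode: statements run inside an implicit transaction.
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.commit()   # without this, the INSERT may never reach the disk file
conn.close()

# A fresh connection sees only what was committed.
conn2 = sqlite3.connect(path)
rows = conn2.execute("SELECT x FROM t").fetchall()
conn2.close()
# rows == [(1,)]
```

Passing isolation_level=None to connect() turns on autocommit mode, in which each statement takes effect immediately and no explicit commit() is needed.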
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1625205&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:25:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:25:00 -0800 Subject: [ python-Bugs-1622533 ] null bytes in docstrings Message-ID: Bugs item #1622533, was opened at 2006-12-26 12:47 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1622533&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library >Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Fredrik Lundh (effbot) >Assigned to: A.M. Kuchling (akuchling) Summary: null bytes in docstrings Initial Comment: the following docstrings contain bogus control characters: module difflib, function _mdiff, contains four invalid bytes: ['\x00', '\x00', '\x00', '\x01'] module StringIO, method readline, contains a null byte: ['\x00'] since this breaks help() and probably a bunch of other documentation tools, it would probably be a good idea to add the missing backslashes... ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:24 Message: Logged In: YES user_id=11375 Originator: NO Fixed in trunk rev. 53262, 25-maint rev. 53263. Thanks! ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-12-26 13:27 Message: Logged In: YES user_id=80475 Originator: NO Clearer and simpler to make the whole docstring raw. 
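The underlying mistake: in a non-raw string literal, \0 and \1 are octal escapes that silently become control characters, which is how the null bytes got into the docstrings. A raw string (Raymond Hettinger's suggestion) keeps the backslashes:

```python
plain = "columns separated by \0 and \1"   # escapes become NUL / SOH bytes
raw = r"columns separated by \0 and \1"    # backslashes survive intact

assert "\x00" in plain and "\x01" in plain
assert "\x00" not in raw
assert "\\0" in raw   # literal backslash-zero, as intended in the docs
```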
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1622533&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:33:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:33:08 -0800 Subject: [ python-Bugs-831574 ] Solaris term.h needs curses.h Message-ID: Bugs item #831574, was opened at 2003-10-28 01:53 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=831574&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anthony Baxter (anthonybaxter) Assigned to: Anthony Baxter (anthonybaxter) Summary: Solaris term.h needs curses.h Initial Comment: Solaris' term.h requires curses.h to be included first. This causes the configure script to emit lines about a bug in autoconf. From the autoconf mailing lists, their standard response is to fix the configure script, see e.g. http://mail.gnu.org/archive/html/bug-autoconf/2003-05/msg00118.html The following patch against 2.3 branch for configure and configure.in makes things a bit happier. Note that Include/py_curses.h already includes curses.h before term.h, this just fixes the breakage of configure. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:33 Message: Logged In: YES user_id=11375 Originator: NO Is this bug still relevant to Python 2.5? ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2003-10-31 10:22 Message: Logged In: YES user_id=21627 I find it confusing that the test for curses.h already refers to HAVE_CURSES_H; I think you should first check for curses.h, and then use HAVE_CURSES_H in the test for term.h. I also agree that #ifdef is better than #if, even though it should not matter in an ISO C compiler (which replaces undefined symbols by 0 in an #if). ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2003-10-28 20:38 Message: Logged In: YES user_id=29957 Dunno if #ifdef is better or not - I just worked from the example in the attached autoconf mailing list message. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-10-28 08:08 Message: Logged In: YES user_id=33168 Should the #if be an #ifdef ? Looks fine to me, but I don't know much about autoconf. :-) I think Martin is the expert. Martin, do you have an opinion? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=831574&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:36:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:36:20 -0800 Subject: [ python-Bugs-1119331 ] curses.initscr - initscr exit w/o env(TERM) set Message-ID: Bugs item #1119331, was opened at 2005-02-09 09:51 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1119331&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jacob Lilly (jrlilly) Assigned to: Michael Hudson (mwh) Summary: curses.initscr - initscr exit w/o env(TERM) set Initial Comment: The initscr in ncurses will cause an immediate exit if the env doesn't have the TERM variable set. Could curses.initscr be changed so it tests if TERM is set and raises an exception? It would be helpful to be able to try and except this instead of just having ncurses exit for you. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:36 Message: Logged In: YES user_id=11375 Originator: NO Patch #2 looks OK. Any objections if I just commit it (to trunk only)? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2005-06-13 14:13 Message: Logged In: YES user_id=6656 How about the attached, then? (sorry for the long, long wait) ---------------------------------------------------------------------- Comment By: Jacob Lilly (jrlilly) Date: 2005-02-10 08:41 Message: Logged In: YES user_id=774886 The only thing that worries me about that is it takes a different path than ncurses does (or at least 5.4 does). If the env variable isn't set, initscr in ncurses assumes the term type is "unknown" (if no env) and passes "unknown" along, whereas setupterm assumes that if you pass it NULL for the term and the env isn't set, then it simply returns 0. I doubt anyone will have a valid term setup for unknown, but who knows. Beyond that, works for me. BTW, the gnu ncurses guys say this is the behavior in the standard. 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2005-02-10 06:22 Message: Logged In: YES user_id=6656 The motivation for calling setupterm() ourselves was that I noticed TERM=garbage python -c 'import curses; curses.initscr()' hit the irritating error path too. I also hadn't realised there was a version of initscr in curses/__init__.py, which makes things easier... how about the attached? ---------------------------------------------------------------------- Comment By: Jacob Lilly (jrlilly) Date: 2005-02-09 19:06 Message: Logged In: YES user_id=774886 if you pass setupterm 0 for the term name it just tries to get the env variable, so the env test should cover it pretty well. It might make more sense to check the env first and then pass "unknown", setupterm("unknown", -1). This would seem to match the path that curses initscr follows. This would also raise the exception shared by _curses and curses. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2005-02-09 18:19 Message: Logged In: YES user_id=6656 Yeah, I noticed that. We could at least call setupterm(0, NULL) first, I guess... ---------------------------------------------------------------------- Comment By: Jacob Lilly (jrlilly) Date: 2005-02-09 14:51 Message: Logged In: YES user_id=774886 that is, any return of 0 from newterm ---------------------------------------------------------------------- Comment By: Jacob Lilly (jrlilly) Date: 2005-02-09 14:49 Message: Logged In: YES user_id=774886 sorry, I should have done that in the beginning; I have it raising a RuntimeError, I think that's what it is. This doesn't really solve the problem in whole, since ncurses initscr has lots of ways it could decide to exit (any return value from newterm causes it to exit), but it does solve a more common one. Anything else would require modifying ncurses to be responsible. 
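A sketch of the guard being discussed, at the Python level (the name safe_initscr and the RuntimeError are illustrative only; the actual patch wraps initscr in curses/__init__.py and also calls setupterm to catch bogus TERM values):

```python
import os

def safe_initscr():
    # Raise a catchable exception instead of letting ncurses'
    # initscr() terminate the whole process when TERM is unset.
    if not os.environ.get("TERM"):
        raise RuntimeError("curses: the TERM environment variable is not set")
    import curses
    return curses.initscr()
```

An application can then wrap the call in try/except and fall back to a line-mode interface, which is exactly what the submitter asked for.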
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2005-02-09 13:45 Message: Logged In: YES user_id=6656 How amazingly terrible (on ncurses' part). Do you want to/are you able to work on a patch? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1119331&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:42:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:42:14 -0800 Subject: [ python-Bugs-849046 ] gzip.GzipFile is slow Message-ID: Bugs item #849046, was opened at 2003-11-25 10:45 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=849046&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 3 Private: No Submitted By: Ronald Oussoren (ronaldoussoren) >Assigned to: Bob Ippolito (etrepum) Summary: gzip.GzipFile is slow Initial Comment: gzip.GzipFile is significantly (an order of magnitude) slower than using the gzip binary. I've been bitten by this several times, and have replaced "fd = gzip.open('somefile', 'r')" by "fd = os.popen('gzcat somefile', 'r')" on several occasions. Would a patch that implemented GzipFile in C have any chance of being accepted? ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:42 Message: Logged In: YES user_id=11375 Originator: NO Patch #1281707 improved readline() performance and has been applied. I'll close this bug; please re-open if there are still performance issues. 
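The discussion below centers on the readline() loop; one suggestion that comes up is reading the whole file at once and splitting on newlines. A quick check (modern Python syntax, made-up sample data) that the two approaches give identical results, so the substitution is safe where memory allows:

```python
import gzip
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "testfile.gz")

# Write a small line-oriented test file, roughly like the one profiled
# in this thread (fixed-width lines ending in newline).
with gzip.open(path, "wb") as f:
    for i in range(1000):
        f.write(b"line %04d: " % i + b"x" * 60 + b"\n")

# Line by line, the slow path under discussion.
with gzip.open(path, "rb") as f:
    lines_loop = f.readlines()

# One big decompress, then split -- keeps the newline terminators.
with gzip.open(path, "rb") as f:
    lines_split = f.read().splitlines(True)

assert lines_loop == lines_split
assert len(lines_loop) == 1000
```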
---------------------------------------------------------------------- Comment By: April King (marumari) Date: 2005-05-04 12:18 Message: Logged In: YES user_id=747439 readlines(X) is even worse, as all it does is call readline() X times. readline() is also biased towards files where each line is less than 100 characters: readsize = min(100, size) So, if it's longer than that, it calls read, which calls _read, and so on. I've found using popen to be roughly 20x faster than using the gzip module. That's pretty bad. ---------------------------------------------------------------------- Comment By: Ronald Oussoren (ronaldoussoren) Date: 2003-12-28 11:25 Message: Logged In: YES user_id=580910 Leaving out the assignment sure sped things up, but only because the input didn't contain lines anymore ;-) I did an experiment where I replaced self.extrabuf by a list, but that did slow things down. This may be because there seemed to be very few chunks in the buffer (most of the time just 2). According to profile.run('testit()') the function below spends about 50% of its time in the readline method:

def testit():
    fd = gzip.open('testfile.gz', 'r')
    ln = fd.readline()
    cnt = bcnt = 0
    while ln:
        ln = fd.readline()
        cnt += 1
        bcnt += len(ln)
    print bcnt, cnt
    return bcnt, cnt

testfile.gz is a simple textfile containing 40K lines of about 70 characters each. Replacing the 'buffers' in readline by a string (instead of a list) slightly speeds things up (about 10%). Other experiments did not bring any improvement. Even writing a simple C function to split the buffer returned by self.read() didn't help a lot (splitline(strval, max) -> match, rest, where match is strval up to the first newline and at most max characters, and rest is the rest of strval). ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2003-12-23 12:10 Message: Logged In: YES user_id=11375 It should be simple to check if the string operations are responsible -- comment out the 'self.extrabuf = self.extrabuf + data' in _add_read_data. If that makes a big difference, then _read should probably be building a list instead of modifying a string. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-12-04 14:51 Message: Logged In: YES user_id=357491 Looking at GzipFile.read and ._read , I think a large chunk of time is burned in the decompression of small chunks of data. It initially reads and decompresses 1024 bits, and then if that read did not hit the EOF, it multiplies it by 2 and continues until the EOF is reached and then finishes up. The problem is that for each read a call to _read is made that sets up a bunch of objects. I would not be surprised if the object creation and teardown is hurting the performance. I would also not be surprised if the reading of small chunks of data is an initial problem as well. This is all guesswork, though, since I did not run the profiler on this. *But*, there might be a good reason for reading small chunks. If you are decompressing a large file, you might run out of memory very quickly by reading the file into memory *and* decompressing at the same time. Reading it in successively larger chunks means you don't hold the file's entire contents in memory at any one time. So the question becomes whether causing your memory to get overloaded and major thrashing on your swap space is worth the performance increase. There is also the option of inlining _read into 'read', but since it makes two calls that seems like poor abstraction and thus would most likely not be accepted as a solution. Might be better to just have some temporary storage in an attribute of objects that are used in every call to _read and then delete the attribute once the reading is done. 
Or maybe allow for an optional argument to read that allowed one to specify the initial read size (and that might be a good way to see if any of these ideas are reasonable; just modify the code to read the whole thing and go at it from that). But I am in no position to make any of these calls, though, since I never use gzip. If someone cares to write up a patch to try to fix any of this it will be considered. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2003-11-25 17:05 Message: Logged In: YES user_id=764593 In the library, I see a fair amount of work that doesn't really do anything except make sure you're getting exactly a line at a time. Would it be an option to just read the file in all at once, split it on newlines, and then loop over the list? (Or read it into a cStringIO, I suppose.) ---------------------------------------------------------------------- Comment By: Ronald Oussoren (ronaldoussoren) Date: 2003-11-25 16:12 Message: Logged In: YES user_id=580910 To be more precise:

$ ls -l gzippedfile
-rw-r--r-- 1 ronald admin 354581 18 Nov 10:21 gzippedfile
$ gzip -l gzippedfile
compressed uncompr. ratio uncompressed_name
354581 1403838 74.7% gzippedfile

The file contains about 45K lines of text (about 40 characters/line)

$ time gzip -dc gzippedfile > /dev/null
real 0m0.100s
user 0m0.060s
sys 0m0.000s
$ python read.py gzippedfile > /dev/null
real 0m3.222s
user 0m3.020s
sys 0m0.070s

$ cat read.py
#!/usr/bin/env python
import sys
import gzip

fd = gzip.open(sys.argv[1], 'r')
ln = fd.readline()
while ln:
    sys.stdout.write(ln)
    ln = fd.readline()

The difference is also significant for larger files (e.g. the difference is not caused by the different startup-times) ---------------------------------------------------------------------- Comment By: Ronald Oussoren (ronaldoussoren) Date: 2003-11-25 16:03 Message: Logged In: YES user_id=580910 The files are created using GzipFile. 
That speed is acceptable because it happens in a batch-job; reading back is the problem because that happens on demand and a user is waiting for the results. gzcat is an *uncompress* utility (specifically it is "gzip -dc"), so the compression level is irrelevant for this discussion. The Python code seems to do quite some string manipulation, maybe that is causing the slowdown (I'm using fd.readline() in a fairly tight loop). I'll do some profiling to check what is taking so much time. BTW, I'm doing this on Unix systems (Sun Solaris and Mac OS X). ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2003-11-25 12:35 Message: Logged In: YES user_id=764593 Which compression level are you using? It looks like most of the work is already done by zlib (which is in C), but GzipFile defaults to compression level 9. Many other zips (including your gzcat?) default to a lower (but much faster) compression level. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=849046&group_id=5470 From noreply at sourceforge.net Fri Jan 5 15:46:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 06:46:28 -0800 Subject: [ python-Bugs-756982 ] mailbox should use email not rfc822 Message-ID: Bugs item #756982, was opened at 2003-06-18 22:19 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=756982&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.3 Status: Open Resolution: None >Priority: 1 Private: No Submitted By: Ben Leslie (benno37) Assigned to: Barry A. 
Warsaw (bwarsaw) Summary: mailbox should use email not rfc822 Initial Comment: The mailbox module uses the rfc822 module as its default factory for creating message objects. The rfc822 documentation claims that its use is deprecated. The mailbox module should probably use the new email module as its default factory. Of course this has backward compatibility issues, in which case it should at least be mentioned in the mailbox documentation that it uses the deprecated rfc822 module, and provide an example of how to use the email module instead. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 09:46 Message: Logged In: YES user_id=11375 Originator: NO The reworking of mailbox.py introduced in Python 2.5 adds new mailbox classes that do use email.Message. Arguably we could begin deprecating the old classes (or just remove them all for Python 3000?). ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2005-01-10 02:56 Message: Logged In: YES user_id=29957 Given the amount of code out there using rfc822, should we instead PDW it? In any case, I'm -0 on putting a DeprecationWarning on it unless we've removed all use of it from the stdlib. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2005-01-08 10:49 Message: Logged In: YES user_id=12800 It's a good question. I'd like to say yes so that we can start adding deprecation warnings to rfc822 for Python 2.5. ---------------------------------------------------------------------- Comment By: Johannes Gijsbers (jlgijsbers) Date: 2005-01-08 09:22 Message: Logged In: YES user_id=469548 So, with the plans to seriously start working on deprecating rfc822, should we use the email module as the default factory now? ---------------------------------------------------------------------- Comment By: Barry A. 
Warsaw (bwarsaw) Date: 2003-06-20 17:48 Message: Logged In: YES user_id=12800 I've added some sample code to the mailbox documentation that explains how to use the email package with the mailbox module. We can't change the default for backward compatibility reasons, as you point out. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=756982&group_id=5470 From noreply at sourceforge.net Fri Jan 5 16:19:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 07:19:29 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 13:46 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Nobody/Anonymous (nobody) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error as "--test" is an option. But it looks like even C library's getopt() behaves similarly.
It will be nice if optparse can report error in this case. ---------------------------------------------------------------------- >Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 10:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post. File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:09:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:09:48 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None >Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. 
Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. 
I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) 
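The length check described in akuchling's patch can be sketched roughly as follows. This is a standalone illustration with hypothetical names (`check_unchanged`, `ExternalClashError` defined locally), not the actual length-checking.diff:

```python
import os

class ExternalClashError(Exception):
    """The mailbox file was modified by another program."""

def check_unchanged(mbox_file, expected_length):
    # Seek to the end and compare the current on-disk size with the
    # size recorded the last time we read the mailbox; a mismatch
    # means another process appended or rewrote messages, so flushing
    # now would clobber its changes.
    mbox_file.seek(0, os.SEEK_END)
    actual = mbox_file.tell()
    if actual != expected_length:
        raise ExternalClashError(
            "mailbox size changed: expected %d, found %d"
            % (expected_length, actual))
```

A flush() would run this check right after acquiring the lock and abort with the exception instead of silently overwriting another program's messages.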
I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. 
turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. 
Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07?? no -- Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation.
Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:14:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:14:50 -0800 Subject: [ python-Bugs-1552726 ] Python polls unnecessarily every 0.1 second when interactive Message-ID: Bugs item #1552726, was opened at 2006-09-05 10:42 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: Fixed >Priority: 9 Private: No Submitted By: Richard Boulton (richardb) Assigned to: A.M. Kuchling (akuchling) Summary: Python polls unnecessarily every 0.1 second when interactive Initial Comment: When python is running an interactive session, and is idle, it calls "select" with a timeout of 0.1 seconds repeatedly. This is intended to allow PyOS_InputHook() to be called every 0.1 seconds, but happens even if PyOS_InputHook() isn't being used (i.e., is NULL). To reproduce:

- start a python session
- attach to it using strace -p PID
- observe that python repeatedly calls select() with a 0.1-second timeout

This isn't a significant problem, since it only affects idle interactive python sessions and uses only a tiny bit of CPU, but people are whinging about it (though some appear to be doing so tongue-in-cheek) and it would be nice to fix it. The attached patch (against Python-2.5c1) modifies the readline.c module so that the polling doesn't happen unless PyOS_InputHook is not NULL.
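The logic at issue lives in C (readline.c), but the control flow before and after the patch can be sketched in Python. `wait_for_input` is a hypothetical helper, not a real API; the point is that with no hook installed the patched code blocks in select() with no timeout, so there are no periodic wakeups:

```python
import select

def wait_for_input(fd, input_hook=None):
    # With no hook installed, block in select() until fd is readable
    # (no polling).  With a hook, time out every 0.1 s so the hook
    # gets a chance to run, which is what readline.c does.
    while True:
        timeout = 0.1 if input_hook is not None else None
        readable, _, _ = select.select([fd], [], [], timeout)
        if readable:
            return
        if input_hook is not None:
            input_hook()
```

The pre-patch behaviour corresponds to always taking the 0.1-second branch, even when `input_hook` is None — hence the select() calls visible in strace every 0.1 seconds.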
---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 10:30 Message: Logged In: YES user_id=9565 I'm finding the function because it's defined in the compiled library - the header files aren't examined by configure when testing for this function. (this is because configure.in uses AC_CHECK_LIB to check for rl_callback_handler_install, which just tries to link the named function against the library). Presumably, rlconf.h is the configuration used when the readline library was compiled, so if READLINE_CALLBACKS is defined in it, I would expect the relevant functions to be present in the compiled library. In any case, this isn't desperately important, since you've managed to hack around the test anyway. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-08 09:12 Message: Logged In: YES user_id=11375 That's exactly my setup. I don't think there is a -dev package for readline 4. I do note that READLINE_CALLBACKS is defined in /usr/include/readline/rlconf.h, but Python's readline.c doesn't include this file, and none of the readline headers include it. So I don't know why you're finding the function! ---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 05:34 Message: Logged In: YES user_id=9565 HAVE_READLINE_CALLBACK is defined by configure.in whenever the readline library on the platform supports the rl_callback_handler_install() function. I'm using Ubuntu Dapper, and have libreadline 4 and 5 installed (more precisely, 4.3-18 and 5.1-7build1), but only the -dev package for 5.1-7build1. "info readline" describes rl_callback_handler_install(), and configure.in finds it, so I'm surprised it wasn't found on akuchling's machine. 
I agree that the code looks buggy on platforms in which signals don't necessarily get delivered to the main thread, but looks no more buggy with the patch than without. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:38 Message: Logged In: YES user_id=11375 On looking at the readline code, I think this patch makes no difference to signals. The code in readline.c for the callbacks looks like this: has_input = 0; while (!has_input) { ... has_input = select.select(rl_input); } if (has_input > 0) {read character} elif (errno == EINTR) {check signals} So I think that, if a signal is delivered to a thread and select() in the main thread doesn't return EINTR, the old code is just as problematic as the code with this patch. The (while !has_input) loop doesn't check for signals at all as an exit condition. I'm not sure what to do at this point. I think the new code is no worse than the old code with regard to signals. Maybe this loop is buggy w.r.t. to signals, but I don't know how to test that. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:17 Message: Logged In: YES user_id=11375 HAVE_READLINE_CALLBACK was not defined with readline 5.1 on Ubuntu Dapper, until I did the configure/CFLAG trick. I didn't think of a possible interaction with signals, and will re-open the bug while trying to work up a test case. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-09-07 10:12 Message: Logged In: YES user_id=6656 I'd be cautious about applying this to 2.5: we could end up with the same problem currently entertaining python-dev, i.e. a signal gets delivered to a non- main thread but the main thread is sitting in a select with no timeout so any python signal handler doesn't run until the user hits a key. 
HAVE_READLINE_CALLBACK is defined when readline is 2.1 *or newer* I think... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:02 Message: Logged In: YES user_id=11375 Recent versions of readline can still support callbacks if READLINE_CALLBACK is defined, so I could test the patch by running 'CFLAGS=-DREADLINE_CALLBACK' and re-running configure. Applied as rev. 51815 to the trunk, so the fix will be in Python 2.6. The 2.5 release manager needs to decide if it should be applied to the 2.5 branch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 09:24 Message: Logged In: YES user_id=11375 Original report: http://perkypants.org/blog/2006/09/02/rfte-python This is tied to the version of readline being used; the select code is only used if HAVE_RL_CALLBACK is defined, and a comment in Python's configure.in claims it's only defined with readline 2.1. Current versions of readline are 4.3 and 5.1; are people still using such an ancient version of readline? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:24:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:24:59 -0800 Subject: [ python-Bugs-1628895 ] Pydoc sets choices for doc locations incorrectly Message-ID: Bugs item #1628895, was opened at 2007-01-05 10:24 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628895&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Nobody/Anonymous (nobody) Summary: Pydoc sets choices for doc locations incorrectly Initial Comment: In pydoc.Helper.__init__ I see this code:

execdir = os.path.dirname(sys.executable)
homedir = os.environ.get('PYTHONHOME')
for dir in [os.environ.get('PYTHONDOCS'),
            homedir and os.path.join(homedir, 'doc'),
            os.path.join(execdir, 'doc'),
            '/usr/doc/python-docs-' + split(sys.version)[0],
            '/usr/doc/python-' + split(sys.version)[0],
            '/usr/doc/python-docs-' + sys.version[:3],
            '/usr/doc/python-' + sys.version[:3],
            os.path.join(sys.prefix, 'Resources/English.lproj/Documenta$
    if dir and os.path.isdir(os.path.join(dir, 'lib')):
        self.docdir = dir

I think the third choice in the list of candidate directories is wrong. execdir is the directory of the Python executable (e.g., it's /usr/local/bin by default). I think it should be set as

execdir = os.path.dirname(os.path.dirname(sys.executable))

You're not going to find a "doc" directory in /usr/local/bin. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628895&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:28:16 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:28:16 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 13:46 Message generated for change (Comment added) made by goodger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) >Assigned to: Greg Ward (gward) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error as "--test" is an option. But it looks like even C library's getopt() behaves similarly. It will be nice if optparse can report error in this case. ---------------------------------------------------------------------- >Comment By: David Goodger (goodger) Date: 2007-01-05 11:28 Message: Logged In: YES user_id=7733 Originator: NO I think what you're asking for is ambiguous at best. In your example, how could optparse possibly decide that the "--test" is a second option, not an option argument? What if you *do* want "--test" as an argument? Assigning to Greg Ward. Recommend closing as invalid. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 10:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post.
File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:37:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:37:21 -0800 Subject: [ python-Bugs-1628902 ] xml.dom.minidom parse bug Message-ID: Bugs item #1628902, was opened at 2007-01-05 17:37 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628902&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story.) The offending part is the one that goes: xmpPLUS='....' It triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". I dug a little further through a pdb session, but the bug seems to be located in C code. That's the very first time I report a bug; chances are I provide too much or too little information... To whoever it may concern, here is the invoking code:

from xml.dom import minidom
...
class xmp(dict):
    def __init__(self, inStream):
        xmldoc = minidom.parse(inStream)
    ....

x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp')

traceback:

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream)
     26     def __init__(self, inStream):
     27         print minidom
---> 28         xmldoc = minidom.parse(inStream)
     29         xmpmeta = xmldoc.childNodes[1]
     30         rdf = xmpmeta.childNodes[1]

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize)

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces)
    922     fp = open(file, 'rb')
    923     try:
--> 924         result = builder.parseFile(fp)
    925     finally:
    926         fp.close()

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file)
    205             if not buffer:
    206                 break
--> 207             parser.Parse(buffer, 0)
    208             if first_buffer and self.document.documentElement:
    209                 self._setup_subset(buffer)

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes)
    743     def start_element_handler(self, name, attributes):
    744         if ' ' in name:
--> 745             uri, localname, prefix, qname = _parse_ns_name(self, name)
    746         else:
    747             uri = EMPTY_NAMESPACE

/home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name)
    125             localname = intern(localname, localname)
    126         else:
--> 127             uri, localname = parts
    128             prefix = EMPTY_PREFIX
    129             qname = localname = intern(localname, localname)

ValueError: too many values to unpack

The offending C statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582) StartElement()

The returned 'name':

(Pdb) name
Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS'

It's obvious the scanning went beyond the attribute.
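The failure is easy to reproduce outside expat. With namespace processing on, expat joins the namespace URI, local name, and (optionally) prefix into one space-separated string, and _parse_ns_name splits it back apart. When the (malformed) namespace URI itself contains spaces, as in the name shown above, a plain split() yields too many fields. A possible repair, shown here as a standalone sketch rather than a patch to expatbuilder.py, is to split from the right, since the local name and prefix can never contain spaces:

```python
# The name expat reported in this bug: a namespace URI containing
# spaces, then the local name, then the prefix, joined by spaces.
name = (u"XMP Photographic Licensing Universal System "
        u"(xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) "
        u"CreditLineReq xmpPLUS")

# What the failing unpack effectively does -- raises
# "ValueError: too many values to unpack" because there are far
# more space-separated fields than two:
try:
    uri, localname = name.split(' ')
    unpacked = True
except ValueError:
    unpacked = False
assert not unpacked

# Splitting from the right recovers the intended fields even when the
# URI contains spaces.  (The two-field "uri localname" form would
# still be ambiguous, so this is a sketch, not a complete fix.)
uri, localname, prefix = name.rsplit(' ', 2)
assert localname == u"CreditLineReq"
assert prefix == u"xmpPLUS"
```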
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628902&group_id=5470 From noreply at sourceforge.net Fri Jan 5 17:45:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 08:45:02 -0800 Subject: [ python-Bugs-1628906 ] clarify 80-char limit Message-ID: Bugs item #1628906, was opened at 2007-01-05 11:45 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628906&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 3000 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: clarify 80-char limit Initial Comment: PEP 3099 says: """ Coding style ============ * The (recommended) maximum line width will remain 80 characters, for both C and Python code. Thread: "C style guide", http://mail.python.org/pipermail/python-3000/2006-March/000131.html """ It should be clarified that this really means 72-79 characters, perhaps by adding the following sentence: Note that according to PEP 8, this actually means no more than 79 characters in a line, and no more than about 72 in docstrings or comments. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628906&group_id=5470 From noreply at sourceforge.net Fri Jan 5 18:58:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 09:58:57 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 13:46 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Greg Ward (gward) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error as "--test" is an option. But it looks like even C library's getopt() behaves similarly. It will be nice if optparse can report error in this case.
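A program can get the error draghuram wants today, without changing optparse itself, by using a callback option that rejects option-like values. This is a sketch: `check_not_option` is a hypothetical helper, not part of optparse:

```python
from optparse import OptionParser, OptionValueError

def check_not_option(option, opt_str, value, parser):
    # Refuse to store a value that looks like another option flag;
    # optparse itself happily hands "--test" to -m as its argument.
    # A lone "-" is allowed (it conventionally means stdin).
    if value.startswith("-") and len(value) > 1:
        raise OptionValueError(
            "%s requires an argument, got option-like %r" % (opt_str, value))
    setattr(parser.values, option.dest, value)

parser = OptionParser()
parser.add_option("--test", action="store_true")
parser.add_option("-m", metavar="COMMENT", dest="comment", default=None,
                  type="string", action="callback", callback=check_not_option)
```

With this parser, "-m hello" stores the comment as before, while "-m --test" exits with a usage error instead of silently storing "--test".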
---------------------------------------------------------------------- >Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 12:58 Message: Logged In: YES user_id=984087 Originator: YES It is possible to deduce "--test" as an option because it is in the list of options given to optparse. But your point about what if the user really wants "--test" as an option argument is valid. I guess this request can be closed. Thanks, Raghu. ---------------------------------------------------------------------- Comment By: David Goodger (goodger) Date: 2007-01-05 11:28 Message: Logged In: YES user_id=7733 Originator: NO I think what you're asking for is ambiguous at best. In your example, how could optparse possibly decide that the "--test" is a second option, not an option argument? What if you *do* want "--test" as an argument? Assigning to Greg Ward. Recommend closing as invalid. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 10:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post. 
File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Fri Jan 5 19:43:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 10:43:44 -0800 Subject: [ python-Bugs-1628987 ] inspect trouble when source file changes Message-ID: Bugs item #1628987, was opened at 2007-01-05 13:43 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628987&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: phil (philipdumont) Assigned to: Nobody/Anonymous (nobody) Summary: inspect trouble when source file changes Initial Comment: backtrace (relevant outer frames only):

  File "/path/to/myfile", line 1198, in get_hook_name
    for frame_record in inspect.stack():
  File "/usr/mbench2.2/lib/python2.4/inspect.py", line 819, in stack
    return getouterframes(sys._getframe(1), context)
  File "/usr/mbench2.2/lib/python2.4/inspect.py", line 800, in getouterframes
    framelist.append((frame,) + getframeinfo(frame, context))
  File "/usr/mbench2.2/lib/python2.4/inspect.py", line 775, in getframeinfo
    lines, lnum = findsource(frame)
  File "/usr/mbench2.2/lib/python2.4/inspect.py", line 437, in findsource
    if pat.match(lines[lnum]): break
IndexError: list index out of range

Based on a quick look at the inspect code, I think this happens when you:
- Start python and load a module
- While it's running, edit the source file for the module (before inspect tries to look into it).
- Call a routine in the edited module that will lead to a call to inspect.stack(). 
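[Editor's note: the mechanism behind this failure is that `inspect` fetches source lines through the stdlib `linecache` module, which keeps serving cached lines until told to re-check. The sketch below demonstrates the stale-cache behaviour directly on a throwaway temp file; `linecache.checkcache()` is the real stdlib staleness check, which compares the cached size and mtime against the file on disk.]

```python
import linecache
import os
import tempfile

def stale_cache_demo():
    """Cache a file's lines, rewrite the file, and observe the stale cache."""
    fd, path = tempfile.mkstemp(suffix=".py")
    os.write(fd, b"x = 1\n")
    os.close(fd)
    try:
        first = linecache.getline(path, 1)   # reads the file and caches it
        with open(path, "w") as f:
            f.write("y = 2\nz = 3\n")        # "edit" the module on disk
        stale = linecache.getline(path, 1)   # cache is not re-checked yet
        linecache.checkcache(path)           # size/mtime mismatch drops the entry
        fresh = linecache.getline(path, 1)
    finally:
        os.remove(path)
    return first, stale, fresh

first, stale, fresh = stale_cache_demo()
assert (first, stale, fresh) == ("x = 1\n", "x = 1\n", "y = 2\n")
```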
During an inspect.stack() call, inspect will open source files to get the source code for the routines on the stack. If the source file that is opened doesn't match the byte compiled code that's being run, there are problems. Inspect caches the files it reads (using the linecache module), so if the file gets cached before it is edited, nothing should go wrong. But if the source file is edited after the module is loaded and before inspect has a chance to cache the source, you're out of luck. Of course, this shouldn't be a problem in production code, but it has bit us more than once in a development environment. Seems like it would be easy to avoid by just comparing the timestamps on the source/object files. If the source file is newer, just behave the same as if it wasn't there. Attached is a stupid little python script that reproduces the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628987&group_id=5470 From noreply at sourceforge.net Fri Jan 5 20:24:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 11:24:12 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. 
Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. 
File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. 
This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. 
Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. 
mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. 
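[Editor's note: the length-checking approach discussed above can be sketched as a standalone helper. The names here are hypothetical; in the actual patch the check lives inside `mailbox._singlefileMailbox.flush()`, the expected size is recorded when the table of contents is generated and updated in `_append_message()`, and `mailbox.ExternalClashError` is the real exception class.]

```python
import os

class ExternalClashError(Exception):
    """Stand-in for mailbox.ExternalClashError."""

def assert_unchanged(path, expected_size):
    # Before rewriting the mailbox file, verify that no other process has
    # appended to (or truncated) it since the table of contents was built.
    actual = os.path.getsize(path)
    if actual != expected_size:
        raise ExternalClashError(
            "mailbox size changed: expected %d bytes, found %d"
            % (expected_size, actual))
```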
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 5 20:51:19 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 11:51:19 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace [Initial comment identical to the copy above.] ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? 
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... [Earlier comments identical to the copies above.] 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 5 20:54:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 11:54:34 -0800 Subject: [ python-Feature Requests-698900 ] Provide "plucker" format docs. Message-ID: Feature Requests item #698900, was opened at 2003-03-06 13:45 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=698900&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Documentation >Group: None Status: Open Resolution: None Priority: 4 Private: No Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Fred L. Drake, Jr. (fdrake) Summary: Provide "plucker" format docs. Initial Comment: There have been a few requests for documents to be provided in the "plucker" format for use on PDAs. Plucker has the advantage of being free software (both in terms of liberty and price), whereas iSilo is merely low-priced (free in some flavors?). Information on Plucker can be found at www.plkr.org. Documentation for the conversion tool appears slim. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=698900&group_id=5470 From noreply at sourceforge.net Fri Jan 5 20:57:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 11:57:36 -0800 Subject: [ python-Bugs-956303 ] Update pickle docs to describe format of persistent IDs Message-ID: Bugs item #956303, was opened at 2004-05-18 18:45 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956303&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Allan Crooks (amc1) Assigned to: Nobody/Anonymous (nobody) >Summary: Update pickle docs to describe format of persistent IDs Initial Comment: There is a bug in save_pers in both the pickle and cPickle modules in Python. It occurs when someone uses a Pickler instance which is using an ASCII protocol and also has persistent_id defined so that it can return a persistent reference containing newline characters. The current implementation of save_pers in the pickle module is as follows:

----
def save_pers(self, pid):
    # Save a persistent id reference
    if self.bin:
        self.save(pid)
        self.write(BINPERSID)
    else:
        self.write(PERSID + str(pid) + '\n')
----

The else clause assumes that 'pid' will not be a string which contains one or more newline characters. 
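[Editor's note: the failure mode can be demonstrated end-to-end. The sketch below uses Python 3 syntax (the report predates it, but protocol 0 still writes the persistent id as text followed by a newline); the class and function names are illustrative, not from the attached example file.]

```python
import io
import pickle

class Token:
    """Placeholder object that will be saved by persistent id."""

def dump_with_pid(pid):
    # Pickler whose persistent_id() returns the given id for Token objects.
    class Pickler(pickle.Pickler):
        def persistent_id(self, obj):
            return pid if isinstance(obj, Token) else None
    buf = io.BytesIO()
    Pickler(buf, protocol=0).dump(Token())
    return buf.getvalue()

def load(data):
    class Unpickler(pickle.Unpickler):
        def persistent_load(self, pid):
            return ("loaded", pid)
    return Unpickler(io.BytesIO(data)).load()

# A well-behaved persistent id round-trips:
assert load(dump_with_pid("ok")) == ("loaded", "ok")

# A pid containing a newline corrupts the stream: the unpickler stops
# reading the id at the first newline and misparses what follows.
try:
    result = load(dump_with_pid("bad\nid"))
except Exception:
    result = None
assert result != ("loaded", "bad\nid")
```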
If the pickler pickles a persistent ID which has a newline in it, then an unpickler with a corresponding persistent_load method will incorrectly unpickle the data - usually interpreting the character after the newline as a marker indicating what type of data should be expected (usually resulting in an exception being raised when the remaining data is not in the format expected). I have attached an example file which illustrates in what circumstances the error occurs. Workarounds for this bug are: 1) Use binary mode for picklers. 2) Modify subclass implementations of save_pers to ensure that newlines are not returned for persistent IDs. Although you may assume in general that this bug would only occur on rare occasions (due to the unlikely situation where someone would implement persistent_id so that it would return a string with a newline character embedded), it may occur more frequently if the subclass implementation of persistent_id uses a string which has been constructed using the marshal module. This bug was discovered when our code implemented the persistent_id method, which was returning the marshalled format of a tuple which contained strings. It occurred when one or more of the strings had a length of ten characters - the marshalled format of that string contains the string's length, where the byte used to represent the number 10 is the same as the one which represents the newline character:

>>> marshal.dumps('a' * 10)
's\n\x00\x00\x00aaaaaaaaaa'
>>> chr(10)
'\n'

I have replicated this bug on Python 1.5.2 and Python 2.3b1, and I believe it is present on all 2.x versions of Python. Many thanks to SourceForge user (and fellow colleague) SMST who diagnosed the bug and provided the test cases attached. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-07-03 08:41 Message: Logged In: YES user_id=21627 Also lowering the priority. 
amc1, if you are still interested, are you willing to provide a documentation patch? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-11-07 17:40 Message: Logged In: YES user_id=31435 Unassigned myself (I don't have time for it), but changed the Category to Documentation. (Changing what a persistent ID can be would need to be a new feature request.) ---------------------------------------------------------------------- Comment By: Allan Crooks (amc1) Date: 2004-05-19 11:30 Message: Logged In: YES user_id=39733 I would at least like the documentation modified to make it clearer that certain characters are not permitted for persistent ID's. I think the text which indicates the requirement of printable ASCII characters is too subtle - there should be something which makes the requirement more obvious, the use of a "must" or "should" would help get the point across (as would some text after the statement indicating that characters such as '\b', '\n', '\r' are not permitted). Perhaps it would be an idea for save_pers to do some argument checking before storing the persistent ID, perhaps using an assertion statement to verify that the ID doesn't contain non-permitted characters (or at least, checking for the presence of a '\n' character embedded in the string). I think it is preferable to have safeguards implemented in Pickler to prevent potentially dodgy data being stored - I would rather have an error raised when I'm trying to pickle something than have the data stored and corrupted, only to notice it when it is unpickled (when it is too late). Confusingly, the code in save_pers in the pickle module seems to indicate that it would happily accept non-String based persistent ID's: ---- else: self.write(PERSID + str(pid) + '\n') ---- I don't understand why we are using the str function if we are expecting pid to be a string in the first place. 
I would rather that this method would raise an error if it tried to perform string concatenation on something which isn't a string. I agree with SMST, I would like the restriction removed over what persistent ID's we can use, it seems somewhat arbitrary - there's no reason, for example, why we can't use any simple data type which can be marshalled as an ID. Apart from the reason that it wouldn't be backwardly compatible, which is probably a good enough reason. :) ---------------------------------------------------------------------- Comment By: Steve Tregidgo (smst) Date: 2004-05-19 06:31 Message: Logged In: YES user_id=42335 I'd overlooked that note in the documentation before, and in fact developed the opposite view on what was allowed by seeing that the binary pickle format happens to allow persistent IDs containing non-printable ASCII characters. Given that the non-binary format can represent strings (containing any character, printable or not) by escaping them, it seems unfortunate that the same escaping was not applied to the saving of persistent IDs. It might be helpful if the documentation indicated that the acceptance by the binary pickle format of strings without restriction is not to be relied upon, underlining the fact that printable ASCII is all that's allowed by the format. Personally I'd like to see the restriction on persistent IDs lifted in a future version of the pickle module, but I don't have a compelling reason for it (other than it seeming to be unnecessary). On the other hand, it seems to be a limitation which hasn't caused much grief (if any) over the years... perhaps such a change (albeit a minor one) in the specifications should be left until another protocol is introduced. 
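The marshal interaction reported above is easy to reproduce on a modern Python, and so is the text-mode corruption itself. The sketch below (Python 3; the IdPickler, IdUnpickler, and roundtrip names are hypothetical illustrations, not part of the report) shows that the length prefix of a ten-character string contains byte 10, i.e. '\n', and that a protocol-0 pickle of a persistent ID containing a newline does not survive a round trip, while a binary protocol does (workaround 1 from the report):

```python
import io
import marshal
import pickle

# The length prefix of a 10-character string contains byte 10 == ord('\n').
assert b"\n" in marshal.dumps("a" * 10)

class IdPickler(pickle.Pickler):
    def persistent_id(self, obj):
        # Hypothetical scheme: treat plain strings as persistent IDs.
        return obj if isinstance(obj, str) else None

class IdUnpickler(pickle.Unpickler):
    def persistent_load(self, pid):
        return pid

def roundtrip(pid, protocol):
    buf = io.BytesIO()
    IdPickler(buf, protocol).dump(pid)
    buf.seek(0)
    return IdUnpickler(buf).load()

# Binary protocols escape the ID properly, so the newline is harmless.
assert roundtrip("bad\nid", 2) == "bad\nid"

# Protocol 0 writes "P" + pid + "\n", so the embedded newline truncates
# the ID and the bytes after it are misread as opcodes.
try:
    result = roundtrip("bad\nid", 0)
except Exception:
    result = None
assert result != "bad\nid"
```

The exact protocol-0 failure mode varies (an exception, or a truncated ID), which matches the report's "usually resulting in an exception" description.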
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2004-05-18 23:02 Message: Logged In: YES user_id=31435 The only documentation is the "Pickling and unpickling external objects" section of the Library Reference Manual, which says: """ Such objects are referenced by a ``persistent id'', which is just an arbitrary string of printable ASCII characters. """ A newline is universally considered to be a control character, not a printable character (e.g., try isprint('\n') under your local C compiler). So this is functioning as designed and as documented. If you don't find the docs clear, we should call this a documentation bug. If you think the semantics should change to allow more than printable characters, then this should become a feature request, and more is needed to define exactly which characters should be allowed. The current implementation is correct for persistent ids that meet the documented requirement. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=956303&group_id=5470 From noreply at sourceforge.net Sat Jan 6 00:15:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 15:15:48 -0800 Subject: [ python-Bugs-1629125 ] Incorrect type in PyDict_Next() example code Message-ID: Bugs item #1629125, was opened at 2007-01-05 15:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jason Evans (jasonevans) Assigned to: Nobody/Anonymous (nobody) Summary: Incorrect type in PyDict_Next() example code Initial Comment: In the PyDict_Next() documentation, there are two example snippets of code. In both snippets, the line: int pos = 0; should instead be: ssize_t pos = 0; or perhaps: Py_ssize_t pos = 0; On an LP64 system, the unfixed snippets will cause a compiler warning due to size mismatch between int and ssize_t. Using Python 2.5 on RHEL WS 4, x86_64. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 From noreply at sourceforge.net Sat Jan 6 00:57:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 15:57:08 -0800 Subject: [ python-Bugs-1629125 ] Incorrect type in PyDict_Next() example code Message-ID: Bugs item #1629125, was opened at 2007-01-05 18:15 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jason Evans (jasonevans) >Assigned to: Neal Norwitz (nnorwitz) Summary: Incorrect type in PyDict_Next() example code Initial Comment: In the PyDict_Next() documentation, there are two example snippets of code. In both snippets, the line: int pos = 0; should instead be: ssize_t pos = 0; or perhaps: Py_ssize_t pos = 0; On an LP64 system, the unfixed snippets will cause a compiler warning due to size mismatch between int and ssize_t. Using Python 2.5 on RHEL WS 4, x86_64. 
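For reference, here is the documented iteration pattern with the reported fix applied. This is a fragment only - it assumes an embedded interpreter and an existing PyObject *dict - and Py_ssize_t (rather than plain ssize_t) is the natural choice since it is the C API's own type:

```c
/* Iterate over all items of a dict at the C level.
 * pos must be Py_ssize_t: on LP64 platforms int is 32 bits while
 * Py_ssize_t is 64 bits, which is exactly the mismatch reported. */
PyObject *key, *value;
Py_ssize_t pos = 0;

while (PyDict_Next(dict, &pos, &key, &value)) {
    /* key and value are borrowed references; do not DECREF them */
}
```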
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 From noreply at sourceforge.net Sat Jan 6 01:46:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 16:46:26 -0800 Subject: [ python-Bugs-1628902 ] xml.dom.minidom parse bug Message-ID: Bugs item #1628902, was opened at 2007-01-05 17:37 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628902&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization (http://www.iptc.org/IPTC4XMP). The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story). The offending part is the one that goes: xmpPLUS='....' it triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, that goes beyond the closing " ' ". I dug a little further thru a pdb session, but the bug seems to be located in c code. That's the very first time I report a bug, chances are I provide too much or too little information... To whoever it may concern, here is the invoking code: from xml.dom import minidom ... 
class xmp(dict): def __init__(self, inStream): xmldoc = minidom.parse(inStream) .... x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending c statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS' Its obvious the scanning went beyond the attribute. ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2007-01-06 01:46 Message: Logged In: YES user_id=21627 Originator: NO Dupe of 1627096 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628902&group_id=5470 From noreply at sourceforge.net Sat Jan 6 01:52:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 16:52:59 -0800 Subject: [ python-Bugs-1628484 ] Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Message-ID: Bugs item #1628484, was opened at 2007-01-05 09:45 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Bob Atkins (bobatkins) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Initial Comment: This looks like a recurring and somewhat sore topic. For those of us that have been fighting the dreaded: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." when performing a 64 bit compile. I believe I have identified the problems. All of which are directly related to the Makefile(s) that are generated as part of the configure script. There does not seem to be anything wrong with the configure script or anything else once all of the Makefiles are corrected Python will build 64 bit Although it is possible to pass the following environment variables to configure as is typical on most open source software: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. 
-L if you have libraries in a nonstandard directory CPPFLAGS C/C++ preprocessor flags, e.g. -I if you have headers in a nonstandard directory CPP C preprocessor These flags are *not* being processed through to the generated Makefiles. This is where the problem is. configure is doing everything right and generating all of the necessary stuff for a 64 bit compile but when the compile is actually performed - the necessary CFLAGS are missing and a 32 bit compile is initiated. Taking a close look at the first failure I found the following: gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -fPIC -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c Where are my CFLAGS??? I ran the configure with: CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ CXXFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ LDFLAGS="-m64 -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ ./configure --prefix=/opt \ --enable-shared \ --libdir=/opt/lib/sparcv9 Checking the config.log and config.status it was clear that the flags were used properly as the configure script ran however, the failure is in the various Makefiles to actually reference the CFLAGS and LDFLAGS. LDFLAGS is simply not included in any of the link stages in the Makefiles and CFLAGS is overridden by BASECFLAGS, OPT and EXTRA_CFLAGS! Ah! EXTRA_CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ make Actually got the core parts to compile for the library and then failed to build the library because - LDFLAGS was missing from the Makefile for the library link stage - :-( Close examination suggests that the OPT environment variable could be used to pass the necessary flags through from configure but this still did not help the link stage problems. The fixes are pretty minimal to ensure that the configure variables are passed into the Makefile. My patch to the Makefile.pre.in is attached to this bug report. 
Once these changes are made Python will build properly for both 32 and 64 bit platforms with the correct CFLAGS and LDFLAGS passed into the configure script. BTW, while this bug is reported under a Solaris/gcc build the patches to Makefile.pre.in should fix similar build issues on all platforms. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-06 01:52 Message: Logged In: YES user_id=21627 Originator: NO Can you please report what the actual problem is that you got? I doubt it's the #error, as that error is generated by the preprocessor, yet your fix seems to deal with LDFLAGS only. So please explain what command you invoked, what the actual output was, and what the expected output was. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 From noreply at sourceforge.net Sat Jan 6 02:17:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 17:17:52 -0800 Subject: [ python-Bugs-1409443 ] frame->f_lasti not always correct Message-ID: Bugs item #1409443, was opened at 2006-01-18 16:57 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1409443&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: John Ehresman (jpe) Assigned to: Raymond Hettinger (rhettinger) Summary: frame->f_lasti not always correct Initial Comment: Contrary to the comment in ceval.c, the f_lasti field is not always correct because it is not updated by the PREDICT / PREDICTED macros. 
This means that when a GET_ITER is followed by a FOR_ITER, f_lasti will be left at the index of the GET_ITER the first time FOR_ITER is executed. I don't think this is a problem for YIELD_VALUE because it's not predicted to follow any other opcode. I'm running into this when examining bytecode in calling frames within a debugger callback. I suggest either documenting that f_lasti may be incorrect or adjusting it in the PREDICTED macro. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 20:17 Message: Logged In: YES user_id=80475 Originator: NO Expanded comment in rev 53285. IMO, the f->f_lasti is not incorrect. In effect, a successful prediction links the opcodes so that two codes function as a single new code (GET_ITER, FOR_ITER) --> GET_ITER_FOR_ITER. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-01-19 00:32 Message: Logged In: YES user_id=33168 Raymond? Given that PREDICTED was added for performance, I would lean toward updating the doc. I didn't look at the code, but I'm pretty sure John's description is accurate. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1409443&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:05:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:05:55 -0800 Subject: [ python-Bugs-1514428 ] NaN comparison in Decimal broken Message-ID: Bugs item #1514428, was opened at 2006-06-29 11:19 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1514428&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Nick Maclaren (nmm) >Assigned to: Tim Peters (tim_one) Summary: NaN comparison in Decimal broken Initial Comment: Methinks this is a bit off :-) True should be False. Python 2.5b1 (trunk:47059, Jun 29 2006, 14:26:46) [GCC 4.1.0 (SUSE Linux)] on linux2 >>> import decimal >>> d = decimal.Decimal >>> inf = d("inf") >>> nan = d("nan") >>> nan > inf True >>> nan < inf False >>> inf > nan True >>> inf < nan False b ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:05 Message: Logged In: YES user_id=80475 Originator: NO The Decimal Arithmetic Specification says that NaN comparisons should return NaN. The decimal module correctly implements this through the compare() method: >>> nan.compare(nan) Decimal('NaN') Since python's < and > operators return a boolean result, the standard is silent on what should be done. The current implementation uses the __cmp__ method which can only return -1, 0, or 1, so there is not a direct way to make both < and > both return False. If you want to go beyond the standard and have both < and > return False for all NaN comparisons, then the __cmp__ implementation would need to be replaced with rich comparisons. I'm not sure that this is desirable. IMO, that would be no better than the current arbitrary choice where all comparisons involving NaN report self > other. If someone has an application that would be harmed by the current implementation, then it should almost certainly be use the standard compliant compare() method instead of the boolean < and > operators. Tim, what say you? ---------------------------------------------------------------------- Comment By: CharlesMerriam (charlesmerriam) Date: 2006-08-23 03:43 Message: Logged In: YES user_id=1581732 More specifically, any comparison with a NaN should equal False, even inf, per IEEE 754. 
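As a point of comparison, the decimal module in current Python 3 resolved this with rich comparisons rather than __cmp__: equality with a NaN is quietly unequal, ordered comparisons signal InvalidOperation under the default context, and the spec-level compare() method returns a NaN as Raymond describes. A sketch of that behavior (modern Python assumed):

```python
from decimal import Decimal, InvalidOperation

nan = Decimal("NaN")
inf = Decimal("Infinity")

# Equality comparisons are quietly unordered: NaN compares unequal to everything.
assert not (nan == inf)
assert nan != nan

# Ordered comparisons signal InvalidOperation instead of returning an
# arbitrary boolean (the behavior complained about in this report).
try:
    nan < inf
    raised = False
except InvalidOperation:
    raised = True
assert raised

# The spec-compliant compare() method returns a NaN result.
assert nan.compare(inf).is_nan()
```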
A good starting point to convince oneself of this is http://en.wikipedia.org/wiki/NaN. ---------------------------------------------------------------------- Comment By: Nick Maclaren (nmm) Date: 2006-07-13 05:35 Message: Logged In: YES user_id=42444 It's still there in Beta 2. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1514428&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:16:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:16:46 -0800 Subject: [ python-Bugs-1105286 ] Undocumented implicit strip() in split(None) string method Message-ID: Bugs item #1105286, was opened at 2005-01-19 10:04 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1105286&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None >Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: YoHell (yohell) Assigned to: Raymond Hettinger (rhettinger) Summary: Undocumented implicit strip() in split(None) string method Initial Comment: Hi! I noticed that the string method split() first does an implicit strip() before splitting when it's used with no arguments or with None as the separator (sep in the docs). There is no mention of this implicit strip() in the docs. Example 1: s = " word1 word2 " s.split() then returns ['word1', 'word2'] and not ['', 'word1', 'word2', ''] as one might expect. WHY IS THIS BAD? 1. Because it's undocumented. See: http://www.python.org/doc/current/lib/string-methods.html#l2h-197 2. Because it may lead to unexpected behavior in programs. 
Example 2: FASTA sequence headers are one line descriptors of biological sequences and are on this form: ">" + Identifier + whitespace + free text description. Let sHeader be a Python string containing a FASTA header. One could then use the following syntax to extract the identifier from the header: sID = sHeader[1:].split(None, 1)[0] However, this does not work if sHeader contains a faulty FASTA header where the identifier is missing or consists of whitespace. In that case sID will contain the first word of the free text description, which is not the desired behavior. WHAT SHOULD BE DONE? The implicit strip() should be removed, or at least should programmers be given the option to turn it off. At the very least it should be documented so that programmers have a chance of adapting their code to it. Thank you for an otherwise splendid language! /Joel Hedlund Ph.D. Student IFM Bioinformatics Linköping University ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:16 Message: Logged In: YES user_id=80475 Originator: NO I think the current wording is clear enough and that further attempts to specify corner cases will only make the docs harder to understand and less useful. ---------------------------------------------------------------------- Comment By: YoHell (yohell) Date: 2006-11-07 09:11 Message: Logged In: YES user_id=1008220 *resubmission: grammar corrected* I'm opening this again, since the docs still don't reflect the behavior of the method. from the docs: """ If sep is not specified or is None, a different splitting algorithm is applied. First, whitespace characters (spaces, tabs, newlines, returns, and formfeeds) are stripped from both ends. """ This is not true when maxsplit is given. Example: >>> " foo bar ".split(None) ['foo', 'bar'] >>> " foo bar ".split(None, 1) ['foo', 'bar '] Whitespace is obviously not stripped from the ends before the rest of the string is split. 
---------------------------------------------------------------------- Comment By: YoHell (yohell) Date: 2006-11-07 09:06 Message: Logged In: YES user_id=1008220 I'm opening this again, since the docs still don't reflect the behavior of the method. from the docs: """ If sep is not specified or is None, a different splitting algorithm is applied. First, whitespace characters (spaces, tabs, newlines, returns, and formfeeds) are stripped from both ends. """ This is not true when maxsplit is given. Example: >>> " foo bar ".split(None) ['foo', 'bar'] >>> " foo bar ".split(None, 1) ['foo', 'bar '] Whitespace is obviously not stripping whitespace from the ends of the string before splitting the rest of the string. ---------------------------------------------------------------------- Comment By: Wummel (calvin) Date: 2005-01-24 07:51 Message: Logged In: YES user_id=9205 This should probably also be added to rsplit()? ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2005-01-24 02:15 Message: Logged In: YES user_id=593130 To me, the removal of whitespace at the ends (stripping) is consistent with the removal (or collapsing) of extra whitespace in between so that .split() does not return empty words anywhere. Consider: >>> ',1,,2,'.split(',') ['', '1', '', '2', ''] If ' 1 2 '.split() were to return null strings at the beginning and end of the list, then to be consistent, it should also put one in the middle. One can get this by being explicit (mixed WS can be handled by translation): >>> ' 1 2 '.split(' ') ['', '1', '', '2', ''] Having said this, I also agree that the extra words proposed by jj are helpful. BUG?? In 2.2, splitting an empty or whitespace only string produces an empty list [], not a list with a null word ['']. >>> ''.split() [] >>> ' '.split() [] which is what I see as consistent with the rest of the no-null- word behavior. Has this changed since? (Yes, must upgrade.) 
I could find no indication of such change in either the tracker or CVS. ---------------------------------------------------------------------- Comment By: YoHell (yohell) Date: 2005-01-20 09:59 Message: Logged In: YES user_id=1008220 Brilliant, guys! Thanks again for a superb scripting language, and with documentation to match! Take care! /Joel Hedlund ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2005-01-20 09:50 Message: Logged In: YES user_id=80475 The proposed wording is fine. If there are no objections or concerns, I'll apply it soon. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2005-01-20 09:28 Message: Logged In: YES user_id=764593 Replacing the quoted line: """ ... If sep is not specified or is None, a different splitting algorithm is applied. First whitespace (spaces, tabs, newlines, returns, and formfeeds) is stripped from both ends. Then words are separated by arbitrary length strings of whitespace characters. Consecutive whitespace delimiters are treated as a single delimiter ("'1 2 3'.split()" returns "['1', '2', '3']"). Splitting an empty (or whitespace-only) string returns "['']". """ ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2005-01-20 09:04 Message: Logged In: YES user_id=80475 What new wording do you propose to be added? ---------------------------------------------------------------------- Comment By: YoHell (yohell) Date: 2005-01-20 05:15 Message: Logged In: YES user_id=1008220 In RE to tim_one: > I think the docs for split() under "String Methods" are quite > clear: On the contrary, my friend, and here's why: > """ > ... > If sep is not specified or is None, a different splitting > algorithm is applied. This sentence does not say that whitespace will be implicitly stripped from the edges of the string. 
> Words are separated by arbitrary length strings of whitespace > characters (spaces, tabs, newlines, returns, and formfeeds). Neither does this one. > Consecutive whitespace delimiters are treated as a single delimiter ("'1 > 2 3'.split()" returns "['1', '2', '3']"). And not that one. > Splitting an empty string returns "['']". > """ And that last one does not mention it either. In fact, there is no mention in the docs of how separators on edges of strings are treated by the split method. And furthermore, there is no mention that s.split(sep) treats them differently when sep is None than it does otherwise. Example: >>> ",2,".split(',') ['', '2', ''] >>> " 2 ".split() ['2'] This inconsistent behavior is not in line with how beautifully thought out the Python language is otherwise, and how brilliantly everything else is documented on the http://python.org/doc/ documentation pages. > This won't change, because mountains of code rely on this > behavior -- it's probably the single most common use case > for .split(). I thought as much. However - it would be really easy for an admin to add a line of documentation to .split() to explain this. That would certainly help make me a happier man, and hopefully others too. Cheers guys! /Joel ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2005-01-19 11:56 Message: Logged In: YES user_id=31435 I think the docs for split() under "String Methods" are quite clear: """ ... If sep is not specified or is None, a different splitting algorithm is applied. Words are separated by arbitrary length strings of whitespace characters (spaces, tabs, newlines, returns, and formfeeds). Consecutive whitespace delimiters are treated as a single delimiter ("'1 2 3'.split()" returns "['1', '2', '3']"). Splitting an empty string returns "['']". """ This won't change, because mountains of code rely on this behavior -- it's probably the single most common use case for .split(). 
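The behavior the whole thread argues over can be pinned down with a few assertions against a current Python; a sketch summarizing the observable rules:

```python
s = "  foo  bar  "

# sep=None: runs of whitespace are delimiters and edge whitespace
# produces no empty strings -- the implicit "strip" under discussion.
assert s.split() == ["foo", "bar"]

# With maxsplit, leading whitespace is still skipped, but the unsplit
# remainder keeps its trailing whitespace (the later complaints).
assert s.split(None, 1) == ["foo", "bar  "]

# An explicit separator keeps empty strings, hence the inconsistency
# yohell points out with ",2,".split(',').
assert s.split(" ") == ["", "", "foo", "", "bar", "", ""]

# Whitespace-splitting an empty or all-whitespace string yields [],
# while an explicit separator yields [''].
assert "".split() == []
assert "   ".split() == []
assert "".split(",") == [""]
```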
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1105286&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:19:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:19:22 -0800 Subject: [ python-Bugs-1380970 ] split() description not fully accurate Message-ID: Bugs item #1380970, was opened at 2005-12-14 18:33 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1380970&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.4 >Status: Closed Resolution: None Priority: 5 Private: No Submitted By: K.C. (kace) Assigned to: Raymond Hettinger (rhettinger) Summary: split() description not fully accurate Initial Comment: The page http://docs.python.org/lib/string-methods.html reads, in part, "If sep is not specified or is None, a different splitting algorithm is applied. First, whitespace characters (spaces, tabs, newlines, returns, and formfeeds) are stripped from both ends." However, this is not the behaviour that I'm seeing. (Although, I should note that I'd find the described behaviour more desirable.) Example, >>> trow = '1586\tsome-int-name\tNODES: 111_222\n' >>> print trow 1234 some-int-name NODES: 111_222 >>> trow.split(None,2) ['1234', 'some-int-name', 'NODES: 111_222\n'] # end example. Notice that the trailing newline has not been stripped as the documentation said it should be. Thanks all. K.C. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:19 Message: Logged In: YES user_id=80475 Originator: NO I prefer the docs as they currently read. 
---------------------------------------------------------------------- Comment By: Collin Winter (collinwinter) Date: 2006-01-26 11:04 Message: Logged In: YES user_id=1344176 I've provided a patch for this: #1414934. ---------------------------------------------------------------------- Comment By: K.C. (kace) Date: 2005-12-14 18:36 Message: Logged In: YES user_id=741142 Also, (oops) the example comes from the most recent version: $ python Python 2.4.2 (#2, Oct 4 2005, 13:57:10) [GCC 3.4.2 [FreeBSD] 20040728] on freebsd5 Type "help", "copyright", "credits" or "license" for more information. >>> ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1380970&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:23:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:23:24 -0800 Subject: [ python-Bugs-1414673 ] Underspecified behaviour of string methods split, rsplit Message-ID: Bugs item #1414673, was opened at 2006-01-25 10:23 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414673&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: Collin Winter (collinwinter) Assigned to: Raymond Hettinger (rhettinger) Summary: Underspecified behaviour of string methods split, rsplit Initial Comment: The documentation for the string methods split and rsplit do not address the case where sep=None and maxsplit=0. Should this strip off the leading and trailing whitespace, but not do any splits? Should it simply return the invocant string? 
---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:23 Message: Logged In: YES user_id=80475 Originator: NO The docs for split() and rsplit() have reached their limits of complexity. Let the corner cases be defined by what the implementation currently does. IMO, any more attempts to expand these docs can only result in a decrease in clarity and usability. What is there now does a good job at showing you what you need to know to use the methods effectively. ---------------------------------------------------------------------- Comment By: Matt Fleming (splitscreen) Date: 2006-02-18 07:12 Message: Logged In: YES user_id=1126061 >From the documentation of split() "If maxsplit is given, splits at no more than maxsplit places (resulting in at most maxsplit+1 words)." I know that at the moment rsplit() and split() remove any leading whitespace but leave trailing space intact, but I would have thought leaving the string entirely intact would make more sense. Surely, to comply with the statement 'resulting in at most maxsplit+1 words)' the entire string should be returned when maxsplit=0. I can see the point that the leading whitespace isn't actually returned but i don't see why it should be discarded. Just a thought. 
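For reference, the corner case in question can be checked directly; the output below is what current CPython produces, in line with the position that the implementation defines the corner cases:

```python
s = '  spam eggs  '
# sep=None with maxsplit=0: leading whitespace is consumed, then the
# rest of the string, trailing whitespace and all, is returned whole.
print(s.split(None, 0))   # ['spam eggs  ']
# rsplit is the mirror image: trailing whitespace is consumed instead.
print(s.rsplit(None, 0))  # ['  spam eggs']
```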
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1414673&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:24:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:24:09 -0800 Subject: [ python-Bugs-1472695 ] 32/64bit pickled Random incompatibility Message-ID: Bugs item #1472695, was opened at 2006-04-18 20:10 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1472695&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: Peter Maxwell (pm67nz) Assigned to: Raymond Hettinger (rhettinger) Summary: 32/64bit pickled Random incompatibility Initial Comment: The unsigned long integers which make up the state of a Random instance are converted to Python integers via a cast to long in _randommodule.c's random_getstate function, so on a 32bit platform Random.getstate() returns a mix of positive and negative integers, while on a 64bit platform the negative numbers are replaced by larger positive numbers, their 32bit-2s-complement equivalents. As a result, unpickling a Random instance from a 64bit machine on a 32bit platform produces the error "OverflowError: long int too large to convert to int". Unpickling a 32bit Random on a 64bit machine succeeds, but the resulting object is in a slightly confused state: >>> r32 = cPickle.load(open('r32_3.pickle')) >>> for i in range(3): ... print r64.random(), r32.random() ... 
0.237964627092 4292886520.32 0.544229225296 0.544229225296 0.369955166548 4292886520.19 ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-04-25 19:26 Message: Logged In: YES user_id=31435 > do you think we should require that the world not > change for 32-bit pickles? I don't understand the question. If a pre-2.5 pickle here can be read in 2.5, where both producer & consumer are the same 32-vs-64 bit choice; and a 2.5+ pickle here is portable between 32- and 64- boxes, I'd say "good enough". While desirable, it's not really critical that a 2.5 pickle here be readable by an older Python. While that's critical for pickle in general, and critical too for everyone-uses-'em types (ints, strings, lists, ...), when fixing a bug in a specific rarely-used type's pickling strategy some slop is OK. IOW, it's just not worth heroic efforts to hide all pain. The docs should mention incompatibilities, though. Does that answer the question? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-04-25 18:00 Message: Logged In: YES user_id=80475 Tim, do you think we should require that the world not change for 32-bit pickles? ---------------------------------------------------------------------- Comment By: Peter Maxwell (pm67nz) Date: 2006-04-21 01:03 Message: Logged In: YES user_id=320286 OK, here is a candidate patch, though I don't know if it is the best way to do it or meets the style guidelines etc. It makes Random pickles interchangeable between 32bit and 64bit machines by encoding their states as Python long integers. An old pre-patch 32bit pickle loaded on a 64bit machine still fails (OverflowError: can't convert negative value to unsigned long) but I hope that combination is rare enough to ignore. Also on a 32bit machine new Random pickles can't be unpickled by a pre-patch python, but again there are limits to sane backward compatibility. 
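The mismatch comes down to 32-bit sign extension; a hypothetical helper (not the actual patch, which instead encodes the state as Python long integers) shows the masking involved:

```python
def to_unsigned32(word):
    # A state word that passed through a signed 32-bit C long comes out
    # negative when bit 31 is set; masking recovers the unsigned form
    # that a 64-bit build reports for the same word.
    return word & 0xFFFFFFFF

print(to_unsigned32(-1))           # 4294967295
print(to_unsigned32(-2147483648))  # 2147483648
print(to_unsigned32(12345))        # 12345
```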
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-04-19 02:02 Message: Logged In: YES user_id=33168 Peter, thanks for the report. Do you think you could work up a patch to correct this problem? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1472695&group_id=5470 From noreply at sourceforge.net Sat Jan 6 03:26:56 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 05 Jan 2007 18:26:56 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 11:17 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 6 Private: No Submitted By: dib (dib_at_work) >Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] 
-- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 15:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 12:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. 
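The pattern the report describes can be sketched as follows (TaggedSet and tag are illustrative names, not from the attached file); a derived class consumes its own keyword argument and forwards only the iterable to the built-in base class, which is exactly the code the over-zealous check rejected:

```python
class TaggedSet(set):
    # Accepts its own keyword argument and does not pass it down to
    # the built-in base class.
    def __init__(self, iterable=(), tag=None):
        set.__init__(self, iterable)
        self.tag = tag

s = TaggedSet([1, 2, 2], tag='demo')
print(sorted(s), s.tag)  # [1, 2] demo
```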
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Sat Jan 6 13:31:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 04:31:20 -0800 Subject: [ python-Bugs-1629369 ] email._parseaddr AddrlistClass bug Message-ID: Bugs item #1629369, was opened at 2007-01-06 12:31 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629369&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Tokio Kikuchi (tkikuchi) Assigned to: Nobody/Anonymous (nobody) Summary: email._parseaddr AddrlistClass bug Initial Comment: email._parseaddr AddrlistClass incorrectly parses a multiline comment (display name). According to RFC2822, folding white space is allowed in a display name. Thus the following header should be parsed as a single address "foo at example.com" having a multiline display name. To: Foo Bar On the other hand, the following program results in: from email.Utils import getaddresses s = """Foo Bar """ print getaddresses([s]) [('', 'Foo'), ('Bar', 'foo at example.com')] Note that the first address is not a valid one. Looks like the bug is in _parseaddr.py. Please check the patch. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629369&group_id=5470 From noreply at sourceforge.net Sat Jan 6 13:33:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 04:33:21 -0800 Subject: [ python-Bugs-1629369 ] email._parseaddr AddrlistClass bug Message-ID: Bugs item #1629369, was opened at 2007-01-06 12:31 Message generated for change (Settings changed) made by tkikuchi You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629369&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Tokio Kikuchi (tkikuchi) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: email._parseaddr AddrlistClass bug Initial Comment: email._parseaddr AddrlistClass incorrectly parses a multiline comment (display name). According to RFC2822, folding white space is allowed in a display name. Thus the following header should be parsed as a single address "foo at example.com" having a multiline display name. To: Foo Bar On the other hand, the following program results in: from email.Utils import getaddresses s = """Foo Bar """ print getaddresses([s]) [('', 'Foo'), ('Bar', 'foo at example.com')] Note that the first address is not a valid one. Looks like the bug is in _parseaddr.py. Please check the patch. 
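A workaround consistent with the report is to unfold the header first (RFC 2822 unfolding collapses a line break followed by continuation whitespace) before handing it to getaddresses; this sketch uses the modern email.utils spelling and the address from the example:

```python
from email.utils import getaddresses

# A folded header: the display name continues on a second line.
header = 'Foo\n Bar <foo@example.com>'
# Unfold: join the lines back into one logical header line.
unfolded = ' '.join(line.strip() for line in header.splitlines())
print(getaddresses([unfolded]))  # [('Foo Bar', 'foo@example.com')]
```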
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629369&group_id=5470 From noreply at sourceforge.net Sat Jan 6 16:24:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 07:24:51 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) 
I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
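The length check just described, seek to the end before flushing and compare against the size recorded at the last write, might look like this (the helper name and message are illustrative, not the committed patch; mailbox does define an ExternalClashError):

```python
import os

class ExternalClashError(Exception):
    """Raised when another process appears to have modified the file."""

def check_size_unchanged(f, expected_size):
    # Seek to the end; if the file has grown (or shrunk) since we last
    # wrote it, another program has touched the mailbox, and rewriting
    # it in place would destroy that program's changes.
    f.seek(0, os.SEEK_END)
    if f.tell() != expected_size:
        raise ExternalClashError('mailbox size changed; aborting flush')
```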
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
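The key-preserving regeneration being discussed can be sketched as a pure function (names are mine, not mailbox.py's); it also makes visible the "same sequence number means same message" assumption that the comments flag as a residual risk:

```python
def rebuild_user_toc(user_toc, new_seqs, next_user_key):
    # Keep issued keys whose file sequence numbers still exist, drop
    # keys for vanished messages, and mint fresh keys for newcomers.
    kept = {key: seq for key, seq in user_toc.items() if seq in new_seqs}
    known = set(kept.values())
    for seq in new_seqs:
        if seq not in known:
            kept[next_user_key] = seq
            next_user_key += 1
    return kept, next_user_key

# Message 1 vanished and message 3 appeared: keys 0 and 2 survive,
# and the newcomer gets the never-before-issued key 3.
toc, nxt = rebuild_user_toc({0: 0, 1: 1, 2: 2}, [0, 2, 3], 3)
print(toc, nxt)  # {0: 0, 2: 2, 3: 3} 4
```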
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. 
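In outline, the copy-back strategy from the initial comment replaces the rename with an in-place rewrite (a simplified sketch under my own names; the real patch copies from the temporary file and falls back to reopening with mode wb+ on platforms without file.truncate()):

```python
def copy_back(path, new_contents):
    # Rewriting the existing file keeps its inode alive, so a process
    # blocked on an fcntl() lock of the old descriptor still sees the
    # current mailbox once the lock is released, instead of delivering
    # into a deleted file.
    with open(path, 'rb+') as f:
        f.write(new_contents)
        f.truncate()  # shrink the file if the rewrite made it shorter
```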
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sat Jan 6 16:30:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 07:30:57 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? 
The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. 
I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) 
I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. 
turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. 
Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. 
Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sat Jan 6 20:57:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 11:57:22 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. 
Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. 
So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. 
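As an aside for readers following the thread: the usage pattern the doc patch recommends (bracket every modification with lock()/unlock(), and flush() while still holding the lock) can be sketched with the mailbox API like this; the mailbox path here is a throwaway created just for the example:

```python
import mailbox
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mbox")  # example-only path
mb = mailbox.mbox(path)

# Bracket modifications with lock()/unlock(), as the doc patch says,
# so a deliverer using fcntl or dot-locking cannot interleave with
# our rewrite of the file.
mb.lock()
try:
    mb.add("From: a@example.com\n\nhello\n")
    mb.flush()   # rewrite the mailbox while we still hold the lock
finally:
    mb.unlock()

print(len(mb))   # 1
mb.close()
```

Flushing inside the locked region is exactly what avoids the lost-delivery window described in the initial report.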
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). 
Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. 
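The length check described above amounts to only a few lines; here is a standalone illustrative sketch (check_unchanged and its signature are hypothetical names for this example, not the actual mailbox.py code):

```python
import os

class ExternalClashError(Exception):
    """Another program modified the mailbox behind our back."""

def check_unchanged(path, expected_size):
    # Before rewriting a single-file mailbox, verify that no other
    # program has appended to it since the table of contents was
    # generated; if the size differs, refuse to clobber the file.
    actual = os.path.getsize(path)
    if actual != expected_size:
        raise ExternalClashError(
            "size of %r changed: expected %d, found %d"
            % (path, expected_size, actual))
```

In the patch, flush() performs this kind of check with the size recorded at the last read, raising instead of silently discarding another process's messages.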
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. 
It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. 
I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sat Jan 6 22:19:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 13:19:05 -0800 Subject: [ python-Bugs-1627244 ] xml.dom.minidom parse bug Message-ID: Bugs item #1627244, was opened at 2007-01-03 10:04 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.4 Status: Closed Resolution: Duplicate Priority: 5 Private: No Submitted By: Pierre Imbaud (pmi) Assigned to: Nobody/Anonymous (nobody) Summary: xml.dom.minidom parse bug Initial Comment: xml.dom.minidom was unable to parse an xml file that came from an example provided by an official organization. (http://www.iptc.org/IPTC4XMP) The parsed file was somewhat hairy, but I have been able to reproduce the bug with a simplified version, attached. (It ends with .xmp: it's supposed to be an xmp file, the xmp standard being built on xml. Well, that's the short story.) The offending part is the one that goes: xmpPLUS='....' It triggers an exception: ValueError: too many values to unpack, in _parse_ns_name. Some debugging showed an obvious mistake in the scanning of the name argument, which goes beyond the closing " ' ". I dug a little further through a pdb session, but the bug seems to be located in C code. This is the very first time I have reported a bug, so chances are I provide too much or too little information... To whoever it may concern, here is the invoking code: from xml.dom import minidom ... class xmp(dict): def __init__(self, inStream): xmldoc = minidom.parse(inStream) .... 
x = xmp('/home/pierre/devt/port/IPTCCore-Full/x.xmp') traceback: /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xmpLib.py in __init__(self, inStream) 26 def __init__(self, inStream): 27 print minidom ---> 28 xmldoc = minidom.parse(inStream) 29 xmpmeta = xmldoc.childNodes[1] 30 rdf = xmpmeta.childNodes[1] /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/nxml/dom/minidom.py in parse(file, parser, bufsize) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parse(file, namespaces) 922 fp = open(file, 'rb') 923 try: --> 924 result = builder.parseFile(fp) 925 finally: 926 fp.close() /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in parseFile(self, file) 205 if not buffer: 206 break --> 207 parser.Parse(buffer, 0) 208 if first_buffer and self.document.documentElement: 209 self._setup_subset(buffer) /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in start_element_handler(self, name, attributes) 743 def start_element_handler(self, name, attributes): 744 if ' ' in name: --> 745 uri, localname, prefix, qname = _parse_ns_name(self, name) 746 else: 747 uri = EMPTY_NAMESPACE /home/pierre/devt/fileInfo/svnRep/branches/xml/xmpLib/xml/dom/expatbuilder.py in _parse_ns_name(builder, name) 125 localname = intern(localname, localname) 126 else: --> 127 uri, localname = parts 128 prefix = EMPTY_PREFIX 129 qname = localname = intern(localname, localname) ValueError: too many values to unpack The offending c statement: /usr/src/packages/BUILD/Python-2.4/Modules/pyexpat.c(582)StartElement() The returned 'name': (Pdb) name Out[5]: u'XMP Photographic Licensing Universal System (xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS' Its obvious the scanning went beyond the attribute. 
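The unpack failure is easy to reproduce in isolation: expat reports namespace-qualified names as space-separated strings, so when the malformed xmpPLUS declaration lets spaces leak into the "URI" part, the two-element unpack in _parse_ns_name fails. A simplified sketch of the failing step (not the actual fix):

```python
# The name expat handed back for the malformed attribute; the "URI"
# portion itself contains spaces, instead of the expected
# "uri localname" or "uri localname prefix" form.
name = ("XMP Photographic Licensing Universal System "
        "(xmpPLUS, http://ns.adobe.com/xap/1.0/PLUS/) CreditLineReq xmpPLUS")

parts = name.split(' ')
print(len(parts))   # 9 pieces, not the 2 or 3 expected

try:
    uri, localname = parts   # the failing line in expatbuilder.py
except ValueError as exc:
    print(exc)               # "too many values to unpack (expected 2)"
```

This matches the traceback above: the scanning bug in the C tokenizer produces a name with too many space-separated parts, and the Python-level unpack is where it finally blows up.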
---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-06 13:19 Message: Logged In: YES user_id=33168 Originator: NO Dupe of 1627096 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-03 22:32 Message: Logged In: YES user_id=33168 Originator: NO Dupe of 1627096 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627244&group_id=5470 From noreply at sourceforge.net Sat Jan 6 22:19:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 13:19:22 -0800 Subject: [ python-Bugs-1623890 ] module docstring for subprocess is wrong Message-ID: Bugs item #1623890, was opened at 2006-12-28 13:49 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1623890&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Neal Norwitz (nnorwitz) Assigned to: Neal Norwitz (nnorwitz) Summary: module docstring for subprocess is wrong Initial Comment: The module docstring for subprocess is wrong. It says: communicate(input=None) Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional stdin argument should be a string to be sent to the child process, or None, if no data should be sent to the child. I'm not sure how to word the first stdin, but the second one should definitely be input, not stdin. Also need to verify the docstring for the communicate method. I'm guessing this affects Python 2.4 and later. 
Looking at 2.4.1? right now. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-06 13:19 Message: Logged In: YES user_id=33168 Originator: YES Committed revision 53187. (2.5) Committed revision 53188. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-12-28 19:02 Message: Logged In: YES user_id=33168 Originator: YES Committed revision 53187. (2.5) Committed revision 53188. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1623890&group_id=5470 From noreply at sourceforge.net Sat Jan 6 22:19:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 13:19:32 -0800 Subject: [ python-Bugs-1545837 ] array.array borks on deepcopy Message-ID: Bugs item #1545837, was opened at 2006-08-24 02:49 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1545837&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Open >Resolution: Accepted Priority: 9 Private: No Submitted By: Václav Haisman (wilx) >Assigned to: Neal Norwitz (nnorwitz) Summary: array.array borks on deepcopy Initial Comment: Hi, I think there is a bug in arraymodule.c on this line: {"__deepcopy__",(PyCFunction)array_copy, METH_NOARGS, copy_doc}; it should probably have METH_O instead of METH_NOARGS there, since according to the docs and the prototype of the array_copy() function there is one parameter. 
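The mixup is easy to see from the Python side: copy.deepcopy() calls __deepcopy__ with one argument (the memo dictionary), so the method needs the METH_O calling convention. On an interpreter that includes the fix mentioned in the thread, the reported case simply works:

```python
import copy
from array import array

a = array('i', [1, 2, 3])
b = copy.deepcopy(a)   # invokes a.__deepcopy__(memo): one argument, hence METH_O

assert b == a and b is not a
b[0] = 99              # mutating the copy...
assert a[0] == 1       # ...leaves the original untouched
print(b)               # array('i', [99, 2, 3])
```

With METH_NOARGS, the extra memo argument made the call fail before array_copy() ever ran, which is why deepcopy of arrays had never worked.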
---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-06 13:19 Message: Logged In: YES user_id=33168 Originator: NO Thomas, was there any reason this wasn't checked in to 2.5? I'm guessing it was just forgotten. I don't see any mention in Misc/NEWS either. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-12-29 07:05 Message: Logged In: YES user_id=34209 Originator: NO Backported. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-12-28 01:11 Message: Logged In: YES user_id=34209 Originator: NO The 2.5 branch at the time was still pre-2.5.0, and we didn't want to make this change between release candidate and release. It's safe to be checked into release25-maint now. I'll do it sometime tonight. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-12-27 23:59 Message: Logged In: YES user_id=33168 Originator: NO Thomas, was there any reason this wasn't checked in to 2.5? I'm guessing it was just forgotten. I don't see any mention in Misc/NEWS either. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-08-29 00:35 Message: Logged In: YES user_id=34209 Not unless you want another release candidate. copy.deepcopy has never worked on array instances, so it's not a release-preventing bug (but each bugfix may *add* a release-preventing bug by accident :) ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-08-28 06:32 Message: Logged In: YES user_id=80475 Should this be fixed in the release candidate? 
---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-08-24 11:50 Message: Logged In: YES user_id=34209 Thanks! Fixed in the trunk (which is 2.6-to-be) revision 51565, and it will also be fixed for Python 2.4.4 and 2.5.1. It's unfortunately just a bit too late for 2.5.0. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1545837&group_id=5470 From noreply at sourceforge.net Sat Jan 6 22:21:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 13:21:54 -0800 Subject: [ python-Bugs-1545837 ] array.array borks on deepcopy Message-ID: Bugs item #1545837, was opened at 2006-08-24 02:49 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1545837&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 9 Private: No Submitted By: Václav Haisman (wilx) >Assigned to: Thomas Wouters (twouters) Summary: array.array borks on deepcopy Initial Comment: Hi, I think there is a bug in arraymodule.c on this line: {"__deepcopy__",(PyCFunction)array_copy, METH_NOARGS, copy_doc}; it should probably have METH_O instead of METH_NOARGS there, since according to the docs and the prototype of the array_copy() function there is one parameter. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-06 13:21 Message: Logged In: YES user_id=33168 Originator: NO Stupid SF. 
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-06 13:19 Message: Logged In: YES user_id=33168 Originator: NO Thomas, was there any reason this wasn't checked in to 2.5? I'm guessing it was just forgotten. I don't see any mention in Misc/NEWS either. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-12-29 07:05 Message: Logged In: YES user_id=34209 Originator: NO Backported. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-12-28 01:11 Message: Logged In: YES user_id=34209 Originator: NO The 2.5 branch at the time was still pre-2.5.0, and we didn't want to make this change between release candidate and release. It's safe to be checked into release25-maint now. I'll do it sometime tonight. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-12-27 23:59 Message: Logged In: YES user_id=33168 Originator: NO Thomas, was there any reason this wasn't checked in to 2.5? I'm guessing it was just forgotten. I don't see any mention in Misc/NEWS either. ---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-08-29 00:35 Message: Logged In: YES user_id=34209 Not unless you want another release candidate. copy.deepcopy has never worked on array instances, so it's not a release-preventing bug (but each bugfix may *add* a release-preventing bug by accident :) ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-08-28 06:32 Message: Logged In: YES user_id=80475 Should this be fixed in the release candidate? 
---------------------------------------------------------------------- Comment By: Thomas Wouters (twouters) Date: 2006-08-24 11:50 Message: Logged In: YES user_id=34209 Thanks! Fixed in the trunk (which is 2.6-to-be) revision 51565, and it will also be fixed for Python 2.4.4 and 2.5.1. It's unfortunately just a bit too late for 2.5.0. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1545837&group_id=5470 From noreply at sourceforge.net Sat Jan 6 22:37:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 13:37:03 -0800 Subject: [ python-Bugs-1629566 ] documentation of email.utils.parsedate is confusing Message-ID: Bugs item #1629566, was opened at 2007-01-06 15:37 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629566&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Nicholas Riley (nriley) Assigned to: Nobody/Anonymous (nobody) Summary: documentation of email.utils.parsedate is confusing Initial Comment: This sentence in the documentation for email.utils.parsedate confused me: "Note that fields 6, 7, and 8 of the result tuple are not usable." These indices are zero-based, so it's actually fields 7, 8 and 9 that they are talking about (in normal English). Either this should be changed to 7-9 or be re-expressed in a way that makes it clear it's zero-based, for example by using Python indexing notation. Thanks. 
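The zero-based indexing at issue is easy to see interactively (date string chosen arbitrarily for illustration):

```python
from email.utils import parsedate

t = parsedate('Sat, 06 Jan 2007 14:04:47 -0800')
# The zero-based fields t[6], t[7], t[8] -- "fields 7, 8 and 9" when counting
# from one -- are dummy weekday/yearday/DST values, not parsed data.
print(t[:6])   # the usable part: (year, month, day, hour, minute, second)
print(t[6:])   # the placeholder part: (0, 1, -1)
```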
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629566&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:04:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:04:47 -0800 Subject: [ python-Bugs-889153 ] asyncore.dispatcher: incorrect connect Message-ID: Bugs item #889153, was opened at 2004-02-02 11:04 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sankov Dmitry Alexandrovich (sankov_da) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.dispatcher: incorrect connect Initial Comment: When I use a non-blocking socket, the connect() method of the asyncore.dispatcher class appears to work incorrectly. Example: if the connection has not been established, the socket is merely closed; handle_error is not called and no exception is raised. Another example: if the writable() and readable() methods return zero, handle_connect() will never be called even if the connection is eventually established. Thanks. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 03:22 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem.
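The behaviour being reported follows from non-blocking connect semantics; a minimal sketch outside asyncore (the address is an arbitrary local port assumed to have no listener):

```python
import errno
import select
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
# A non-blocking connect() normally reports "in progress" even when the
# connection will ultimately be refused...
err = s.connect_ex(('127.0.0.1', 1))
# ...so the real outcome only becomes visible later, once select() says the
# socket is ready and SO_ERROR is inspected -- the step that never happens if
# the dispatcher's readable()/writable() keep the socket out of the poll.
select.select([], [s], [s], 1.0)
outcome = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
s.close()
```

On a refused connect, err is typically EINPROGRESS (or EWOULDBLOCK on Windows) and outcome is typically ECONNREFUSED.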
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:04:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:04:47 -0800 Subject: [ python-Bugs-760475 ] asyncore.py and "handle_error" Message-ID: Bugs item #760475, was opened at 2003-06-25 09:11 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=760475&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jesús Cea Avión (jcea) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.py and "handle_error" Initial Comment: When an uncaught exception arises in "asyncore", the method "handle_error" is called. This method calls "self.close()" when it MUST call "self.handle_close()", in order to correctly support the "delegation" design pattern, for example. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2003-06-25 14:11 Message: Logged In: YES user_id=31392 Can you expand on your comments? I don't know what the delegation design pattern you refer to is. Can you provide an example of why it is necessary that asyncore not call close()?
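One way to picture the requested delegation behaviour (illustrative stand-in classes, not asyncore's actual code):

```python
class Dispatcher:
    """Minimal stand-in for asyncore.dispatcher's close paths."""
    def close(self):
        self.closed = True

    def handle_close(self):            # the overridable cleanup hook
        self.close()

    def handle_error_current(self):    # what the report says asyncore does
        self.close()                   # bypasses any subclass hook

    def handle_error_proposed(self):   # what the report asks for
        self.handle_close()            # keeps the subclass hook in the loop


class Delegating(Dispatcher):
    """A wrapper that must observe every close to notify its delegate."""
    events = []

    def handle_close(self):
        Delegating.events.append('closed')   # delegate notification step
        Dispatcher.handle_close(self)


old = Delegating()
old.handle_error_current()    # hook bypassed: nothing recorded
new = Delegating()
new.handle_error_proposed()   # hook runs: one 'closed' event recorded
```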
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=760475&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:04:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:04:47 -0800 Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error Message-ID: Bugs item #539444, was opened at 2002-04-04 15:57 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeremy Hylton (jhylton) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore file wrapper & os.error Initial Comment: The file wrapper makes a file descriptor look like an asyncore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-04-07 05:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 11:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe.
For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php?func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too.

error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc>
(exceptions.OSError:[Errno 11] Resource temporarily unavailable
[/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|poll|92]
[/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|handle_read_event|386]
[/home/zope/opt/Python-2.1.2/lib/python2.1/site-packages/ZEO/trigger.py|handle_read|95]
[/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|338]
[/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|520])
Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored

---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-04-05 04:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:04:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:04:46 -0800 Subject: [ python-Bugs-953599 ] asyncore misses socket closes when poll is used Message-ID: Bugs item #953599, was opened at 2004-05-13 17:47 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: None Status: Open Resolution: None Priority: 6 Private: No Submitted By: Shane Kerr (shane_kerr) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore misses socket closes when poll is used Initial Comment: Problem: If the loop() function of asyncore is invoked with poll rather than select, the function readwrite() is used when I/O is available on a socket. However, this function does not check for hangup - provided by POLLHUP. If a socket is attempting to write, then POLLOUT never gets set, so the socket hangs. Because poll() is returning immediately, but the return value is never used, asyncore busy-loops, consuming all available CPU. Possible solutions: The easy solution is to check for POLLHUP in the readwrite() function:

    if flags & (select.POLLOUT | select.POLLHUP):
        obj.handle_write_event()

This makes the poll work exactly like the select - the application raises a socket.error set to EPIPE. An alternate solution - possibly more graceful - is to invoke the handle_close() method of the object:

    if flags & select.POLLHUP:
        obj.handle_close()
    else:
        if flags & select.POLLIN:
            obj.handle_read_event()
        if flags & select.POLLOUT:
            obj.handle_write_event()

This is incompatible with the select model, but it means that the read and write logic is now the same for socket hangups - handle_close() is invoked. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-07-02 09:56 Message: Logged In: YES user_id=410460 Perhaps, it would be better to raise an exception:

def readwrite(obj, flags):
    try:
        if flags & (select.POLLIN | select.POLLPRI):
            obj.handle_read_event()
        if flags & select.POLLOUT:
            obj.handle_write_event()
        if flags & (select.POLLERR | select.POLLHUP | select.POLLNVAL):
            obj.handle_expt_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()
...
def handle_expt_event(self):
    err = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    assert(err != 0)
    raise socket.error, (err, errorcode[err])

Since asyncore closes the socket in handle_error, this solves the problem too. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:04:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:04:47 -0800 Subject: [ python-Bugs-1161031 ] Neverending warnings from asyncore Message-ID: Bugs item #1161031, was opened at 2005-03-10 19:34 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161031&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Tony Meyer (anadelonbrin) >Assigned to: Josiah Carlson (josiahcarlson) Summary: Neverending warnings from asyncore Initial Comment: Changes in asyncore from 2.3 to 2.4 mean that asyncore.poll() now passes all the sockets in the map to select.select() to be checked for errors, which is probably a good thing. If an error occurs, then handle_expt() is called, which by default logs the error. asyncore.dispatcher creates nonblocking sockets. When connect_ex() is called on a nonblocking socket, it will probably return EWOULDBLOCK (connecting takes time), which may mean the connection is successful, or may not (asyncore dispatcher keeps going assuming all is well). If the connection is not successful, and then asyncore.loop() is called, then select.select() will indicate that there is an error with the socket (can't connect) and the error will be logged.
The trouble is that asyncore.loop then keeps going, and will log this error again. My not-that-fast system here gets about 10,000 logged messages per second with a single socket in the asyncore map. There are ways to avoid this: (1) if the socket is blocking when connect()ing (and then nonblocking afterwards) an error is raised if the connect fails. (2) Before calling asyncore.loop(), the caller can run through all the sockets, checking that they are ok. (3) handle_expt() can be overridden with a function that repairs or removes the socket from the map (etc) However, I'm not convinced that this is good behavior for asyncore to have, by default. On Windows, select.select() will only indicate an error when trying to connect (nonblocking) or if SO_OOBINLINE is disabled, but this may not be the case (i.e. errors can occur at other times) with other platforms, right? Unless the error is temporary, asyncore will by default start streaming (extremely fast) a lot of "warning: unhandled exception" (not very helpful an error message, either) messages. Even if the error only lasts a minute, that could easily result in 10,000 warnings being logged. Do any of the python developers agree that this is a flaw? I'm happy to work up a patch for whatever the desired solution is, if so. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2005-06-09 12:11 Message: Logged In: YES user_id=31435 My guess is rev 1.57. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2005-06-09 11:41 Message: Logged In: YES user_id=11375 What change to asyncore caused the problem? 
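Workaround (3) from the report above -- overriding handle_expt() so a broken channel is removed instead of logged forever -- can be sketched with a stand-in class (the real code would subclass asyncore.dispatcher; Channel/QuietChannel are illustrative names):

```python
class Channel:
    """Stand-in for an asyncore.dispatcher-like channel."""
    def __init__(self):
        self.in_map = True       # whether the channel is still polled
        self.warnings = 0

    def close(self):
        self.in_map = False      # a closed channel leaves the socket map

    def handle_expt(self):
        self.warnings += 1       # default behaviour: log on every poll pass


class QuietChannel(Channel):
    """Treat a socket error as fatal: log once, then stop being polled."""
    def handle_expt(self):
        self.warnings += 1
        self.close()


ch = QuietChannel()
for _ in range(10000):           # simulate the tight polling loop
    if ch.in_map:                # a closed channel is no longer selected
        ch.handle_expt()
```

With the default behaviour the loop above would record 10,000 warnings; closing on the first error records exactly one.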
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2005-06-02 22:02 Message: Logged In: YES user_id=31435 I agree the change in default behavior here was at least questionable, and spent more than a week of my own "dealing with consequences" of 2.4 asyncore changes in ZODB and Zope. Assigning to Andrew, since it looks like he made the change in question here. Andrew, why did this change? How can it be improved? I don't think Tony has mentioned it here, but when SpamBayes was first released with Python 2.4, it was a disaster because some users found their hard drives literally filled with gigabytes of mysterious "warning: unhandled exception" messages. ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-06-02 21:38 Message: Logged In: YES user_id=552329 I am not at all unwilling (and this isn't a problem for me personally - the issue here is whether this is good for Python in general) to subclass. Deciding to subclass does *not* mean that you should have to replace all functions in the parent class. If you did, then there would be little point in subclassing at all. Sensible default behaviour should be provided as methods of classes. The current behaviour of the handle_expt() method is not sensible. It essentially forces the user to override that method, even if they have no interest in handling errors (e.g. and would normally just override handle_read and handle_write). This is *not* rare. You haven't seen any in years, because this was a change introduced in Python 2.4, which hasn't been released for even one year yet. I agree that the desired behaviour will be application specific. But what is the point of having default behaviour that will essentially crash the program/system running it? Having default behaviour be "pass" would be more useful. 
At the very least, this is a problem that many people (compared to the number that will use asyncore) will come across and should be reflected as such in the documentation. If you haven't replicated this problem on your system so that you understand it, please do. I am happy to provide a simple script to demonstrate, if necessary. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 15:34 Message: Logged In: YES user_id=341410 You seem to be unwilling to subclass asyncore.dispatcher to extend its functionality, and the only reason you have given as to why you are unwilling is "As much as possible a class should provide sensible methods, so that overriding is kept to a minimum." (I personally subclass dispatcher and its async_chat derivative quite often) Now, in the case of the other standard socket server and client framework in the Python standard library, namely the SocketServer module and its derivatives, you will find that extending the functionality of those classes is done via subclassing and overriding methods as necessary. To me, when two 'competing' methods of generating socket servers and clients in the standard library offer the same method of extension of their base functionality, then perhaps that is what should be done. The fact that basically all of the standard library is subclassable (some C modules are exceptions to the rule, but should be fixed in Python 2.5), including types in the base language, further suggests to me that subclassing is the standard mechanism for extending the functionality of a class, regardless of its usefulness in its base state. In regards to the documentation, it seems that whenever an object has an error, the handle_expt() method is called (from spending two minutes reading the source).
Whether or not those errors are rare is perhaps debatable (I've not seen any in years), but it seems to be application-specific as to what behavior the socket should have in the case of an error (you may want to close, I may want to report the error and reconnect, etc.). ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-05-31 03:42 Message: Logged In: YES user_id=552329 dispatcher is not at all unusable without subclassing. You can get data with recv() and send it with send() etc. It can be treated as a thin wrapper around socket objects. Yes, you will want to subclass it to get more useful behaviour than you can get from a basic socket. I don't see that this means that you should be required to override the handle_expt() function, though. As much as possible a class should provide sensible methods, so that overriding is kept to a minimum. At the very least, this is a documentation error, since the documentation states: """ handle_expt( ) Called when there is out of band (OOB) data for a socket connection. This will almost never happen, as OOB is tenuously supported and rarely used. """ "Almost never" is completely wrong. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 03:31 Message: Logged In: YES user_id=341410 I hate to point out the obvious, but dispatcher is wholly unusable without subclassing. How would you get data to/from a connection without replacing handle_read, handle_write? How do you handle the case when you want to connect to someone else or accept connections from someone else without overloading handle_connect or handle_accept? ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-05-31 03:15 Message: Logged In: YES user_id=552329 Yes this problem is easily solved by subclassing.
However I firmly believe that it is a terrible default behaviour, and that it's likely to hit many asyncore users. A class shouldn't have to be subclassed to be usable (ignoring virtual classes and all that), and that really is the case here. The simplest solution would be to change the handler to not log the message. Or log the message once per socket or something. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 03:03 Message: Logged In: YES user_id=341410 Option 1 is not really an option in any case where a large number of connections are opened (so I don't believe should be the default). >From what I understand, certain methods are supposed to be overridden in a subclass if someone using a class wants different behavior. In this case, I believe you can perform either option 2 or 3 in your own code to avoid the thousands of logged lines; either by creating your own loop() function, or by creating a subclass of dispatcher and implementing a handle_expt() method that does as you desire. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161031&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:06:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:06:13 -0800 Subject: [ python-Bugs-658749 ] asyncore connect() and winsock errors Message-ID: Bugs item #658749, was opened at 2002-12-26 13:25 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore connect() and winsock errors Initial Comment: asyncore's connect() method should interpret the winsock errors; these are different from Unix (and different between the Win98 family and the Win2k family). ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 03:24 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:06:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:06:13 -0800 Subject: [ python-Bugs-654766 ] asyncore.py and "handle_expt" Message-ID: Bugs item #654766, was opened at 2002-12-16 13:42 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jesús Cea Avión (jcea) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.py and "handle_expt" Initial Comment: Python 2.2.2 here. Asyncore.py never invokes "handle_expt" ("handle_expt" is documented in the docs). Handling OOB data is essential to handle "connection refused" errors in Windows, for example.
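For context, the out-of-band condition that handle_expt() is documented for is what select() reports in its "exceptional" set; a loopback sketch (port and payload arbitrary; behaviour as observed on typical Unix systems):

```python
import select
import socket

srv = socket.socket()
srv.bind(('127.0.0.1', 0))            # any free port
srv.listen(1)
cli = socket.socket()
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.send(b'!', socket.MSG_OOB)        # one urgent (out-of-band) byte
# While the OOB byte is pending, conn is reported in the exceptional set --
# the event that would cause asyncore to call handle_expt().
r, w, x = select.select([conn], [], [conn], 1.0)
for s in (cli, conn, srv):
    s.close()
```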
---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 03:24 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:06:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:06:13 -0800 Subject: [ python-Bugs-1063924 ] asyncore should handle ECONNRESET in send Message-ID: Bugs item #1063924, was opened at 2004-11-10 11:27 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1063924&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Guillaume (efge) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore should handle ECONNRESET in send Initial Comment: asyncore.dispatcher.send doesn't handle ECONNRESET, whereas recv correctly does. When such an error occurs, Zope displays for instance: ERROR(200) ZServer uncaptured python exception, closing channel (socket.error:(104, 'Connection reset by peer') [/usr/local/lib/python2.3/asynchat.py|initiate_send|218] [/usr/local/zope/2.7.2/lib/python/ZServer/medusa/http_server.py|send|417] [/usr/local/lib/python2.3/asyncore.py|send|337]) zhttp_channel is just a subclass of http_channel. The exception is raised by asyncore itself when it receives the unhandled error. 
A proposed fix would be to do the same kind of handling as is done in recv, but I don't know enough about asyncore to know if it's correct. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1063924&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:06:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:06:13 -0800 Subject: [ python-Bugs-777588 ] asyncore is broken for windows if connection is refused Message-ID: Bugs item #777588, was opened at 2003-07-25 10:43 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777588&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Garth Bushell (garth42) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore is broken for windows if connection is refused Initial Comment: asyncore.poll is broken on Windows. If a connection is refused, it will hang forever and never raise an exception. The select statement never checks the exceptfds. This is needed because this is where Windows reports failed connections. The documentation in the Microsoft Platform SDK mentions this. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winsock/winsock/select_2.asp The suggested fix is shown below, although it is untested.
The correct error number is received from getsockopt(SOL_SOCKET, SO_ERROR)

def poll(timeout=0.0, map=None):
    if map is None:
        map = socket_map
    if map:
        r = []; w = []; e = []
        for fd, obj in map.items():
            if obj.readable():
                r.append(fd)
            if obj.writable():
                w.append(fd)
            if sys.platform == 'win32':
                if not obj.connected:
                    e.append(fd)
        if [] == r == w == e:
            time.sleep(timeout)
        else:
            try:
                r, w, e = select.select(r, w, e, timeout)
            except select.error, err:
                if err[0] != EINTR:
                    raise
                else:
                    return
        if sys.platform == 'win32':
            for fd in e:
                obj = map.get(fd)
                if obj is None:
                    continue
                errno = obj.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
                raise socket.error, (errno, socket.errorTab[errno])
        for fd in r:
            obj = map.get(fd)
            if obj is None:
                continue
            read(obj)
        for fd in w:
            obj = map.get(fd)
            if obj is None:
                continue
            write(obj)

---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 03:23 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- Comment By: John J Smith (johnjsmith) Date: 2003-07-29 08:49 Message: Logged In: YES user_id=830565 I was bitten by the same problem. My workaround (in a Tkinter application) is given below. Would it make sense to modify poll() to simply add the union of r and w to e, and call handle_error() for any fd in e? Workaround:

        try:
            self.connect(send_addr)
        except socket.error:
            self.handle_error()
        if sys.platform == 'win32':
            # Win98 select() doesn't seem to report errors for a
            # non-blocking connect().
            self.__connected = 0
            self.__frame.after(2000, self.__win_connect_poll)
        ...
    if sys.platform == 'win32':
        def __win_connect_poll (self):
            if self.__connected:
                return
            e = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            if e in (0, errno.EINPROGRESS, errno.WSAEINPROGRESS):
                self.__frame.after(2000, self.__win_connect_poll)
            else:
                try:
                    str = socket.errorTab[e]
                except KeyError:
                    str = os.strerror(e)
                try:
                    raise socket.error(e, str)
                except socket.error:
                    self.handle_error()

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777588&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:06:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:06:13 -0800 Subject: [ python-Bugs-1025525 ] asyncore.file_dispatcher should not take fd as argument Message-ID: Bugs item #1025525, was opened at 2004-09-09 22:14 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: david houlder (dhoulder) >Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.file_dispatcher should not take fd as argument Initial Comment: Only relevant to posix. asyncore.file_dispatcher closes the file descriptor behind the file object, and not the file object itself. When another file gets opened, it gets the next available fd, which on posix, is the one just released by the close. Tested on python 2.2.3 on RedHat Enterprise Linux 3 and python 2.2.1 on HP Tru64 unix. See attached script for details and a solution. 'case 1' should show the problem regardless of the garbage collection strategy in python.
'case 2' relies on the file object being closed as soon as the last reference to it disappears, which seems to be the (current?) behaviour.

[djh900 at dh djh900]$ python file_dispatcher_bug.py
case 1:
(Read 'I am the first pipe\n' from pipe)
(pipe closing. fd== 3 )
(Read '' from pipe)
firstPipe.read() says 'I am the second pipe\n'
firstPipe.fileno()== 3
secondPipe.fileno()== 3
case 2:
(Read 'I am the first pipe\n' from pipe)
(pipe closing. fd== 3 )
(Read '' from pipe)
secondPipe.fileno()== 3
dispatcher.secondPipe.read() says
Traceback (most recent call last):
  File "file_dispatcher_bug.py", line 77, in ?
    print "dispatcher.secondPipe.read() says", repr(dispatcher.secondPipe.read())
IOError: [Errno 9] Bad file descriptor
[djh900 at dh djh900]$

---------------------------------------------------------------------- Comment By: david houlder (dhoulder) Date: 2004-11-17 18:43 Message: Logged In: YES user_id=1119185 In an ideal world I'd propose replacing the guts of file_wrapper() and file_dispatcher() by my pipe_wrapper() and PipeDispatcher(), since the general problem of closing the file descriptor behind the python object applies to all python objects that are based on a file descriptor, not just pipes. So, yes, probably best not to call it pipe_dispatcher(). And I guess file_dispatcher() may be in use by other peoples' code and changing it to take a file object rather than an fd will break that. Maybe file_dispatcher.__init__() could be changed to take either an integer file descriptor or a file object as its argument, and behave like the current file_dispatcher() when given an fd, and like pipe_dispatcher() when given a file-like object (i.e. any object with fileno() and close() methods will probably be enough). I'm happy to whip up an example if people think that's a good idea.
---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-11-07 10:23 Message: Logged In: YES user_id=31392 I'm not sure whether you propose a change to asyncore or are describing a pattern that allows you to use a pipe with it safely. And, looking at your code more closely, I think pipe is confusing, because you're not talking about os.pipe() right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 From noreply at sourceforge.net Sat Jan 6 23:48:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 14:48:59 -0800 Subject: [ python-Bugs-1025525 ] asyncore.file_dispatcher should not take fd as argument Message-ID: Bugs item #1025525, was opened at 2004-09-09 19:14 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: david houlder (dhoulder) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.file_dispatcher should not take fd as argument Initial Comment: Only relevant to posix. asyncore.file_dispatcher closes the file descriptor behind the file object, and not the file object itself. When another file gets opened, it gets the next available fd, which on posix, is the one just released by the close. Tested on python 2.2.3 on RedHat Enterprise Linux 3 and python 2.2.1 on HP Tru64 unix. See attached script for details and a solution. 'case 1' should show the problem regardless of the garbage collection strategy in python. 
'case 2' relies on the file object being closed as soon as the last reference to it disappears, which seems to be the (current?) behaviour.

[djh900 at dh djh900]$ python file_dispatcher_bug.py
case 1:
(Read 'I am the first pipe\n' from pipe)
(pipe closing. fd== 3 )
(Read '' from pipe)
firstPipe.read() says 'I am the second pipe\n'
firstPipe.fileno()== 3
secondPipe.fileno()== 3
case 2:
(Read 'I am the first pipe\n' from pipe)
(pipe closing. fd== 3 )
(Read '' from pipe)
secondPipe.fileno()== 3
dispatcher.secondPipe.read() says
Traceback (most recent call last):
  File "file_dispatcher_bug.py", line 77, in ?
    print "dispatcher.secondPipe.read() says", repr(dispatcher.secondPipe.read())
IOError: [Errno 9] Bad file descriptor
[djh900 at dh djh900]$

---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 14:48 Message: Logged In: YES user_id=341410 Originator: NO I believe that asyncore.file_dispatcher taking a file descriptor is fine. The problem is that the documentation doesn't suggest that you os.dup() the file handle so that the original handle (from a pipe, file, etc.) can be closed independently of the one being used by the file_dispatcher. In the case of socket.makefile(), the duplication is done automatically, so there isn't the same problem. My suggested fix would be to accept a file or a file handle. For files, we first get its file number via the standard f.fileno(), and with that, or the handle we are provided, we os.dup() the handle.
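The dup()-based fix described above can be sketched as follows; DupFileWrapper is a hypothetical name for illustration, not the stdlib class:

```python
import os

class DupFileWrapper:
    """Sketch of the suggested fix: accept either a file object or an
    integer fd, then os.dup() it so the wrapper owns a private
    descriptor that outlives the caller's handle."""

    def __init__(self, f):
        # f.fileno() for file-like objects, else assume an integer fd
        fd = f.fileno() if hasattr(f, 'fileno') else f
        self.fd = os.dup(fd)

    def recv(self, n):
        return os.read(self.fd, n)

    def close(self):
        os.close(self.fd)
```

Because the wrapper holds a duplicate, the caller can close the original descriptor without invalidating the wrapper, which avoids exactly the fd-reuse problem in the bug report.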
---------------------------------------------------------------------- Comment By: david houlder (dhoulder) Date: 2004-11-17 15:43 Message: Logged In: YES user_id=1119185 In an ideal world I'd propose replacing the guts of file_wrapper() and file_dispatcher() by my pipe_wrapper() and PipeDispatcher(), since the general problem of closing the file descriptor behind the python object applies to all python objects that are based on a file descriptor, not just pipes. So, yes, probably best not to call it pipe_dispatcher(). And I guess file_dispatcher() may be in use by other people's code and changing it to take a file object rather than an fd will break that. Maybe file_dispatcher.__init__() could be changed to take either an integer file descriptor or a file object as its argument, and behave like the current file_dispatcher() when given an fd, and like pipe_dispatcher() when given a file-like object (i.e. any object with fileno() and close() methods will probably be enough). I'm happy to whip up an example if people think that's a good idea. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-11-07 07:23 Message: Logged In: YES user_id=31392 I'm not sure whether you propose a change to asyncore or are describing a pattern that allows you to use a pipe with it safely. And, looking at your code more closely, I think pipe is confusing, because you're not talking about os.pipe() right?
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 From noreply at sourceforge.net Sun Jan 7 00:02:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 15:02:31 -0800 Subject: [ python-Bugs-760475 ] asyncore.py and "handle_error" Message-ID: Bugs item #760475, was opened at 2003-06-25 06:11 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=760475&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jesús Cea Avión (jcea) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.py and "handle_error" Initial Comment: When an uncaught exception arises in "asyncore", the method "handle_error" is called. This method calls "self.close()" when it MUST call "self.handle_close()", in order to correctly support the "delegation" design pattern, for example. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 15:02 Message: Logged In: YES user_id=341410 Originator: NO While the default .close() method is called inside .handle_close(), not calling .handle_close() in asyncore prevents any subclassed .handle_close() behavior from being run. Say, for example, that a user has written a subclass where within .handle_connect() the socket is registered somewhere (perhaps for I/O statistics in an FTP or Bittorrent application). Where it would make sense to place the unregistration code is within a .handle_close() method, which is bypassed by the standard .handle_error() code.
I suggest switching to the self.handle_close() call at the end of handle_error(). Doing so preserves the passing of the test suite on release 2.5 on Windows. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2003-06-25 11:11 Message: Logged In: YES user_id=31392 Can you expand on your comments? I don't know what the delegation design pattern you refer to is. Can you provide an example of why it is necessary that asyncore not call close()? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=760475&group_id=5470 From noreply at sourceforge.net Sun Jan 7 00:05:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 15:05:10 -0800 Subject: [ python-Bugs-889153 ] asyncore.dispactcher: incorrect connect Message-ID: Bugs item #889153, was opened at 2004-02-02 08:04 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sankov Dmitry Alexandrovich (sankov_da) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.dispactcher: incorrect connect Initial Comment: When I use a non-blocking socket, the connect() method of the asyncore.dispatcher class appears to work incorrectly. Example: if the connection has not been established, the socket is merely closed; handle_error is not called and no exception is thrown. One more example: if the writable() and readable() methods return zero, then handle_connect() will never be called even if the connection is established. Thanks.
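For context, a failed non-blocking connect really is silent at connect time. The following sketch (which assumes nothing is listening on loopback port 1) shows that connect_ex() merely reports the attempt as in progress rather than raising:

```python
import errno
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
# connect_ex() on a non-blocking socket returns an errno instead of
# raising; a refused connection is typically not visible yet -- the
# call just reports that the attempt is still in flight.
err = s.connect_ex(('127.0.0.1', 1))   # assumed: no listener on port 1
assert err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK, errno.ECONNREFUSED)
s.close()
```

The failure only surfaces later, when the socket is polled, which is why a dispatcher that never polls (writable()/readable() returning zero) never learns the connect failed.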
---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 15:05 Message: Logged In: YES user_id=341410 Originator: NO It sounds as though the original poster is passing a socket that has been created, but which is not yet connected, to the dispatcher constructor. We should update the documentation to state that either the user should pass a completely connected socket (as returned by socket.accept(), or which has connected as the result of a blocking socket.connect() call), or use the .create_socket() and .connect() methods of the dispatcher. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 00:22 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 From noreply at sourceforge.net Sun Jan 7 00:11:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 15:11:32 -0800 Subject: [ python-Bugs-1161031 ] Neverending warnings from asyncore Message-ID: Bugs item #1161031, was opened at 2005-03-10 16:34 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161031&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Tony Meyer (anadelonbrin) >Assigned to: A.M.
Kuchling (akuchling) Summary: Neverending warnings from asyncore Initial Comment: Changes in asyncore from 2.3 to 2.4 mean that asyncore.poll() now passes all the sockets in the map to select.select() to be checked for errors, which is probably a good thing. If an error occurs, then handle_expt() is called, which by default logs the error. asyncore.dispatcher creates nonblocking sockets. When connect_ex() is called on a nonblocking socket, it will probably return EWOULDBLOCK (connecting takes time), which may mean the connection is successful, or may not (asyncore dispatcher keeps going assuming all is well). If the connection is not successful, and then asyncore.loop() is called, then select.select() will indicate that there is an error with the socket (can't connect) and the error will be logged. The trouble is that asyncore.loop then keeps going, and will log this error again. My not-that-fast system here gets about 10,000 logged messages per second with a single socket in the asyncore map. There are ways to avoid this: (1) if the socket is blocking when connect()ing (and then nonblocking afterwards) an error is raised if the connect fails. (2) Before calling asyncore.loop(), the caller can run through all the sockets, checking that they are ok. (3) handle_expt() can be overridden with a function that repairs or removes the socket from the map (etc) However, I'm not convinced that this is good behavior for asyncore to have, by default. On Windows, select.select() will only indicate an error when trying to connect (nonblocking) or if SO_OOBINLINE is disabled, but this may not be the case (i.e. errors can occur at other times) with other platforms, right? Unless the error is temporary, asyncore will by default start streaming (extremely fast) a lot of "warning: unhandled exception" (not very helpful an error message, either) messages. Even if the error only lasts a minute, that could easily result in 10,000 warnings being logged. 
Do any of the python developers agree that this is a flaw? I'm happy to work up a patch for whatever the desired solution is, if so. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 15:11 Message: Logged In: YES user_id=341410 Originator: NO Can I get any pointers as to a conversion from CVS to SVN version numbers, or does anyone remember which change is the issue? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2005-06-09 09:11 Message: Logged In: YES user_id=31435 My guess is rev 1.57. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2005-06-09 08:41 Message: Logged In: YES user_id=11375 What change to asyncore caused the problem? ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2005-06-02 19:02 Message: Logged In: YES user_id=31435 I agree the change in default behavior here was at least questionable, and spent more than a week of my own "dealing with consequences" of 2.4 asyncore changes in ZODB and Zope. Assigning to Andrew, since it looks like he made the change in question here. Andrew, why did this change? How can it be improved? I don't think Tony has mentioned it here, but when SpamBayes was first released with Python 2.4, it was a disaster because some users found their hard drives literally filled with gigabytes of mysterious "warning: unhandled exception" messages. ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-06-02 18:38 Message: Logged In: YES user_id=552329 I am not at all unwilling (and this isn't a problem for me personally - the issue here is whether this is good for Python in general) to subclass. Deciding to subclass does *not* mean that you should have to replace all functions in the parent class. 
If you did, then there would be little point in subclassing at all. Sensible default behaviour should be provided as methods of classes. The current behaviour of the handle_expt() method is not sensible. It essentially forces the user to override that method, even if they have no interest in handling errors (e.g. and would normally just override handle_read and handle_write). This is *not* rare. You haven't seen any in years, because this was a change introduced in Python 2.4, which hasn't been released for even one year yet. I agree that the desired behaviour will be application specific. But what is the point of having default behaviour that will essentially crash the program/system running it? Having default behaviour be "pass" would be more useful. At the very least, this is a problem that many people (compared to the number that will use asyncore) will come across and should be reflected as such in the documentation. If you haven't replicated this problem on your system so that you understand it, please do. I am happy to provide a simple script to demonstrate, if necessary. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 12:34 Message: Logged In: YES user_id=341410 You seem to be unwilling to subclass asyncore.dispatcher to extend its functionality, and the only reason you have given as to why you are unwilling is "As much as possible a class should provide sensible methods, so that overriding is kept to a minimum." (I personally subclass dispatcher and its async_chat derivative quite often) Now, in the case of the other standard socket server and client framework in the Python standard library, namely the SocketServer module and its derivatives, you will find that the functionality of those classes is extended via subclassing and overriding methods as necessary.
To me, when two 'competing' methods of generating socket servers and clients in the standard library offer the same method of extension of their base functionality, then perhaps that is what should be done. The fact that basically all of the standard library is subclassable (some C modules are exceptions to the rule, but should be fixed in Python 2.5), including types in the base language, further suggests to me that subclassing is the standard mechanism for extending the functionality of a class, regardless of its usefulness in its base state. In regards to the documentation, it seems that whenever an object has an error, the handle_expt() method is called (in spending two minutes reading the source). Whether or not those errors are rare is perhaps debatable (I've not seen any in years), but it seems to be application-specific as to what behavior the socket should have in the case of an error (you may want to close, I may want to report the error and reconnect, etc.). ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-05-31 00:42 Message: Logged In: YES user_id=552329 dispatcher is not at all unusable without subclassing. You can get data with recv() and send it with send() etc. It can be treated as a thin wrapper around socket objects. Yes, you will want to subclass it to get more useful behaviour than you can get from a basic socket. I don't see that this means that you should be required to override the handle_expt() function, though. As much as possible a class should provide sensible methods, so that overriding is kept to a minimum. At the very least, this is a documentation error, since the documentation states: """ handle_expt( ) Called when there is out of band (OOB) data for a socket connection. This will almost never happen, as OOB is tenuously supported and rarely used. """ "Almost never" is completely wrong.
---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 00:31 Message: Logged In: YES user_id=341410 I hate to point out the obvious, but dispatcher is wholly unusable without subclassing. How would you get data to/from a connection without replacing handle_read, handle_write? How do you handle the case when you want to connect to someone else or accept connections from someone else without overloading handle_connect or handle_accept? ---------------------------------------------------------------------- Comment By: Tony Meyer (anadelonbrin) Date: 2005-05-31 00:15 Message: Logged In: YES user_id=552329 Yes this problem is easily solved by subclassing. However I firmly believe that it is a terrible default behaviour, and that it's likely to hit many asyncore users. A class shouldn't have to be subclassed to be usable (ignoring virtual classes and all that), and that really is the case here. The simplest solution would be to change the handler to not log the message. Or log the message once per socket or something. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2005-05-31 00:03 Message: Logged In: YES user_id=341410 Option 1 is not really an option in any case where a large number of connections are opened (so I don't believe it should be the default). From what I understand, certain methods are supposed to be overridden in a subclass if someone using a class wants different behavior. In this case, I believe you can perform either option 2 or 3 in your own code to avoid the thousands of logged lines; either by creating your own loop() function, or by creating a subclass of dispatcher and implementing a handle_expt() method that does as you desire.
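The warning flood described in this thread is easy to reproduce without asyncore at all: once a non-blocking connect has failed, every select() pass reports the socket as ready again. A minimal sketch, assuming loopback port 1 has no listener:

```python
import select
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
s.connect_ex(('127.0.0.1', 1))      # assumed refused; fails asynchronously
hits = 0
for _ in range(3):
    # After the connect fails the socket stays "ready" on every pass,
    # so a loop that only logs (as the default handle_expt() does)
    # emits one warning per iteration, forever.
    r, w, x = select.select([s], [s], [s], 1.0)
    if r or w or x:
        hits += 1
s.close()
assert hits >= 1
```

Looped as fast as asyncore.loop() runs, this is what produces thousands of logged "unhandled exception" lines per second.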
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1161031&group_id=5470 From noreply at sourceforge.net Sun Jan 7 00:21:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 15:21:18 -0800 Subject: [ python-Bugs-953599 ] asyncore misses socket closes when poll is used Message-ID: Bugs item #953599, was opened at 2004-05-13 14:47 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 6 Private: No Submitted By: Shane Kerr (shane_kerr) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore misses socket closes when poll is used Initial Comment: Problem: If the loop() function of asyncore is invoked with poll rather than select, the function readwrite() is used when I/O is available on a socket. However, this function does not check for hangup - provided by POLLHUP. If a socket is attempting to write, then POLLOUT never gets set, so the socket hangs. Because poll() is returning immediately, but the return value is never used, asyncore busy-loops, consuming all available CPU. Possible solutions: The easy solution is to check for POLLHUP in the readwrite() function:

if flags & (select.POLLOUT | select.POLLHUP):
    obj.handle_write_event()

This makes the poll work exactly like the select - the application raises a socket.error set to EPIPE.
An alternate solution - possibly more graceful - is to invoke the handle_close() method of the object:

if flags & select.POLLHUP:
    obj.handle_close()
else:
    if flags & select.POLLIN:
        obj.handle_read_event()
    if flags & select.POLLOUT:
        obj.handle_write_event()

This is incompatible with the select model, but it means that the read and write logic is now the same for socket hangups - handle_close() is invoked. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 15:21 Message: Logged In: YES user_id=341410 Originator: NO The solution suggested by klimkin seems to have made it into revision 35513 as a fix to bug #887279. I'm not sure that this is necessarily the right solution to this bug or #887279, as a socket disconnect isn't necessarily an error condition, otherwise .handle_close_event() shouldn't exist for select-based loops, and it should always be an error. Suggest switching the last if clause of readwrite() to...

if flags & (select.POLLERR | select.POLLNVAL):
    obj.handle_expt_event()
if flags & select.POLLHUP:
    obj.handle_close_event()

---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-07-02 06:56 Message: Logged In: YES user_id=410460 Perhaps it would be better to raise an exception:

def readwrite(obj, flags):
    try:
        if flags & (select.POLLIN | select.POLLPRI):
            obj.handle_read_event()
        if flags & select.POLLOUT:
            obj.handle_write_event()
        if flags & (select.POLLERR | select.POLLHUP | select.POLLNVAL):
            obj.handle_expt_event()
    except ExitNow:
        raise
    except:
        obj.handle_error()

...

def handle_expt_event(self):
    err = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    assert(err != 0)
    raise socket.error, (err, errorcode[err])

Since asyncore closes the socket in handle_error, this solves the problem too.
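The POLLHUP behavior under discussion can be observed directly with a pipe: when the write end closes, poll() reports POLLHUP on the read end, which is exactly the bit readwrite() was not checking. A minimal sketch (POSIX only, since select.poll is unavailable on Windows):

```python
import os
import select

r, w = os.pipe()
p = select.poll()
p.register(r, select.POLLIN)
os.close(w)                  # the peer "hangs up"
fd, flags = p.poll(1000)[0]
# The hangup arrives as POLLHUP, not POLLIN or POLLOUT, so a loop
# that only dispatches on the read/write bits never sees the close.
assert flags & select.POLLHUP
os.close(r)
```

This is why a poll()-based asyncore loop could spin on a hung-up descriptor while the select()-based loop surfaced the condition.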
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=953599&group_id=5470 From noreply at sourceforge.net Sun Jan 7 06:49:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 21:49:50 -0800 Subject: [ python-Bugs-1063924 ] asyncore should handle ECONNRESET in send Message-ID: Bugs item #1063924, was opened at 2004-11-10 08:27 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1063924&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Guillaume (efge) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore should handle ECONNRESET in send Initial Comment: asyncore.dispatcher.send doesn't handle ECONNRESET, whereas recv correctly does. When such an error occurs, Zope displays for instance: ERROR(200) ZServer uncaptured python exception, closing channel (socket.error:(104, 'Connection reset by peer') [/usr/local/lib/python2.3/asynchat.py|initiate_send|218] [/usr/local/zope/2.7.2/lib/python/ZServer/medusa/http_server.py|send|417] [/usr/local/lib/python2.3/asyncore.py|send|337]) zhttp_channel is just a subclass of http_channel. The exception is raised by asyncore itself when it receives the unhandled error. 
A proposed fix would be to do the same kind of handling as is done in recv, but I don't know enough about asyncore to know if it's correct. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 21:49 Message: Logged In: YES user_id=341410 Originator: NO It would seem to me that a connection where sending raises ECONNRESET, ENOTCONN, or ESHUTDOWN, should be closed, as is the case in recv. However, send is usually called before recv, so if we close the connection in send, then recv won't get called. In basically all cases, we want recv() to be called so that we get data from the buffers if possible. Usually if there is data in the buffers, an exception won't be raised, so we wouldn't close the connection until the next pass. If we make a change at all, I would change send() to:

def send(self, data):
    try:
        result = self.socket.send(data)
        return result
    except socket.error, why:
        if why[0] == EWOULDBLOCK:
            return 0
        elif why[0] in [ECONNRESET, ENOTCONN, ESHUTDOWN]:
            return 0
        else:
            raise

I have not yet tested the behavior in Python 2.5, as the test cases for Python 2.5 asyncore are basically nonexistent. If we added portions of the test cases provided in patch #909005, we could more easily test these kinds of things.
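The proposed handling can be written as a standalone helper for illustration; safe_send is a hypothetical name (not part of asyncore), shown in modern except-as syntax, and whether returning 0 rather than closing is right is exactly the open question in this thread:

```python
import errno
import socket

def safe_send(sock, data):
    """Hypothetical sketch of the proposed send() handling: treat a
    reset or closed peer like a zero-byte send instead of letting the
    exception escape to handle_error()."""
    try:
        return sock.send(data)
    except socket.error as why:
        if why.errno in (errno.EWOULDBLOCK, errno.ECONNRESET,
                         errno.ENOTCONN, errno.ESHUTDOWN):
            return 0
        raise
```

The zero return means the caller's buffer is left untouched, so a later recv() pass can still drain anything pending before the channel is torn down.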
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1063924&group_id=5470 From noreply at sourceforge.net Sun Jan 7 07:00:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 22:00:14 -0800 Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error Message-ID: Bugs item #539444, was opened at 2002-04-04 12:57 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeremy Hylton (jhylton) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore file wrapper & os.error Initial Comment: The file wrapper makes a file descriptor look like an asyncore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 22:00 Message: Logged In: YES user_id=341410 Originator: NO I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change. ---------------------------------------------------------------------- Comment By: Martin v.
Löwis (loewis) Date: 2002-04-07 01:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 08:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe. For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php?func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too. error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc> (exceptions.OSError:[Errno 11] Resource temporarily unavailable [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|poll|92] [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|handle_read_event|386] [/home/zope/opt/Python-2.1.2/lib/python2.1/site-packages/ZEO/trigger.py|handle_read|95] [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|338] [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|520]) Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-04-05 01:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper.
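The condition Jeremy hit can be reproduced with a non-blocking pipe; on systems where EAGAIN == EWOULDBLOCK the proposed change is a no-op, which is Martin's point. A sketch (POSIX only, since it uses fcntl):

```python
import errno
import fcntl
import os

r, w = os.pipe()
fcntl.fcntl(r, fcntl.F_SETFL, os.O_NONBLOCK)
got_eagain = False
try:
    os.read(r, 1)            # pipe is empty, so the read cannot complete
except OSError as e:
    # "[Errno 11] Resource temporarily unavailable" -- the os.error the
    # file wrapper let escape instead of treating it like EWOULDBLOCK
    got_eagain = (e.errno == errno.EAGAIN)
os.close(r)
os.close(w)
```

A recv() that mapped this EAGAIN to the EWOULDBLOCK path would simply report "no data yet" instead of tearing down the channel.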
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Sun Jan 7 07:10:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 22:10:49 -0800 Subject: [ python-Bugs-658749 ] asyncore connect() and winsock errors Message-ID: Bugs item #658749, was opened at 2002-12-26 10:25 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore connect() and winsock errors Initial Comment: asyncore's connect() method should interpret the winsock errors; these are different from Unix (and different between the Win98 family and the Win2k family). ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 22:10 Message: Logged In: YES user_id=341410 Originator: NO klimkin: Please explain how either of the versions of patch #909005 fixes the problem. From what I can tell, the only change you made was to move the accept() handling of errors to the handle_read() method. Guido: In terms of winsock errors, which are actually raised on connection error between win98, win2k, and/or XP, 2003, and Vista? ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 00:24 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem.
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470

From noreply at sourceforge.net Sun Jan 7 07:18:17 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sat, 06 Jan 2007 22:18:17 -0800
Subject: [ python-Bugs-654766 ] asyncore.py and "handle_expt"
Message-ID:

Bugs item #654766, was opened at 2002-12-16 10:42
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
>Group: Python 2.2
>Status: Pending
>Resolution: Out of Date
Priority: 5
Private: No
Submitted By: Jesús Cea Avión (jcea)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore.py and "handle_expt"

Initial Comment:
Python 2.2.2 here. Asyncore.py never invokes "handle_expt" ("handle_expt" is documented in the docs). Handling OOB data is essential to handle "connection refused" errors in Windows, for example.

----------------------------------------------------------------------

>Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-06 22:18

Message:
Logged In: YES 
user_id=341410
Originator: NO

According to the most recent Python trunk, handle_expt() is called when an error is found within a .select() or .poll() call. Is this still an issue for you in Python 2.4 or Python 2.5?

Setting status as Pending, Out of Date as I believe this bug is fixed.

----------------------------------------------------------------------

Comment By: Alexey Klimkin (klimkin)
Date: 2004-03-04 00:24

Message:
Logged In: YES 
user_id=410460

Patch #909005 fixes the problem.
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470

From noreply at sourceforge.net Sun Jan 7 07:19:11 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sat, 06 Jan 2007 22:19:11 -0800
Subject: [ python-Bugs-777588 ] asyncore is broken for windows if connection is refused
Message-ID:

Bugs item #777588, was opened at 2003-07-25 07:43
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777588&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.3
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Garth Bushell (garth42)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore is broken for windows if connection is refused

Initial Comment:
asyncore.poll is broken on Windows. If a connection is refused, it will hang forever and never raise an exception. The select statement never checks the exfds; this is needed because that is where Windows reports failed connections. The documentation in the Microsoft Platform SDK mentions this:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/winsock/winsock/select_2.asp

The suggested fix is shown below, although this is untested.
The correct error number is received from getsockopt(SOL_SOCKET, SO_ERROR):

def poll(timeout=0.0, map=None):
    if map is None:
        map = socket_map
    if map:
        r = []; w = []; e = []
        for fd, obj in map.items():
            if obj.readable():
                r.append(fd)
            if obj.writable():
                w.append(fd)
            if sys.platform == 'win32':
                if not obj.connected:
                    e.append(fd)
        if [] == r == w == e:
            time.sleep(timeout)
        else:
            try:
                r, w, e = select.select(r, w, e, timeout)
            except select.error, err:
                if err[0] != EINTR:
                    raise
                else:
                    return
        if sys.platform == 'win32':
            for fd in e:
                obj = map.get(fd)
                if obj is None:
                    continue
                err = obj.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
                raise socket.error, (err, socket.errorTab[err])
        for fd in r:
            obj = map.get(fd)
            if obj is None:
                continue
            read(obj)
        for fd in w:
            obj = map.get(fd)
            if obj is None:
                continue
            write(obj)

----------------------------------------------------------------------

>Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-06 22:19

Message:
Logged In: YES 
user_id=341410
Originator: NO

I am looking into applying a variant of portions of #909005 to fix this bug.

----------------------------------------------------------------------

Comment By: Alexey Klimkin (klimkin)
Date: 2004-03-04 00:23

Message:
Logged In: YES 
user_id=410460

Patch #909005 fixes the problem.

----------------------------------------------------------------------

Comment By: John J Smith (johnjsmith)
Date: 2003-07-29 05:49

Message:
Logged In: YES 
user_id=830565

I was bitten by the same problem. My workaround (in a Tkinter application) is given below. Would it make sense to modify poll() to simply add the union of r and w to e, and call handle_error() for any fd in e?

Workaround:

    try:
        self.connect(send_addr)
    except socket.error:
        self.handle_error()
    if sys.platform == 'win32':
        # Win98 select() doesn't seem to report errors for a
        # non-blocking connect().
        self.__connected = 0
        self.__frame.after(2000, self.__win_connect_poll)
    ...
    if sys.platform == 'win32':
        def __win_connect_poll (self):
            if self.__connected:
                return
            e = self.socket.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            if e in (0, errno.EINPROGRESS, errno.WSAEINPROGRESS):
                self.__frame.after(2000, self.__win_connect_poll)
            else:
                try:
                    str = socket.errorTab[e]
                except KeyError:
                    str = os.strerror(e)
                try:
                    raise socket.error(e, str)
                except socket.error:
                    self.handle_error()

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=777588&group_id=5470

From noreply at sourceforge.net Sun Jan 7 10:01:01 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 01:01:01 -0800
Subject: [ python-Bugs-1603424 ] subprocess.py (py2.5) wrongly claims py2.2 compatibility
Message-ID:

Bugs item #1603424, was opened at 2006-11-27 02:07
Message generated for change (Comment added) made by astrand
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603424&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.5
>Status: Closed
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Tim Wegener (twegener)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.py (py2.5) wrongly claims py2.2 compatibility

Initial Comment:
From the comments in subprocess.py (py2.5):

# This module should remain compatible with Python 2.2, see PEP 291.

However, using it from Python 2.2 gives:

NameError: global name 'set' is not defined

(set built-in used on line 1005)

The subprocess.py in py2.4 was 2.2 compatible. Either the compatibility comment should be removed/amended or compatibility fixed.
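For reference, the deduplication that subprocess.py performs with the set() builtin (which only exists from Python 2.4 on) can be written with a plain dict, which works on Python 2.2. This is only an illustrative sketch, not the committed fix, and the helper name `fds_to_close` is invented:

```python
def fds_to_close(fds, keep=(0, 1, 2)):
    # Collect each fd once, skipping the standard fds and the 0/None
    # placeholders subprocess uses for "no pipe".  A dict stands in
    # for a set so the logic stays Python 2.2 compatible (PEP 291).
    seen = {}
    result = []
    for fd in fds:
        if fd and fd not in keep and fd not in seen:
            seen[fd] = True
            result.append(fd)
    return result
```

The child-setup code would then loop over `fds_to_close((p2cread, c2pwrite, errwrite))` and os.close() each descriptor exactly once.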
----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 10:01

Message:
Logged In: YES 
user_id=344921
Originator: NO

Fixed in revision 53293 (trunk) and 53294 (2.5).

----------------------------------------------------------------------

Comment By: Robert Carr (racarr)
Date: 2006-12-05 16:10

Message:
Logged In: YES 
user_id=1649655
Originator: NO

Index: subprocess.py
===================================================================
--- subprocess.py (revision 52918)
+++ subprocess.py (working copy)
@@ -1004,8 +1004,8 @@

             # Close pipe fds.  Make sure we don't close the same
             # fd more than once, or standard fds.
-            for fd in set((p2cread, c2pwrite, errwrite))-set((0,1,2)):
-                if fd: os.close(fd)
+            for fd in (p2cread,c2pwrite,errwrite):
+                if fd not in (0,1,2): os.close(fd)

             # Close all other fds, if asked for
             if close_fds:

Fixed?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603424&group_id=5470

From noreply at sourceforge.net Sun Jan 7 11:45:17 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 02:45:17 -0800
Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error
Message-ID:

Bugs item #539444, was opened at 2002-04-04 22:57
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Jeremy Hylton (jhylton)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore file wrapper & os.error

Initial Comment:
The file wrapper makes a file descriptor look like an asyncore socket.
When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-07 11:45

Message:
Logged In: YES 
user_id=21627
Originator: NO

Notice that the ZODB issue is marked as fixed. I would like to know how that was fixed, and I still would like to know what operating system this problem occurred on.

----------------------------------------------------------------------

Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-07 07:00

Message:
Logged In: YES 
user_id=341410
Originator: NO

I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change.

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2002-04-07 11:03

Message:
Logged In: YES 
user_id=21627

I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything.

----------------------------------------------------------------------

Comment By: Jeremy Hylton (jhylton)
Date: 2002-04-05 18:44

Message:
Logged In: YES 
user_id=31392

It happens when the file is a pipe. For details, see the ZEO bug report at
https://sourceforge.net/tracker/index.php?func=detail&aid=536416&group_id=15628&atid=115628

I've included the traceback from that bug report, too.
error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc>
(exceptions.OSError:[Errno 11] Resource temporarily unavailable
 [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|poll|92]
 [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|handle_read_event|386]
 [/home/zope/opt/Python-2.1.2/lib/python2.1/site-packages/ZEO/trigger.py|handle_read|95]
 [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|338]
 [/home/zope/opt/Python-2.1.2/lib/python2.1/asyncore.py|recv|520])
Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2002-04-05 11:00

Message:
Logged In: YES 
user_id=21627

Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470

From noreply at sourceforge.net Sun Jan 7 15:01:38 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 06:01:38 -0800
Subject: [ python-Feature Requests-1615376 ] subprocess doesn't handle SIGPIPE
Message-ID:

Feature Requests item #1615376, was opened at 2006-12-14 01:21
Message generated for change (Comment added) made by astrand
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1615376&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
>Category: Python Library
>Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Mark Diekhans (diekhans)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess doesn't handle SIGPIPE

Initial Comment:
subprocess keeps the other side of the child pipe open, making it impossible to use SIGPIPE to terminate writers in a pipeline. This is probably a matter of documentation or of providing a method to link up processes, as the parent end of the pipe must remain open until it is connected to the next process in the pipeline. An option to enable SIGPIPE in the child would be nice. Simple example attached.

----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 15:01

Message:
Logged In: YES 
user_id=344921
Originator: NO

One easy solution is to simply close the pipe in the parent after starting both processes, before calling p1.wait():

p1.stdout.close()

It's not "perfect", though: p1 will run for a while before receiving SIGPIPE. For a perfect solution, it would be necessary to close the pipe end in the parent after the fork but before the exec in the child. This would require some kind of synchronization.

Moving to feature request.
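The close-the-parent's-copy advice can be demonstrated end to end, assuming a POSIX system and a current Python; both one-line child scripts below are invented for the demo (a writer that floods its stdout until the pipe breaks, and a reader that takes one line and exits):

```python
import subprocess
import sys

writer_src = (
    "import os, sys\n"
    "try:\n"
    "    while True:\n"
    "        sys.stdout.write('x' * 1024 + '\\n')\n"
    "        sys.stdout.flush()\n"
    "except BrokenPipeError:\n"
    "    os._exit(0)\n"  # exit quietly; a normal exit would re-flush the dead pipe
)
reader_src = "import sys; sys.stdin.readline()"

writer = subprocess.Popen([sys.executable, "-c", writer_src],
                          stdout=subprocess.PIPE)
reader = subprocess.Popen([sys.executable, "-c", reader_src],
                          stdin=writer.stdout)

# The crucial step: drop the parent's copy of the pipe's read end.
# Without this, the pipe never breaks after the reader exits and the
# writer blocks forever once the pipe buffer fills.
writer.stdout.close()

reader_rc = reader.wait()
writer_rc = writer.wait()
```

With the close in place, both processes terminate normally; comment it out and the writer hangs, which is exactly the behavior this request is about.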
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1615376&group_id=5470

From noreply at sourceforge.net Sun Jan 7 15:05:30 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 06:05:30 -0800
Subject: [ python-Bugs-1604851 ] subprocess.Popen closes fds for sys.stdout or sys.stderr
Message-ID:

Bugs item #1604851, was opened at 2006-11-28 23:17
Message generated for change (Comment added) made by astrand
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1604851&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.4
>Status: Closed
>Resolution: Duplicate
Priority: 5
Private: No
Submitted By: Nishkar Grover (ngrover)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.Popen closes fds for sys.stdout or sys.stderr

Initial Comment:
I found a problem in subprocess.Popen's _execute_child() method for POSIX, where the child process will close the fds for sys.stdout and/or sys.stderr if I use those as stdout and/or stderr when creating a subprocess.Popen object. Here's what I saw by default when using the 2.4.4 version of Python...

% ./python
Python 2.4.4 (#1, Nov 28 2006, 14:08:29)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import sys, subprocess
>>> uname = subprocess.Popen('uname -a', shell=True, stdout=sys.stdout)
>>> uname: write error: Bad file descriptor
>>>

Then, I updated subprocess.py and made the following changes...

% diff subprocess.py subprocess.py.orig
924c924
<             # fd more than once and don't close sys.stdout or sys.stderr.
---
>             # fd more than once.
927c927
<             if c2pwrite and c2pwrite not in (p2cread, sys.stdout.fileno(), sys.stderr.fileno()):
---
>             if c2pwrite and c2pwrite not in (p2cread,):
929c929
<             if errwrite and errwrite not in (p2cread, c2pwrite, sys.stdout.fileno(), sys.stderr.fileno()):
---
>             if errwrite and errwrite not in (p2cread, c2pwrite):

After that, I saw the following...

% ./python
Python 2.4.4 (#1, Nov 28 2006, 14:08:29)
[GCC 3.4.6 20060404 (Red Hat 3.4.6-3)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import sys, subprocess
>>> uname = subprocess.Popen('uname -a', shell=True, stdout=sys.stdout)
>>> Linux schnauzer 2.6.9-42.0.2.ELsmp #1 SMP Thu Aug 17 18:00:32 EDT 2006 i686 i686 i386 GNU/Linux
>>>

I'm attaching the modified version of subprocess.py. Please consider adding this fix to future versions of Python. Thanks!

----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 15:05

Message:
Logged In: YES 
user_id=344921
Originator: NO

Duplicate of 1531862.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1604851&group_id=5470

From noreply at sourceforge.net Sun Jan 7 15:10:51 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 06:10:51 -0800
Subject: [ python-Bugs-1590864 ] subprocess deadlock
Message-ID:

Bugs item #1590864, was opened at 2006-11-05 17:06
Message generated for change (Comment added) made by astrand
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Michael Tsai (michaeltsai)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess deadlock

Initial Comment:
When I use subprocess.py from a child thread, sometimes it deadlocks. I determined that the new process is blocked during an import:

#0  0x90024427 in semaphore_wait_signal_trap ()
#1  0x90028414 in pthread_cond_wait ()
#2  0x004c77bf in PyThread_acquire_lock (lock=0x3189a0, waitflag=1) at Python/thread_pthread.h:452
#3  0x004ae2a6 in lock_import () at Python/import.c:266
#4  0x004b24be in PyImport_ImportModuleLevel (name=0xaad74 "errno", globals=0xbaed0, locals=0x502aa0, fromlist=0xc1378, level=-1) at Python/import.c:2054
#5  0x0048d2e2 in builtin___import__ (self=0x0, args=0x53724c90, kwds=0x0) at Python/bltinmodule.c:47
#6  0x0040decb in PyObject_Call (func=0xa94b8, arg=0x53724c90, kw=0x0) at Objects/abstract.c:1860

and that the code in question is in os.py:

def _execvpe(file, args, env=None):
    from errno import ENOENT, ENOTDIR

I think the problem is that since exec (the C function) hasn't yet been called in the new process, it has inherited from the fork a lock that is already held. The main process will eventually release its copy of the lock, but this will not unlock it in the new process, so it deadlocks. If I change os.py so that it imports the constants outside of _execvpe, the new process no longer blocks in this way. This is on Mac OS X 10.4.8.

----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 15:10

Message:
Logged In: YES 
user_id=344921
Originator: NO

Can you provide a test case or sample code that demonstrates this problem? I'm a bit unsure of whether this really is a subprocess bug or a more general Python bug.
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470

From noreply at sourceforge.net Sun Jan 7 15:36:54 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 06:36:54 -0800
Subject: [ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck
Message-ID:

Bugs item #1598181, was opened at 2006-11-17 07:40
Message generated for change (Comment added) made by astrand
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.5
Status: Open
>Resolution: Fixed
Priority: 5
Private: No
Submitted By: Ralf W. Grosse-Kunstleve (rwgk)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.py: O(N**2) bottleneck

Initial Comment:
subprocess.py (Python 2.5, current SVN, probably all versions) contains this O(N**2) code:

bytes_written = os.write(self.stdin.fileno(), input[:512])
input = input[bytes_written:]

For large but reasonable "input" the second line is rate limiting. Luckily, it is very easy to remove this bottleneck. I'll upload a simple patch. Below is a small script that demonstrates the huge speed difference. The output on my machine is:

creating input
0.888417959213
slow slicing input
61.1553330421
creating input
0.863168954849
fast slicing input
0.0163860321045
done

The numbers are times in seconds.
This is the source:

import time
import sys

size = 1000000

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "slow slicing input"
n_out_slow = 0
while True:
    out = input[:512]
    n_out_slow += 1
    input = input[512:]
    if not input:
        break
print time.time()-t0

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "fast slicing input"
n_out_fast = 0
input_done = 0
while True:
    out = input[input_done:input_done+512]
    n_out_fast += 1
    input_done += 512
    if input_done >= len(input):
        break
print time.time()-t0

assert n_out_fast == n_out_slow
print "done"

----------------------------------------------------------------------

>Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 15:36

Message:
Logged In: YES 
user_id=344921
Originator: NO

Fixed in trunk revision 53295. Is this a good candidate for backporting to 25-maint?

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2007-01-04 19:20

Message:
Logged In: YES 
user_id=1611720
Originator: NO

I reviewed the patch--the proposed fix looks good. Minor comments:
- "input_done" sounds like a flag, not a count of written bytes
- buffer() could be used to avoid the 512-byte copy created by the slice

----------------------------------------------------------------------

Comment By: Ralf W. Grosse-Kunstleve (rwgk)
Date: 2006-11-17 07:43

Message:
Logged In: YES 
user_id=71407
Originator: YES

Sorry, I didn't know the tracker would destroy the indentation. I'm uploading the demo source as a separate file.
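The review's buffer() suggestion maps onto memoryview in current Python: keep a write offset and hand the OS a view, so neither the remaining input nor the 512-byte chunk is ever copied. A sketch of the idea (the function name and its write callback are invented for illustration, not taken from subprocess.py):

```python
def write_chunks(data, write, chunk=512):
    # Walk the buffer with an offset instead of re-slicing the tail;
    # memoryview slices are views, not copies, so the whole loop is
    # O(N) instead of the O(N**2) behavior reported above.
    view = memoryview(data)
    written = 0
    while written < len(data):
        # write() may consume fewer bytes than offered (a short write);
        # advance by whatever it actually accepted.
        written += write(view[written:written + chunk])
    return written
```

In subprocess terms, write would be something like `lambda b: os.write(fd, b)`; here it can be any callable that returns the number of bytes it consumed.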
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470

From noreply at sourceforge.net Sun Jan 7 16:15:21 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 07:15:21 -0800
Subject: [ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck
Message-ID:

Bugs item #1598181, was opened at 2006-11-16 22:40
Message generated for change (Comment added) made by rwgk
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: Fixed
Priority: 5
Private: No
Submitted By: Ralf W. Grosse-Kunstleve (rwgk)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess.py: O(N**2) bottleneck

Initial Comment:
subprocess.py (Python 2.5, current SVN, probably all versions) contains this O(N**2) code:

bytes_written = os.write(self.stdin.fileno(), input[:512])
input = input[bytes_written:]

For large but reasonable "input" the second line is rate limiting. Luckily, it is very easy to remove this bottleneck. I'll upload a simple patch. Below is a small script that demonstrates the huge speed difference. The output on my machine is:

creating input
0.888417959213
slow slicing input
61.1553330421
creating input
0.863168954849
fast slicing input
0.0163860321045
done

The numbers are times in seconds.
This is the source:

import time
import sys

size = 1000000

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "slow slicing input"
n_out_slow = 0
while True:
    out = input[:512]
    n_out_slow += 1
    input = input[512:]
    if not input:
        break
print time.time()-t0

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "fast slicing input"
n_out_fast = 0
input_done = 0
while True:
    out = input[input_done:input_done+512]
    n_out_fast += 1
    input_done += 512
    if input_done >= len(input):
        break
print time.time()-t0

assert n_out_fast == n_out_slow
print "done"

----------------------------------------------------------------------

>Comment By: Ralf W. Grosse-Kunstleve (rwgk)
Date: 2007-01-07 07:15

Message:
Logged In: YES 
user_id=71407
Originator: YES

Thanks for the fixes!

----------------------------------------------------------------------

Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 06:36

Message:
Logged In: YES 
user_id=344921
Originator: NO

Fixed in trunk revision 53295. Is this a good candidate for backporting to 25-maint?

----------------------------------------------------------------------

Comment By: Mike Klaas (mklaas)
Date: 2007-01-04 10:20

Message:
Logged In: YES 
user_id=1611720
Originator: NO

I reviewed the patch--the proposed fix looks good. Minor comments:
- "input_done" sounds like a flag, not a count of written bytes
- buffer() could be used to avoid the 512-byte copy created by the slice

----------------------------------------------------------------------

Comment By: Ralf W. Grosse-Kunstleve (rwgk)
Date: 2006-11-16 22:43

Message:
Logged In: YES 
user_id=71407
Originator: YES

Sorry, I didn't know the tracker would destroy the indentation. I'm uploading the demo source as a separate file.
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470

From noreply at sourceforge.net Sun Jan 7 18:09:41 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 09:09:41 -0800
Subject: [ python-Bugs-1590864 ] subprocess deadlock
Message-ID:

Bugs item #1590864, was opened at 2006-11-05 11:06
Message generated for change (Comment added) made by michaeltsai
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.

Category: Python Library
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Michael Tsai (michaeltsai)
Assigned to: Peter Åstrand (astrand)
Summary: subprocess deadlock

Initial Comment:
When I use subprocess.py from a child thread, sometimes it deadlocks. I determined that the new process is blocked during an import:

#0  0x90024427 in semaphore_wait_signal_trap ()
#1  0x90028414 in pthread_cond_wait ()
#2  0x004c77bf in PyThread_acquire_lock (lock=0x3189a0, waitflag=1) at Python/thread_pthread.h:452
#3  0x004ae2a6 in lock_import () at Python/import.c:266
#4  0x004b24be in PyImport_ImportModuleLevel (name=0xaad74 "errno", globals=0xbaed0, locals=0x502aa0, fromlist=0xc1378, level=-1) at Python/import.c:2054
#5  0x0048d2e2 in builtin___import__ (self=0x0, args=0x53724c90, kwds=0x0) at Python/bltinmodule.c:47
#6  0x0040decb in PyObject_Call (func=0xa94b8, arg=0x53724c90, kw=0x0) at Objects/abstract.c:1860

and that the code in question is in os.py:

def _execvpe(file, args, env=None):
    from errno import ENOENT, ENOTDIR

I think the problem is that since exec (the C function) hasn't yet been called in the new process, it has inherited from the fork a lock that is already held.
The main process will eventually release its copy of the lock, but this will not unlock it in the new process, so it deadlocks. If I change os.py so that it imports the constants outside of _execvpe, the new process no longer blocks in this way. This is on Mac OS X 10.4.8.

----------------------------------------------------------------------

>Comment By: Michael Tsai (michaeltsai)
Date: 2007-01-07 12:09

Message:
Logged In: YES 
user_id=817528
Originator: YES

I don't have time at the moment to write sample code that reproduces this. But, FYI, I was using PyObjC to create the threads. It might not happen with "threading" threads. And second, I think it's a bug in os.py, not in subprocess.py. Sorry for the confusion.

----------------------------------------------------------------------

Comment By: Peter Åstrand (astrand)
Date: 2007-01-07 09:10

Message:
Logged In: YES 
user_id=344921
Originator: NO

Can you provide a test case or sample code that demonstrates this problem? I'm a bit unsure of whether this really is a subprocess bug or a more general Python bug.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470

From noreply at sourceforge.net Sun Jan 7 19:03:17 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sun, 07 Jan 2007 10:03:17 -0800
Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error
Message-ID:

Bugs item #539444, was opened at 2002-04-04 12:57
Message generated for change (Comment added) made by josiahcarlson
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Jeremy Hylton (jhylton)
Assigned to: Josiah Carlson (josiahcarlson)
Summary: asyncore file wrapper & os.error

Initial Comment:
The file wrapper makes a file descriptor look like an asyncore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion.

----------------------------------------------------------------------

>Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-07 10:03

Message:
Logged In: YES 
user_id=341410
Originator: NO

Jeremy Hylton states what he did to fix it in ZEO. In terms of platform, I would guess that this is likely Linux, as multiple people seem to be able to reproduce the error, and you can't reliably use signals in Windows without killing the process.

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-07 02:45

Message:
Logged In: YES 
user_id=21627
Originator: NO

Notice that the ZODB issue is marked as fixed. I would like to know how that was fixed, and I still would like to know what operating system this problem occurred on.

----------------------------------------------------------------------

Comment By: Josiah Carlson (josiahcarlson)
Date: 2007-01-06 22:00

Message:
Logged In: YES 
user_id=341410
Originator: NO

I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change.

----------------------------------------------------------------------

Comment By: Martin v.
L?wis (loewis) Date: 2002-04-07 01:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 08:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe. For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php? func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too. error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc> (exceptions.OSError:[Errno 11] Resource temporarily unavailable [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|poll|92] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|handle_read_event|386] [/home/zope/opt/Python-2.1.2/lib/python2.1/site- packages/ZEO/trigger.py|handle_read|95] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|338] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|520]) Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-05 01:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper. 
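The change Jeremy is asking for can be sketched as a wrapper in the spirit of file_wrapper.recv (this is an illustration, not asyncore's actual code): absorb EAGAIN/EWOULDBLOCK from os.read() instead of letting it escape as an uncaught os.error:

```python
import errno
import fcntl
import os

def file_recv(fd, bufsize=4096):
    # Like asyncore's file_wrapper.recv, which calls os.read(), but
    # EAGAIN ("Resource temporarily unavailable") is absorbed and
    # reported as "no data yet" instead of escaping to the caller.
    try:
        return os.read(fd, bufsize)
    except OSError as e:
        if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
            return None  # would block; caller should retry later
        raise

# Demonstrate on a non-blocking pipe that is currently empty.
r, w = os.pipe()
fcntl.fcntl(r, fcntl.F_SETFL, fcntl.fcntl(r, fcntl.F_GETFL) | os.O_NONBLOCK)

print(file_recv(r))   # None: EAGAIN absorbed rather than raised
os.write(w, b"ready")
print(file_recv(r))   # b'ready'
```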
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Sun Jan 7 20:18:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 11:18:07 -0800 Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error Message-ID: Bugs item #539444, was opened at 2002-04-04 22:57 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeremy Hylton (jhylton) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore file wrapper & os.error Initial Comment: The file wrapper makes a file descriptor look like an asycnore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 20:18 Message: Logged In: YES user_id=21627 Originator: NO Ok; still I wonder what the problem is. In the original report, Jeremy said "should catch this error and treat it just like EWOULDBLOCK". Now, EWOULDBLOCK is handled in dispatcher.connect, dispatcher.accept, and dispatcher.send - not in dispatcher.recv. So what would it help to treat EAGAIN the same way? 
---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 19:03 Message: Logged In: YES user_id=341410 Originator: NO Jeremy Hylton states what he did to fix it in ZEO. In terms of platform, I would guess that this is likely linux, as multiple people seem to be able to reproduce the error, and you can't reliably use signals in Windows without killing the process. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 11:45 Message: Logged In: YES user_id=21627 Originator: NO Notice that the ZODB issue is marked as fixed. I would like to know how that was fixed, and I still would like to know what operating system this problem occurred on. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 07:00 Message: Logged In: YES user_id=341410 Originator: NO I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-07 11:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 18:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe. For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php? 
func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too. error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc> (exceptions.OSError:[Errno 11] Resource temporarily unavailable [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|poll|92] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|handle_read_event|386] [/home/zope/opt/Python-2.1.2/lib/python2.1/site- packages/ZEO/trigger.py|handle_read|95] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|338] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|520]) Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-05 11:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Sun Jan 7 21:36:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 12:36:54 -0800 Subject: [ python-Feature Requests-415692 ] smarter temporary file object Message-ID: Feature Requests item #415692, was opened at 2001-04-12 10:37 Message generated for change (Comment added) made by djmitche You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Nobody/Anonymous (nobody) Summary: smarter temporary file object Initial Comment: Jim Fulton suggested the following: I wonder if it would be a good idea to have a new kind of temporary file that stored data in memory unless: - The data exceeds some size, or - Somebody asks for a fileno. Then the cgi module (and other apps) could use this thing in a uniform way. ---------------------------------------------------------------------- Comment By: Dustin J. Mitchell (djmitche) Date: 2007-01-07 14:36 Message: Logged In: YES user_id=7446 Originator: NO Patch is at http://python.org/sf/1630118 ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-02 22:52 Message: Logged In: YES user_id=6380 Originator: YES I've reopened the issue for you. Do try to interest some other core developer in reviewing your code, or it will take a long time... Thanks for remembering! ---------------------------------------------------------------------- Comment By: Dustin J. Mitchell (djmitche) Date: 2007-01-02 22:30 Message: Logged In: YES user_id=7446 Originator: NO I have a potential implementation for this, intended to be included in Lib/tempfile.py. Because the issue is closed, I can't attach it. Let's see if posting to the issue will open that option up. Dustin ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-08-09 11:51 Message: Logged In: YES user_id=6380 Thank you. I've moved this feature request to PEP 42, "Feature Requests". 
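The behavior Jim Fulton describes (memory until a size threshold is exceeded or fileno is requested) eventually shipped in the standard library as tempfile.SpooledTemporaryFile, via the patch referenced above. A short sketch, using the private `_rolled` attribute purely to observe the rollover (an implementation detail, not public API):

```python
import tempfile

# Small data stays in an in-memory buffer ...
with tempfile.SpooledTemporaryFile(max_size=1024) as f:
    f.write(b"small payload")
    print(f._rolled)   # False: no real file has been created yet

# ... while exceeding max_size (or calling fileno()) rolls it to disk.
with tempfile.SpooledTemporaryFile(max_size=16) as f:
    f.write(b"x" * 64)
    print(f._rolled)   # True: now backed by an actual temporary file
    f.seek(0)
    print(f.read() == b"x" * 64)
```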
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=415692&group_id=5470 From noreply at sourceforge.net Sun Jan 7 22:00:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 13:00:30 -0800 Subject: [ python-Bugs-1628484 ] Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Message-ID: Bugs item #1628484, was opened at 2007-01-05 00:45 Message generated for change (Comment added) made by bobatkins You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Bob Atkins (bobatkins) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 Initial Comment: This looks like a recurring and somewhat sore topic. For those of us that have been fighting the dreaded: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." when performing a 64 bit compile. I believe I have identified the problems. All of which are directly related to the Makefile(s) that are generated as part of the configure script. There does not seem to be anything wrong with the configure script or anything else once all of the Makefiles are corrected Python will build 64 bit Although it is possible to pass the following environment variables to configure as is typical on most open source software: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory CPPFLAGS C/C++ preprocessor flags, e.g. 
-I if you have headers in a nonstandard directory CPP C preprocessor These flags are *not* being processed through to the generated Makefiles. This is where the problem is. configure is doing everything right and generating all of the necessary stuff for a 64 bit compile but when the compile is actually performed - the necessary CFLAGS are missing and a 32 bit compile is initiated. Taking a close look at the first failure I found the following: gcc -pthread -c -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I./Include -fPIC -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c Where are my CFLAGS??? I ran the configure with: CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ CXXFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ LDFLAGS="-m64 -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ ./configure --prefix=/opt \ --enable-shared \ --libdir=/opt/lib/sparcv9 Checking the config.log and config.status it was clear that the flags were used properly as the configure script ran; however, the failure is that the various Makefiles never actually reference the CFLAGS and LDFLAGS. LDFLAGS is simply not included in any of the link stages in the Makefiles and CFLAGS is overridden by BASECFLAGS, OPT and EXTRA_CFLAGS! Ah! EXTRA_CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ make Actually got the core parts to compile for the library and then failed to build the library because - LDFLAGS was missing from the Makefile for the library link stage - :-( Close examination suggests that the OPT environment variable could be used to pass the necessary flags through from configure but this still did not help the link stage problems. The fixes are pretty minimal to ensure that the configure variables are passed into the Makefile. My patch to the Makefile.pre.in is attached to this bug report. 
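One way to check which flag variables a finished build actually recorded is to ask the interpreter itself; the variable names below are the ones CPython's generated Makefile uses, and the values will differ from build to build. Comparing these against what was passed to configure shows whether CFLAGS/LDFLAGS made it through:

```python
import sysconfig

# Inspect the flag variables the generated Makefile recorded for this
# interpreter. A flag passed to configure but absent here never made
# it into the actual compile/link commands.
for var in ("CC", "CFLAGS", "BASECFLAGS", "OPT", "EXTRA_CFLAGS", "LDFLAGS"):
    print("%s = %r" % (var, sysconfig.get_config_var(var)))
```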
Once these changes are made Python will build properly for both 32 and 64 bit platforms with the correct CFLAGS and LDFLAGS passed into the configure script. BTW, while this bug is reported under a Solaris/gcc build the patches to Makefile.pre.in should fix similar build issues on all platforms. ---------------------------------------------------------------------- >Comment By: Bob Atkins (bobatkins) Date: 2007-01-07 13:00 Message: Logged In: YES user_id=655552 Originator: YES OK, here is the synopsis: Run the configure with: $ CFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ CXXFLAGS="-O3 -m64 -mcpu=ultrasparc -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ LDFLAGS="-m64 -L/opt/lib/sparcv9 -R/opt/lib/sparcv9" \ ./configure --prefix=/opt \ --enable-shared \ --libdir=/opt/lib/sparcv9 $ make gcc -pthread -c -fno-strict-aliasing -DNDEBUG -I. -I./Include -fPIC -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c In file included from ./Include/Python.h:57, from ./Modules/python.c:3: ./Include/pyport.h:730:2: error: #error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)." ---- Cause: Makefile.pre.in does not have substitutions that carry through the CFLAGS that were given to the configure script. In addition, although LDFLAGS is carried in from the configure script, it is not used in any of the link stages in the Makefile. Minor issue. Makefile.pre.in also does not carry through the 'libdir' configure variable and should be referencing the other pre-defined configure variables. Solution: See attached patches to Makefile.pre.in File Added: Makefile.pre.in.patch ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-05 16:52 Message: Logged In: YES user_id=21627 Originator: NO Can you please report what the actual problem is that you got? I doubt it's the #error, as that error is generated by the preprocessor, yet your fix seems to deal with LDFLAGS only. 
So please explain what command you invoked, what the actual output was, and what the expected output was. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1628484&group_id=5470 From noreply at sourceforge.net Sun Jan 7 22:35:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 13:35:04 -0800 Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error Message-ID: Bugs item #539444, was opened at 2002-04-04 12:57 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeremy Hylton (jhylton) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore file wrapper & os.error Initial Comment: The file wrapper makes a file descriptor look like an asycnore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion. ---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 13:35 Message: Logged In: YES user_id=341410 Originator: NO I seem to have misread it as being for send. Presumably they would want to handle EAGAIN/EWOULDBLOCK in recv, though the semantic of returning an empty string when it was polled as being readable, is generally seen as a condition to close the socket. 
I'm leaning towards closing as invalid, as "fixing" the behavior would result in the semantics of recv being ambiguous. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 11:18 Message: Logged In: YES user_id=21627 Originator: NO Ok; still I wonder what the problem is. In the original report, Jeremy said "should catch this error and treat it just like EWOULDBLOCK". Now, EWOULDBLOCK is handled in dispatcher.connect, dispatcher.accept, and dispatcher.send - not in dispatcher.recv. So what would it help to treat EAGAIN the same way? ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 10:03 Message: Logged In: YES user_id=341410 Originator: NO Jeremy Hylton states what he did to fix it in ZEO. In terms of platform, I would guess that this is likely linux, as multiple people seem to be able to reproduce the error, and you can't reliably use signals in Windows without killing the process. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 02:45 Message: Logged In: YES user_id=21627 Originator: NO Notice that the ZODB issue is marked as fixed. I would like to know how that was fixed, and I still would like to know what operating system this problem occurred on. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 22:00 Message: Logged In: YES user_id=341410 Originator: NO I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change. ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2002-04-07 01:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 08:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe. For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php? func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too. error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc> (exceptions.OSError:[Errno 11] Resource temporarily unavailable [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|poll|92] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|handle_read_event|386] [/home/zope/opt/Python-2.1.2/lib/python2.1/site- packages/ZEO/trigger.py|handle_read|95] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|338] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|520]) Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-05 01:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Sun Jan 7 22:49:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 13:49:50 -0800 Subject: [ python-Bugs-539444 ] asyncore file wrapper & os.error Message-ID: Bugs item #539444, was opened at 2002-04-04 22:57 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeremy Hylton (jhylton) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore file wrapper & os.error Initial Comment: The file wrapper makes a file descriptor look like an asycnore socket. When its recv() method is invoked, it calls os.read(). I use this in an application where os.read() occasionally raises os.error (11, 'Resource temporarily unavailable'). I think that asyncore should catch this error and treat it just like EWOULDBLOCK. But I'd like a second opinion. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 22:49 Message: Logged In: YES user_id=21627 Originator: NO What still puzzles me is why recv is invoked at all. According to the traceback, it was invoked because poll() indicated a read event for the pipe, yet trying to read from it failed with EAGAIN. Either there still is a bug in asyncore, or there is a bug in the operating system, or the traceback is bogus. 
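Josiah's ambiguity point can be made concrete: mapping EAGAIN to an empty string would collide with the "peer closed" signal, since recv() returning an empty byte string is conventionally treated as end-of-stream. A sketch (plain sockets, not asyncore's actual code) that keeps the three outcomes distinct:

```python
import errno
import socket
import time

def recv_status(sock, bufsize=4096):
    """Return ('data', bytes), ('closed', None) or ('retry', None),
    keeping EAGAIN distinct from the end-of-stream signal."""
    try:
        data = sock.recv(bufsize)
    except OSError as e:
        if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
            return ("retry", None)  # not the same as the peer closing
        raise
    return ("data", data) if data else ("closed", None)

a, b = socket.socketpair()
a.setblocking(False)

print(recv_status(a))   # ('retry', None): nothing has been sent yet
b.sendall(b"hi")
time.sleep(0.2)         # give the kernel time to deliver
print(recv_status(a))   # ('data', b'hi')
b.close()
time.sleep(0.2)
print(recv_status(a))   # ('closed', None)
```

Collapsing the first and third cases into one return value is exactly the ambiguity that argues for closing the report as invalid.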
---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 22:35 Message: Logged In: YES user_id=341410 Originator: NO I seem to have misread it as being for send. Presumably they would want to handle EAGAIN/EWOULDBLOCK in recv, though the semantic of returning an empty string when it was polled as being readable, is generally seen as a condition to close the socket. I'm leaning towards closing as invalid, as "fixing" the behavior would result in the semantics of recv being ambiguous. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 20:18 Message: Logged In: YES user_id=21627 Originator: NO Ok; still I wonder what the problem is. In the original report, Jeremy said "should catch this error and treat it just like EWOULDBLOCK". Now, EWOULDBLOCK is handled in dispatcher.connect, dispatcher.accept, and dispatcher.send - not in dispatcher.recv. So what would it help to treat EAGAIN the same way? ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 19:03 Message: Logged In: YES user_id=341410 Originator: NO Jeremy Hylton states what he did to fix it in ZEO. In terms of platform, I would guess that this is likely linux, as multiple people seem to be able to reproduce the error, and you can't reliably use signals in Windows without killing the process. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2007-01-07 11:45 Message: Logged In: YES user_id=21627 Originator: NO Notice that the ZODB issue is marked as fixed. I would like to know how that was fixed, and I still would like to know what operating system this problem occurred on. 
---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 07:00 Message: Logged In: YES user_id=341410 Originator: NO I don't see an issue with treating EAGAIN as EWOULDBLOCK. In the cases where EAGAIN != EWOULDBLOCK (in terms of constant value), treating them the same would be the right thing. In the case where the values were the same, nothing would change. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2002-04-07 11:03 Message: Logged In: YES user_id=21627 I'm still uncertain what precisely was happening here. What system was this on? On many systems, EAGAIN is EWOULDBLOCK; if that is the case, adding EAGAIN to the places that currently handle EWOULDBLOCK won't change anything. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2002-04-05 18:44 Message: Logged In: YES user_id=31392 It happens when the file is a pipe. For details, see the ZEO bug report at https://sourceforge.net/tracker/index.php? func=detail&aid=536416&group_id=15628&atid=115628 I've included the traceback from that bug report, too. error: uncaptured python exception, closing channel <select-trigger (pipe) at 81059cc> (exceptions.OSError:[Errno 11] Resource temporarily unavailable [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|poll|92] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|handle_read_event|386] [/home/zope/opt/Python-2.1.2/lib/python2.1/site- packages/ZEO/trigger.py|handle_read|95] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|338] [/home/zope/opt/Python- 2.1.2/lib/python2.1/asyncore.py|recv|520]) Exception exceptions.OSError: (9, 'Bad file descriptor') in <method trigger.__del__ of trigger instance at 0x81059cc> ignored ---------------------------------------------------------------------- Comment By: Martin v. 
L?wis (loewis) Date: 2002-04-05 11:00 Message: Logged In: YES user_id=21627 Can you report details of the file that returns EWOULDBLOCK? This is not supposed to happen in applications of the file_wrapper. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=539444&group_id=5470 From noreply at sourceforge.net Mon Jan 8 03:19:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 18:19:37 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 16:10 Message generated for change (Comment added) made by mklaas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed >Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) >Assigned to: Neal Norwitz (nnorwitz) >Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- >Comment By: Mike Klaas (mklaas) Date: 2007-01-07 18:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 11:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. 
Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 10:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-02 23:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. Assigning to Vinay. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 03:20:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 07 Jan 2007 18:20:18 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 16:10 Message generated for change (Settings changed) made by mklaas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Open Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Neal Norwitz (nnorwitz) Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). 
There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-07 18:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 11:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 10:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-02 23:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. Assigning to Vinay. 
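The compatibility point under discussion can be shown with a short sketch against a current logging module: with the fix, the added func parameter defaults to None, so a 2.4-style seven-argument LogRecord call keeps working:

```python
import logging

# The 2.5 change appended a "func" argument to LogRecord.__init__; the
# fix discussed here gave it a default of None, so seven-argument calls
# written against the 2.4 signature still work:
rec = logging.LogRecord(
    "demo",          # logger name
    logging.INFO,    # level
    "file.py",       # pathname
    42,              # line number
    "hello %s",      # message format
    ("world",),      # args
    None,            # exc_info
)
print(rec.getMessage())   # hello world
print(rec.funcName)       # None: the optional parameter's default
```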
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 11:13:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 02:13:37 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 23:10 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Neal Norwitz (nnorwitz) Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 10:13 Message: Logged In: YES user_id=308438 Originator: NO Done. I'm not sure what you're getting at with those revision numbers - trunk and branches/release25-maint are now up to date; if you think other branches need to be updated, please name those branches explicitly. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-08 02:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? 
---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 18:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 17:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-03 06:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. Assigning to Vinay. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 11:37:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 02:37:30 -0800 Subject: [ python-Bugs-889153 ] asyncore.dispactcher: incorrect connect Message-ID: Bugs item #889153, was opened at 2004-02-02 19:04 Message generated for change (Comment added) made by klimkin You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sankov Dmitry Alexandrovich (sankov_da) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.dispactcher: incorrect connect Initial Comment: When i use non-blocking socket, connect() method of asyncore.dispatcher class looks like works incorrect. Example: if connection have not established then socket merely closed and handle_error not called and no exception throwed. One more example: if writable() and readble() methods returns zero than handle_connect() will never be called even if connection will be established. Thanks. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2007-01-08 13:37 Message: Logged In: YES user_id=410460 Originator: NO It's about _non-blocking_ socket. Socket has been created and connect called. However, for non-blocking socket connect returns immediately. The patch allows to use connect in non-blocking manner. I don't see any reason of limiting socket to be connected in blocking manner. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 02:05 Message: Logged In: YES user_id=341410 Originator: NO It sounds as though the original poster is passing a socket that has been created, but which is not yet connected, to the dispatcher constructor. We should update the documentation to state that either the user should pass a completely connected socket (as returned by socket.accept(), or which has connected as the result of a a blocking socket.connect() call), or use the .create_socket() and .connect() methods of the dispatcher. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 11:22 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 From noreply at sourceforge.net Mon Jan 8 13:05:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 04:05:43 -0800 Subject: [ python-Bugs-1630511 ] doc error for re.sub Message-ID: Bugs item #1630511, was opened at 2007-01-08 12:05 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630511&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: doc error for re.sub Initial Comment: http://www.python.org/doc/2.4.3/lib/node115.html says that repl in sub(pattern, repl, string[, count]) can be a function. This is fact does not work and this facility came in a later version. 
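For reference, the behaviour the documentation describes does work as stated: `repl` may be a callable, in which case it is invoked once per non-overlapping match and must return the replacement string. A minimal demonstration:

```python
import re

# repl as a callable: it receives each match object and returns the
# replacement text for that match.
def double(match):
    return str(int(match.group(0)) * 2)

result = re.sub(r"\d+", double, "3 apples and 4 oranges")
print(result)  # 6 apples and 8 oranges
```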
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630511&group_id=5470 From noreply at sourceforge.net Mon Jan 8 13:10:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 04:10:00 -0800 Subject: [ python-Bugs-1630515 ] doc misleading in re.compile Message-ID: Bugs item #1630515, was opened at 2007-01-08 12:09 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630515&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: doc misleading in re.compile Initial Comment: http://www.python.org/doc/2.5/lib/node46.html has compile(pattern[, flags]) Compile a regular expression pattern into a regular expression object, which can be used for matching using its match() and search() methods, described below. This could be read as implying that the regular expression object can ONLY be used for matching using the match() and search() methods. In fact, I believe it can be used wherever "pattern" is mentioned. 
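The submitter's reading is correct: a compiled regular expression object is not limited to its own `match()` and `search()` methods, and it is also accepted by the module-level `re` functions wherever a string pattern is expected. A short demonstration:

```python
import re

pat = re.compile(r"\s+")

# The pattern object carries the full method set, not just match()/search():
assert pat.split("a  b   c") == ["a", "b", "c"]
assert pat.sub("-", "a  b   c") == "a-b-c"

# It can also be passed to the module-level functions in place of a
# string pattern:
assert re.split(pat, "a  b   c") == ["a", "b", "c"]
assert re.sub(pat, "-", "a  b   c") == "a-b-c"
```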
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630515&group_id=5470 From noreply at sourceforge.net Mon Jan 8 18:02:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 09:02:12 -0800 Subject: [ python-Bugs-889153 ] asyncore.dispactcher: incorrect connect Message-ID: Bugs item #889153, was opened at 2004-02-02 08:04 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sankov Dmitry Alexandrovich (sankov_da) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.dispactcher: incorrect connect Initial Comment: When i use non-blocking socket, connect() method of asyncore.dispatcher class looks like works incorrect. Example: if connection have not established then socket merely closed and handle_error not called and no exception throwed. One more example: if writable() and readble() methods returns zero than handle_connect() will never be called even if connection will be established. Thanks. 
---------------------------------------------------------------------- >Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-08 09:02 Message: Logged In: YES user_id=341410 Originator: NO According to my reading, the only change necessary to make the semantics equivalent for a non-blocking socket for which .connect() has been called is to change a portion of the dispatcher's __init__ method to: try: self.addr = sock.getpeername() except socket.error: # if we can't get the peer name, we haven't connected yet self.connected = False ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2007-01-08 02:37 Message: Logged In: YES user_id=410460 Originator: NO It's about _non-blocking_ socket. Socket has been created and connect called. However, for non-blocking socket connect returns immediately. The patch allows to use connect in non-blocking manner. I don't see any reason of limiting socket to be connected in blocking manner. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 15:05 Message: Logged In: YES user_id=341410 Originator: NO It sounds as though the original poster is passing a socket that has been created, but which is not yet connected, to the dispatcher constructor. We should update the documentation to state that either the user should pass a completely connected socket (as returned by socket.accept(), or which has connected as the result of a a blocking socket.connect() call), or use the .create_socket() and .connect() methods of the dispatcher. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 00:22 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. 
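The change Josiah describes can be sketched as follows. This is a simplified illustration, not the actual `asyncore.dispatcher.__init__` code: the idea is to classify a passed-in socket as connected only if `getpeername()` succeeds, so a non-blocking socket with a connect still in flight is not wrongly treated as connected.

```python
import socket

# Simplified sketch of the proposed __init__ logic (not the stdlib code):
# ask the socket for its peer name to decide whether it is connected.
class DispatcherInitSketch(object):
    def __init__(self, sock=None):
        self.connected = False
        self.addr = None
        self.socket = sock
        if sock is not None:
            try:
                self.addr = sock.getpeername()
                self.connected = True
            except socket.error:
                # No peer yet: either connect() was never called, or a
                # non-blocking connect() has not completed.
                pass

# An unconnected non-blocking socket is correctly classified:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setblocking(False)
d = DispatcherInitSketch(s)
assert d.connected is False
s.close()
```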
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 From noreply at sourceforge.net Mon Jan 8 18:51:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 09:51:34 -0800 Subject: [ python-Bugs-1630511 ] doc error for re.sub Message-ID: Bugs item #1630511, was opened at 2007-01-08 12:05 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630511&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) >Assigned to: Michael Hudson (mwh) Summary: doc error for re.sub Initial Comment: http://www.python.org/doc/2.4.3/lib/node115.html says that repl in sub(pattern, repl, string[, count]) can be a function. This is fact does not work and this facility came in a later version. ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2007-01-08 17:51 Message: Logged In: YES user_id=6656 Originator: NO Um, what? This has worked since at least 1.5.2, and probably before. Did you think you'd found the docs for 1.4? Closing. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630511&group_id=5470 From noreply at sourceforge.net Mon Jan 8 19:02:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 10:02:53 -0800 Subject: [ python-Bugs-1630794 ] Seg fault in readline call. 
Message-ID: Bugs item #1630794, was opened at 2007-01-08 10:02 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: gnovak (gnovak) Assigned to: Nobody/Anonymous (nobody) Summary: Seg fault in readline call. Initial Comment: GDL is a free implementation of the IDL programming language that can be built as a Python module to allow one to call IDL code from Python. http://gnudatalanguage.sourceforge.net/ When "enough" of readline has been activated, I get a seg fault with the backtrace listed below when trying to call any GDL code from Python. I've also reported the problem there. One way to initialize enough of readline is to use IPython (http://ipython.scipy.org), an enhanced interactive Python shell (this is how I found the bug). Another way is to follow the instructions from IPython's author (no IPython required) listed below. I am using: OS X 10.4.8 Python 2.4.2 (#1, Mar 22 2006, 21:27:43) [GCC 4.0.1 (Apple Computer, Inc. 
build 5247)] on darwin GDL 0.9 pre 3 readline 5.0 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 From noreply at sourceforge.net Mon Jan 8 19:07:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 10:07:21 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 16:10 Message generated for change (Comment added) made by mklaas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Neal Norwitz (nnorwitz) Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- >Comment By: Mike Klaas (mklaas) Date: 2007-01-08 10:07 Message: Logged In: YES user_id=1611720 Originator: YES Sorry - I was just trying to to see if the fix was backported, and I only saw the trunk checkins on python-checkins ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 02:13 Message: Logged In: YES user_id=308438 Originator: NO Done. 
I'm not sure what you're getting at with those revision numbers - trunk and branches/release25-maint are now up to date; if you think other branches need to be updated, please name those branches explicitly. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-07 18:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 11:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 10:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-02 23:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. Assigning to Vinay. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 19:26:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 10:26:10 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 23:10 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Neal Norwitz (nnorwitz) Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 18:26 Message: Logged In: YES user_id=308438 Originator: NO No need to be sorry, you were absolutely right to re-open the issue for backports - I had not originally checked in the backport. What I meant in my last mail was that I have now checked in the backport, but wasn't sure about the specific revisions you were referring to. Thanks for the reminder about release25-maint. 
---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-08 18:07 Message: Logged In: YES user_id=1611720 Originator: YES Sorry - I was just trying to to see if the fix was backported, and I only saw the trunk checkins on python-checkins ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 10:13 Message: Logged In: YES user_id=308438 Originator: NO Done. I'm not sure what you're getting at with those revision numbers - trunk and branches/release25-maint are now up to date; if you think other branches need to be updated, please name those branches explicitly. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-08 02:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 18:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 17:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-03 06:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. 
Assigning to Vinay. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 19:30:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 10:30:12 -0800 Subject: [ python-Bugs-1569622 ] Backward incompatibility in logging.py Message-ID: Bugs item #1569622, was opened at 2006-10-02 16:10 Message generated for change (Comment added) made by mklaas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Neal Norwitz (nnorwitz) Summary: Backward incompatibility in logging.py Initial Comment: LogRecord.__init__ changed in a backward incompatible way in python 2.5 (added one parameter). There is no mention of this breakage in the release notes, nor has the documentation of the module been updated (http://docs.python.org/lib/node424.html) ---------------------------------------------------------------------- >Comment By: Mike Klaas (mklaas) Date: 2007-01-08 10:30 Message: Logged In: YES user_id=1611720 Originator: YES Ah... the revisions were simply the fix revisions (the last two were not directly related to this issue, but one was a documentation bug, and the other was a minor performance edit, so I thought those might also be backport candidates). 
---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 10:26 Message: Logged In: YES user_id=308438 Originator: NO No need to be sorry, you were absolutely right to re-open the issue for backports - I had not originally checked in the backport. What I meant in my last mail was that I have now checked in the backport, but wasn't sure about the specific revisions you were referring to. Thanks for the reminder about release25-maint. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-08 10:07 Message: Logged In: YES user_id=1611720 Originator: YES Sorry - I was just trying to to see if the fix was backported, and I only saw the trunk checkins on python-checkins ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 02:13 Message: Logged In: YES user_id=308438 Originator: NO Done. I'm not sure what you're getting at with those revision numbers - trunk and branches/release25-maint are now up to date; if you think other branches need to be updated, please name those branches explicitly. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-07 18:19 Message: Logged In: YES user_id=1611720 Originator: YES This fix should be back-ported to 2.5 maint: r52100,52101,52102 perhaps also r52555,52556? ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-10-03 11:22 Message: Logged In: YES user_id=308438 Documentation now updated in CVS. Also changed the added "func" parameter to have a default value of None. Sorry for the inconvenience caused. 
---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-03 10:14 Message: Logged In: YES user_id=1611720 It is incompatible as code written for 2.4 will break in 2.5, and vice-versa (this is a required parameter, not an optional parameter, and the change could have been made in a backward-compatible way). You're right that the documentation fix is the important thing. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-02 23:11 Message: Logged In: YES user_id=849994 I don't see why adding one parameter is backwards incompatible, but it's true that the docs must be updated. Assigning to Vinay. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1569622&group_id=5470 From noreply at sourceforge.net Mon Jan 8 19:55:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 10:55:43 -0800 Subject: [ python-Bugs-411881 ] Use of "except:" in logging module Message-ID: Bugs item #411881, was opened at 2001-03-28 12:58 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=411881&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Pending >Resolution: Fixed Priority: 2 Private: No Submitted By: Itamar Shtull-Trauring (itamar) Assigned to: Vinay Sajip (vsajip) Summary: Use of "except:" in logging module Initial Comment: A large amount of modules in the standard library use "except:" instead of specifying the exceptions to be caught. In some cases this may be correct, but I think in most cases this not true and this may cause problems. 
Here's the list of modules, which I got by doing: grep "except:" *.py | cut -f 1 -d " " | sort | uniq Bastion.py CGIHTTPServer.py Cookie.py SocketServer.py anydbm.py asyncore.py bdb.py cgi.py chunk.py cmd.py code.py compileall.py doctest.py fileinput.py formatter.py getpass.py htmllib.py imaplib.py inspect.py locale.py locale.py mailcap.py mhlib.py mimetools.py mimify.py os.py pdb.py popen2.py posixfile.py pre.py pstats.py pty.py pyclbr.py pydoc.py repr.py rexec.py rfc822.py shelve.py shutil.py tempfile.py threading.py traceback.py types.py unittest.py urllib.py zipfile.py ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 18:55 Message: Logged In: YES user_id=308438 Originator: NO The following changes have been checked into trunk: logging.handlers: bare except clause removed from SMTPHandler.emit. Now, only ImportError is trapped. logging.handlers: bare except clause removed from SocketHandler.createSocket. Now, only socket.error is trapped. logging: bare except clause removed from LogRecord.__init__. Now, only ValueError, TypeError and AttributeError are trapped. I'm marking this as Pending; please submit a change if you think these changes are insufficient. With the default setting of raiseExceptions, all exceptions caused by programmer error should be re-thrown by logging. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2006-12-22 12:52 Message: Logged In: YES user_id=44345 Originator: NO Vinay, In LogRecord.__init__ what exceptions do you expect to catch? Looking at the code for basename and splitext in os.py it's pretty hard to see how they would raise an exception unless they were passed something besides string or unicode objects. I think all you are doing here is masking programmer error. In StreamHandler.emit what might you get besides ValueError (if self.stream is closed)? 
I don't have time to go through each of the cases, but in general, it seems like the set of possible exceptions that could be raised at any given point in the code is generally pretty small. You should catch those exceptions and let the other stuff go. They are generally going to be programmer's errors and shouldn't be silently squashed. Skip ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-12-22 07:42 Message: Logged In: YES user_id=308438 Originator: NO The reason for the fair number of bare excepts in logging is this: in many cases (e.g. long-running processes like Zope servers) users don't want their application to change behaviour just because of some exception thrown in logging. So, logging aims to be very quiet indeed and swallows exceptions, except SystemExit and KeyboardInterrupt in certain situations. Also, logging is one of the modules which is (meant to be) 1.5.2 compatible, and string exceptions are not that uncommon in older code. I've looked at bare excepts in logging and here's my summary on them: logging/__init__.py: ==================== currentframe(): Backward compatibility only, sys._getframe is used where available so currentframe() will only be called on rare occasions. LogRecord.__init__(): There's a try/bare except around calls to os.path.basename() and os.path.splitext(). I could add a raise clause for SystemExit/KeyboardInterrupt. StreamHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). shutdown(): Normally only called at system exit, and will re-raise everything if raiseExceptions is set (the default). logging/handlers.py: ==================== BaseRotatingHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). 
SocketHandler.createSocket(): I could add a raise clause for SystemExit/KeyboardInterrupt. SocketHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). SysLogHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). SMTPHandler.emit(): Should change bare except to ImportError for the formatdate import. Elsewhere, reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). NTEventLogHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). HTTPHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). logging/config.py: ==================== listen.ConfigStreamHandler.handle(): Reraises SystemExit and KeyboardInterrupt, prints everything else and continues - seems OK for a long-running thread. What do you think? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-21 14:09 Message: Logged In: YES user_id=11375 Originator: NO Raymond said (in 2003) most of the remaining except: statements looked reasonable, so I'm changing this bug's summary to refer to the logging module and reassigning to vsajip. PEP 8 doesn't say anything about bare excepts; I'll bring this up on python-dev. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2003-12-13 11:21 Message: Logged In: YES user_id=80475 Hold-off on logging for a bit. Vinay Sajip has other patches already under review. I'll ask him to fix-up the bare excepts in conjuction with those patches. 
For the other modules, patches are welcome. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2003-12-11 20:54 Message: Logged In: YES user_id=6380 You're right. The logging module uses more blank except: clauses than I'm comfortable with. Anyone want to upload a patch set? ---------------------------------------------------------------------- Comment By: Grant Monroe (gmonroe) Date: 2003-12-11 20:50 Message: Logged In: YES user_id=929204 A good example of an incorrect use of a blanket "except:" clause is in __init__.py in the logging module. The emit method of the StreamHandler class should special case KeyboardInterrupt. Something like this: try: .... except KeyboardInterrupt: raise except: self.handleError(record) ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2003-09-02 02:47 Message: Logged In: YES user_id=80475 Some efforts were made to remove many bare excepts prior to Py2.3a1. Briefly scanning those that remain, it looks like many of them are appropriate or best left alone. I recommend that this bug be closed unless someone sees something specific that demands a change. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-05-16 23:30 Message: Logged In: YES user_id=357491 threading.py is clear. It's blanket exceptions are for printing debug output since exceptions in threads don't get passed back to the original frame anyway. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-08-14 03:15 Message: Logged In: YES user_id=44345 checked in fileinput.py (v 1.15) with three except:'s tightened up. The comment in the code about IOError notwithstanding, I don't see how any of the three situations would have caught anything other than OSError. 
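The pattern Grant Monroe suggests above can be made runnable as a small sketch. The function below is hypothetical (it stands in for an emit-style method, with the write and error-handling steps passed as callables); the point is only the exception-handling shape: re-raise `KeyboardInterrupt` and `SystemExit` instead of swallowing them with everything else.

```python
# Sketch of the narrow-except pattern for an emit-style method:
# KeyboardInterrupt/SystemExit propagate; other errors are routed to
# the error handler, as logging's handleError() mechanism does.
def emit_sketch(record, write, handle_error):
    try:
        write(record)
    except (KeyboardInterrupt, SystemExit):
        raise
    except Exception:
        handle_error(record)

errors = []
emit_sketch("msg", lambda r: 1 / 0, errors.append)  # ordinary error: handled
assert errors == ["msg"]

def interrupt(record):
    raise KeyboardInterrupt

try:
    emit_sketch("msg", interrupt, errors.append)
except KeyboardInterrupt:
    pass  # propagated rather than swallowed, as intended
assert errors == ["msg"]  # the error handler was not invoked this time
```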
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-08-12 19:58 Message: Logged In: YES user_id=44345 Note that this particular item was expected to be an ongoing item, with no obvious closure. Some of the bare excepts will have subtle ramifications, and it's not always obvious what specific exceptions should be caught. I've made a few changes to my local source tree which I should check in. Rather than opening new tracker items, I believe those with checkin privileges should correct those flaws they identify and attach a comment which will alert those monitoring the item. Those people without checkin privileges should just attach a patch with a note. Skip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-08-12 07:22 Message: Logged In: YES user_id=21627 My proposal would be to track this under a different issue: Terry, if you volunteer, please produce a new list of offenders (perhaps in an attachment to the report so it can be updated), and attach any fixes that you have to that report. People with CVS write access can then apply those patches and delete them from the report. If you do so, please post the new issue number in this report, so we have a link. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2002-08-11 18:16 Message: Logged In: YES user_id=593130 Remove types.py from the list. As distributed with 2.2.1, it has 5 'except xxxError:' statements but no offending bare except:'s. Skip (or anyone else): if/when you pursue this, I volunteer to do occasional sleuthing and send reports with suggestions and/or questions. Example: getpass.py has one 'offense': try: fd = sys.stdin.fileno() except: return default_getpass(prompt) According to lib doc 2.2.8 File Objects (as I interpret) fileno() should either work without exception or *not* be implemented. 
Suggestion: insert AttributeError. Question: do we protect against pseudofile objects that ignore the doc and have a fake .fileno() that raises NotImplementedError or whatever? ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-03-23 06:02 Message: Logged In: YES user_id=44345 as partial fix, checked in changes for the following modules: mimetools.py (1.24) popen2.py (1.23) quopri.py (1.19) CGIHTTPServer.py (1.22) ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-03-20 21:24 Message: Logged In: YES user_id=44345 Here is a context diff with proposed changes for the following modules: CGIHTTPServer, cgi, cmd, code, fileinput, httplib, inspect, locale, mimetools, popen2, quopri ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-08-11 15:06 Message: Logged In: YES user_id=21627 Fixed urllib in 1.131 and types in 1.19. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-07-04 07:11 Message: Logged In: YES user_id=3066 Fixed modules mhlib and rfc822 (SF is having a problem generating the checkin emails, though). ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-05-11 19:40 Message: Logged In: YES user_id=3066 OK, I've fixed up a few more modules: anydbm chunk formatter htmllib mailcap pre pty I made one change to asyncore as well, but other bare except clauses remain there; I'm not sufficiently familiar with that code to just go digging into those. I also fixed an infraction in pstats, but left others for now. 
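Terry's getpass.py suggestion (narrow the bare except to AttributeError) can be sketched like this. The helper name input_fd is hypothetical, chosen only for illustration:

```python
import sys

def input_fd(stream=None, default=None):
    # Hypothetical helper illustrating the suggested fix: catch only
    # AttributeError (pseudo-file objects with no fileno() at all),
    # instead of a bare except that would also hide KeyboardInterrupt.
    stream = stream if stream is not None else sys.stdin
    try:
        return stream.fileno()
    except AttributeError:
        return default
```

Regarding Terry's follow-up question: a pseudo-file whose fake .fileno() raises something other than AttributeError would still propagate its exception here, which is arguably the right default rather than silently falling back.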
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-04-23 08:14 Message: Logged In: YES user_id=31435 Ping's intent is that pydoc work under versions of Python as early as 1.5.2, so that sys._getframe is off-limits in pydoc and its supporting code (like inspect.py). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-04-23 07:32 Message: Logged In: YES user_id=21627 For inspect.py, why is it necessary to keep the old code at all? My proposal: remove currentframe altogether, and do currentframe = sys._getframe unconditionally. ---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-04-22 14:52 Message: Logged In: YES user_id=32065 I submitted a 4th patch. I'm starting to run out of easy cases... ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2001-04-19 09:15 Message: Logged In: YES user_id=44345 I believe the following patch is correct for the try/except in inspect.currentframe. Note that it fixes two problems. One, it avoids a bare except. Two, it gets rid of a string argument to the raise statement (string exceptions are now deprecated, right?). *** /tmp/skip/inspect.py Thu Apr 19 04:13:36 2001 --- /tmp/skip/inspect.py.~1.16~ Thu Apr 19 04:13:36 2001 *************** *** 643,650 **** def currentframe(): """Return the frame object for the caller's stack frame.""" try: ! 1/0 ! except ZeroDivisionError: return sys.exc_traceback.tb_frame.f_back if hasattr(sys, '_getframe'): currentframe = sys._getframe --- 643,650 ---- def currentframe(): """Return the frame object for the caller's stack frame.""" try: ! raise 'catch me' ! 
except: return sys.exc_traceback.tb_frame.f_back if hasattr(sys, '_getframe'): currentframe = sys._getframe ---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-04-17 15:27 Message: Logged In: YES user_id=32065 inspect.py uses sys._getframe if it's there, the other code is for backwards compatibility. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 17:24 Message: Logged In: YES user_id=6380 Actually, inspect.py should use sys._getframe()! And yes, KeyboardInterrupt is definitely one of the reasons why this is such a bad idiom... ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-04-11 17:15 Message: Logged In: YES user_id=89016 > Can you identify modules where catching everything > is incorrect If "everything" includes KeyboardInterrupt, it's definitely incorrect, even in inspect.py's simple try: raise 'catch me' except: return sys.exc_traceback.tb_frame.f_back which should probably be: try: raise 'catch me' except KeyboardInterrupt: raise except: return sys.exc_traceback.tb_frame.f_back ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-04-11 17:13 Message: Logged In: YES user_id=89016 > Can you identify modules where catching everything > is incorrect If "everything" includes KeyboardInterrupt, it's definitely incorrect, even in inspect.py's simple try: raise 'catch me' except: return sys.exc_traceback.tb_frame.f_back which should probably be: try: raise 'catch me' except KeyboardInterrupt: raise except: return sys.exc_traceback.tb_frame.f_back ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 15:45 Message: Logged In: YES user_id=6380 I've applied the three patches you supplied. 
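Combining Martin's suggestion (prefer sys._getframe) with Skip's patch (catch a specific ZeroDivisionError in the 1.5.2-era fallback rather than a bare except), a modernized currentframe could look like this sketch, with sys.exc_info() standing in for the long-removed sys.exc_traceback:

```python
import sys

def currentframe():
    """Return the frame object for the caller's stack frame.

    Sketch only: prefer sys._getframe where it exists; otherwise fall
    back to raising and catching a *specific* exception, as in Skip's
    patch, instead of a bare except that would also catch
    KeyboardInterrupt.
    """
    if hasattr(sys, '_getframe'):
        return sys._getframe(1)
    try:
        1 / 0
    except ZeroDivisionError:
        # sys.exc_info()[2] replaces the deprecated sys.exc_traceback
        return sys.exc_info()[2].tb_frame.f_back
```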
I agree with Martin that to do this right we'll have to tread carefully. But please go on! (No way more of this will find its way into 2.1 though.) ---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-03-30 10:54 Message: Logged In: YES user_id=32065 inspect.py should be removed from this list, the use is correct. In general, I just submitted this bug so that when people are editing a module they'll notice these things, since in some cases only someone who knows the code very well can know if the "except:" is needed or not. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-03-30 06:59 Message: Logged In: YES user_id=21627 Can you identify modules where catching everything is incorrect, and propose changes to correct them. This should be done one-by-one, with careful analysis in each case, and may well take months or years to complete. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=411881&group_id=5470 From noreply at sourceforge.net Mon Jan 8 20:29:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 11:29:06 -0800 Subject: [ python-Bugs-1630844 ] fnmatch.translate undocumented Message-ID: Bugs item #1630844, was opened at 2007-01-08 16:29 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630844&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Gabriel Genellina (gagenellina) Assigned to: Nobody/Anonymous (nobody) Summary: fnmatch.translate undocumented Initial Comment: fnmatch.translate is not documented, but it is mentioned in the module docstring and in __all__, so it appears to be public. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630844&group_id=5470 From noreply at sourceforge.net Mon Jan 8 20:55:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 11:55:13 -0800 Subject: [ python-Bugs-1574217 ] isinstance swallows exceptions Message-ID: Bugs item #1574217, was opened at 2006-10-09 21:55 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574217&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Harring (ferringb) >Assigned to: Raymond Hettinger (rhettinger) Summary: isinstance swallows exceptions Initial Comment: Attached is a simple example; yes, a bit contrived, but it's exactly what bit me in the ass for a week or two :) Nestled within abstract.c's recursive_isinstance is this lil nugget- icls = PyObject_GetAttr(inst, __class__); if (icls == NULL) { PyErr_Clear(); retval = 0; } else { No surrounding comments to indicate *why* it's swallowing exceptions, but the best explanation I've heard was that it was attempting to swallow just AttributeError... which would make sense. So the question is, what's the purpose of it swallowing exceptions there? 
Bad form of AttributeError catching, or some unstated reason? ---------------------------------------------------------------------- Comment By: Brian Harring (ferringb) Date: 2006-11-04 23:06 Message: Logged In: YES user_id=874085 quicky patch for this; basically, wipe the exception only if it's AttributeError, else let it bubble its way up. ---------------------------------------------------------------------- Comment By: Brian Harring (ferringb) Date: 2006-10-09 21:56 Message: Logged In: YES user_id=874085 additional note: this affects both 2.5 and 2.4, and probably stretches a bit further back also. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574217&group_id=5470 From noreply at sourceforge.net Mon Jan 8 21:06:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 12:06:05 -0800 Subject: [ python-Bugs-1630863 ] PyLong_AsLong doesn't check tp_as_number Message-ID: Bugs item #1630863, was opened at 2007-01-08 15:06 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roger Upole (rupole) Assigned to: Nobody/Anonymous (nobody) Summary: PyLong_AsLong doesn't check tp_as_number Initial Comment: Both PyInt_AsLong and PyLong_AsLongLong check if an object's type has PyNumberMethods defined. However, PyLong_AsLong does not, causing conversion to fail for objects which can legitimately be converted to a long. 
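Returning to the isinstance report (bug #1574217) above: the behaviour ferringb's patch aims for — clear only AttributeError during the __class__ lookup, let anything else bubble up — can be demonstrated from pure Python. This is a sketch; modern CPython 3 interpreters already behave this way, so the RuntimeError below propagates instead of being silently cleared as it was in 2.4/2.5:

```python
# An object whose __class__ attribute raises: under the old bare
# PyErr_Clear() in recursive_isinstance, this RuntimeError would be
# swallowed and isinstance() would quietly return False.
class BrokenClassAttr:
    @property
    def __class__(self):
        raise RuntimeError("error hidden by the old bare PyErr_Clear()")

def probe():
    try:
        isinstance(BrokenClassAttr(), int)
    except RuntimeError:
        return "propagated"   # narrowed behaviour: only AttributeError cleared
    return "swallowed"        # old behaviour: everything cleared
```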
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 From noreply at sourceforge.net Mon Jan 8 21:40:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 12:40:57 -0800 Subject: [ python-Bugs-1630894 ] Garbage output to file of specific size Message-ID: Bugs item #1630894, was opened at 2007-01-08 15:40 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael Culbertson (mculbert) Assigned to: Nobody/Anonymous (nobody) Summary: Garbage output to file of specific size Initial Comment: The attached script inexplicably fills the output file with garbage using the input file available at: http://cs.wheaton.edu/~mculbert/StdDetVol_Scaled_SMDS.dat (4.6Mb) If the string output on line 26 is changed to f.write("bla "), the output file is legible. If the expression is changed from f.write("%g " % k) to f.write("%f " % k) or f.write("%e " % k), the file is legible. If, however, the expression is changed to f.write('x'*len(str(k))+" "), the file remains illegible. Adding a print statement: print "%g " % k before line 26 indicates that k is assuming the correct values and that the string interpolation is functioning properly. This suggests that the problem causing the garbage may be related to the specific file size created with this particular set of data. The problem occurs with Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] under Windows XP. 
The problem doesn't occur with the same script and input file using Python 2.3.5 on Mac OS 10.4.8. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 From noreply at sourceforge.net Mon Jan 8 21:52:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 12:52:03 -0800 Subject: [ python-Bugs-889153 ] asyncore.dispatcher: incorrect connect Message-ID: Bugs item #889153, was opened at 2004-02-02 19:04 Message generated for change (Comment added) made by klimkin You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sankov Dmitry Alexandrovich (sankov_da) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.dispatcher: incorrect connect Initial Comment: When I use a non-blocking socket, the connect() method of the asyncore.dispatcher class appears to work incorrectly. Example: if the connection has not been established, then the socket is merely closed, handle_error is not called, and no exception is thrown. One more example: if the writable() and readable() methods return zero, then handle_connect() will never be called even if the connection is established. Thanks. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2007-01-08 23:52 Message: Logged In: YES user_id=410460 Originator: NO I was working with Dmitry desk by desk :), but I don't remember what he really meant with his "broken" English :). The main problem was the impossibility of connecting to another peer in a non-blocking manner. 
The patch contains the workaround code, which changes the original behaviour significantly. You may seek your own way to fix blocking connect there. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-08 20:02 Message: Logged In: YES user_id=341410 Originator: NO According to my reading, the only change necessary to make the semantics equivalent for a non-blocking socket for which .connect() has been called is to change a portion of the dispatcher's __init__ method to: try: self.addr = sock.getpeername() except socket.error: # if we can't get the peer name, we haven't connected yet self.connected = False ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2007-01-08 13:37 Message: Logged In: YES user_id=410460 Originator: NO It's about a _non-blocking_ socket. The socket has been created and connect called. However, for a non-blocking socket, connect returns immediately. The patch allows connect to be used in a non-blocking manner. I don't see any reason to limit the socket to being connected in a blocking manner. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 02:05 Message: Logged In: YES user_id=341410 Originator: NO It sounds as though the original poster is passing a socket that has been created, but which is not yet connected, to the dispatcher constructor. We should update the documentation to state that either the user should pass a completely connected socket (as returned by socket.accept(), or which has connected as the result of a blocking socket.connect() call), or use the .create_socket() and .connect() methods of the dispatcher. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 11:22 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. 
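The getpeername() probe josiahcarlson suggests for dispatcher.__init__ can be sketched in isolation. The peer_state helper below is hypothetical; it only illustrates the idea that a failing getpeername() means the socket is not yet connected (e.g. a non-blocking connect() is still in progress):

```python
import socket

def peer_state(sock):
    # Sketch of the suggested __init__ change: if getpeername() fails,
    # treat the socket as not-yet-connected instead of assuming it is.
    try:
        return sock.getpeername(), True
    except OSError:   # socket.error is an alias of OSError in Python 3
        return None, False
```

A freshly created, unconnected socket reports (None, False); either end of a connected pair reports connected=True.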
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=889153&group_id=5470 From noreply at sourceforge.net Mon Jan 8 21:59:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 12:59:26 -0800 Subject: [ python-Bugs-658749 ] asyncore connect() and winsock errors Message-ID: Bugs item #658749, was opened at 2002-12-26 21:25 Message generated for change (Comment added) made by klimkin You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Guido van Rossum (gvanrossum) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore connect() and winsock errors Initial Comment: asyncore's connect() method should interpret the winsock errors; these are different from Unix (and different between the Win98 family and the Win2k family). ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2007-01-08 23:59 Message: Logged In: YES user_id=410460 Originator: NO Sorry, but 2 years ago we were developing this for Linux and XP only ;). Even though they are said to be POSIX, they behave a little differently. As I remember, we added handling of some E* return codes which were appearing for non-blocking connect on XP. If you connect while in a blocking state, you won't get those values. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 09:10 Message: Logged In: YES user_id=341410 Originator: NO klimkin: Please explain how either of the versions of patch #909005 fix the problem. 
From what I can tell, the only change you made was to move the accept() handling of errors to the handle_read() method. Guido: In terms of winsock errors, which are actually raised on connection error between win98, win2k, and/or XP, 2003, and Vista? ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 11:24 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=658749&group_id=5470 From noreply at sourceforge.net Mon Jan 8 22:09:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 13:09:34 -0800 Subject: [ python-Bugs-1630794 ] Seg fault in readline call. Message-ID: Bugs item #1630794, was opened at 2007-01-08 18:02 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: gnovak (gnovak) Assigned to: Nobody/Anonymous (nobody) Summary: Seg fault in readline call. Initial Comment: GDL is a free implementation of the IDL programming language that can be built as a Python module to allow one to call IDL code from Python. http://gnudatalanguage.sourceforge.net/ When "enough" of readline has been activated, I get a seg fault with the backtrace listed below when trying to call any GDL code from Python. I've also reported the problem there. One way to initialize enough of readline is to use IPython (http://ipython.scipy.org), an enhanced interactive Python shell (this is how I found the bug). 
Another way is to follow the instructions from IPython's author (no IPython required) listed below. I am using: OS X 10.4.8 Python 2.4.2 (#1, Mar 22 2006, 21:27:43) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin GDL 0.9 pre 3 readline 5.0 ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2007-01-08 21:09 Message: Logged In: YES user_id=6656 Originator: NO You don't really provide enough information for us to be able to help you. A self-contained test case would be best; failing that, a backtrace from gdb might help. Also, Python 2.5 and readline 5.1 are out now, maybe you could try with them? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 From noreply at sourceforge.net Mon Jan 8 22:34:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 13:34:28 -0800 Subject: [ python-Bugs-1349106 ] email.Generators does not separate headers with "\r\n" Message-ID: Bugs item #1349106, was opened at 2005-11-05 17:50 Message generated for change (Comment added) made by t-v You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1349106&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Closed Resolution: Wont Fix Priority: 5 Private: No Submitted By: Manlio Perillo (manlioperillo) Assigned to: Barry A. Warsaw (bwarsaw) Summary: email.Generators does not separate headers with "\r\n" Initial Comment: Regards. The email.Generator module does not separate headers with "\r\n". 
Manlio Perillo ---------------------------------------------------------------------- Comment By: Thomas Viehmann (t-v) Date: 2007-01-08 22:34 Message: Logged In: YES user_id=680463 Originator: NO Hi, could you please reconsider closing this bug and consider fixing it or at least providing an option for standard behaviour? Leaving aside the question of the performance impact of postprocessing longer mails (for those, email may not be a good option in the first place), the module as is renders the email.Generator mostly useless for multipart messages with binary data that needs to be standards compliant, e.g. Multipart-Messages containing images, possibly signed, or uploading (with httplib) multipart/form-data. Thank you for your consideration. Kind regards Thomas ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-20 11:05 Message: Logged In: YES user_id=1054957 But the generator does not output in native line endings! On Windows: >>> from email.Message import Message >>> msg = Message() >>> msg["From"] = "me" >>> msg["To"] = "you" >>> print repr(msg.as_string()) 'From: me\nTo: you\n\n' ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-18 00:47 Message: Logged In: YES user_id=12800 I hear what you're saying, but so far, it has been more convenient for developers when the generator outputs native line endings. I can see a case for a flag or other switch on the Generator instance to force RFC 2822 line endings. I would suggest joining the email-sig and posting a request there so the issue can be discussed as an RFE. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 17:26 Message: Logged In: YES user_id=1054957 I do not agree here (but I'm not an expert). 
First - the documentation says: """The email package attempts to be as RFC-compliant as possible, supporting in addition to RFC 2822, such MIME-related RFCs as RFC 2045, RFC 2046, RFC 2047, and RFC 2231. """ But, as I can see, the generated email does not conform to RFC 2822. Second - I use the email package as a "filter": read raw email text, do some processing, generate raw email text. Really, I don't understand why generated headers aren't separated by '\r\n' and why one must rely on an external tool for the right conversion. Thanks. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 13:54 Message: Logged In: YES user_id=12800 The module that speaks the wire protocol should do the conversion. IMO, there's no other way to guarantee that you're RFC compliant. You could be getting your data from the email package, but you could be getting it from anywhere else, and /that/ source may not be RFC line ended either. Since you can't change every possible source of data for NNTP or SMTP, your network interface must guarantee conformance. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 10:20 Message: Logged In: YES user_id=1054957 Ok, thanks. But what if I don't use the smtplib module? I discovered the bug because I have written a small NNTP server with twisted, using the email module for parsing... ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 06:35 Message: Logged In: YES user_id=12800 Correct; this is by design. If you're worried about protocols such as RFC 2821 requiring \r\n line endings, don't. The smtplib module automatically ensures proper line endings for the on-the-wire communication. 
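The wire-level conversion Barry describes (the transport normalizing whatever the generator produced to RFC 2822 CRLF line endings) amounts to a one-line substitution. This sketch is illustrative, not smtplib's actual code, though smtplib performs an equivalent normalization before sending:

```python
import re

def to_rfc2822_lineends(text):
    # Normalize any bare \n or \r (and leave existing \r\n alone) to
    # CRLF, the RFC 2822 wire format. The alternation tries \r\n first
    # so already-correct sequences are not doubled.
    return re.sub(r'\r\n|\r|\n', '\r\n', text)
```

Applied to Manlio's example, 'From: me\nTo: you\n\n' becomes 'From: me\r\nTo: you\r\n\r\n', which is what goes on the wire regardless of the generator's native line endings.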
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1349106&group_id=5470 From noreply at sourceforge.net Mon Jan 8 22:46:23 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 13:46:23 -0800 Subject: [ python-Bugs-1630794 ] Seg fault in readline call. Message-ID: Bugs item #1630794, was opened at 2007-01-08 10:02 Message generated for change (Comment added) made by gnovak You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: gnovak (gnovak) Assigned to: Nobody/Anonymous (nobody) Summary: Seg fault in readline call. Initial Comment: GDL is a free implementation of the IDL programming language that can be built as a Python module to allow one to call IDL code from Python. http://gnudatalanguage.sourceforge.net/ When "enough" of readline has been activated, I get a seg fault with the backtrace listed below when trying to call any GDL code from Python. I've also reported the problem there. One way to initialize enough of readline is to use IPython (http://ipython.scipy.org), an enhanced interactive Python shell (this is how I found the bug). Another way is to follow the instructions from IPython's author (no IPython required) listed below. I am using: OS X 10.4.8 Python 2.4.2 (#1, Mar 22 2006, 21:27:43) [GCC 4.0.1 (Apple Computer, Inc. 
build 5247)] on darwin GDL 0.9 pre 3 readline 5.0 ---------------------------------------------------------------------- >Comment By: gnovak (gnovak) Date: 2007-01-08 13:46 Message: Logged In: YES user_id=1037806 Originator: YES The GDB backtrace is (and was) in the attached text file extra.txt. Also in extra.txt are instructions for causing Python to crash using plain Python and GDL. Unfortunately I don't know a way to cause the seg fault without installing GDL. I'm working on trying it with Python 2.5 and newer readlines. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2007-01-08 13:09 Message: Logged In: YES user_id=6656 Originator: NO You don't really provide enough information for us to be able to help you. A self-contained test case would be best; failing that, a backtrace from gdb might help. Also, Python 2.5 and readline 5.1 are out now, maybe you could try with them? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 From noreply at sourceforge.net Mon Jan 8 23:10:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 08 Jan 2007 14:10:37 -0800 Subject: [ python-Bugs-1349106 ] email.Generators does not separate headers with "\r\n" Message-ID: Bugs item #1349106, was opened at 2005-11-05 11:50 Message generated for change (Comment added) made by bwarsaw You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1349106&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library >Group: Feature Request >Status: Open >Resolution: None Priority: 5 Private: No Submitted By: Manlio Perillo (manlioperillo) Assigned to: Barry A. 
Warsaw (bwarsaw) Summary: email.Generators does not separates headers with "\r\n" Initial Comment: Regards. The email.Generator module does not separate headers with "\r\n". Manlio Perillo ---------------------------------------------------------------------- >Comment By: Barry A. Warsaw (bwarsaw) Date: 2007-01-08 17:10 Message: Logged In: YES user_id=12800 Originator: NO I am reopening this as a feature request. I still think it's better for protocols that require these line endings to ensure that their data is standards compliant, but I can see that there may be other use cases where you'd want to generate protocol required line endings. I'm not totally convinced, but it's worth opening the issue for now and discussing this on the email-sig. ---------------------------------------------------------------------- Comment By: Thomas Viehmann (t-v) Date: 2007-01-08 16:34 Message: Logged In: YES user_id=680463 Originator: NO Hi, could you please reconsider closing this bug and consider fixing it or at least providing an option for standard behaviour? Leaving aside the question of performance impact of postprocessing in longer mails (for those, email may not be a good option in the first place), the module as is renders the email.Generator mostly useless for multipart messages with binary data that needs to be standards compliant, e.g. Multipart-Messages containing images, possibly signed or uploading (with httplib) multipart/form-data. Thank you for your consideration. Kind regards Thomas ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-20 05:05 Message: Logged In: YES user_id=1054957 But the generator does not output in native line endings!
On Windows: >>> from email.Message import Message >>> msg = Message() >>> msg["From"] = "me" >>> msg["To"] = "you" >>> print repr(msg.as_string()) 'From: me\nTo: you\n\n' ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 18:47 Message: Logged In: YES user_id=12800 I hear what you're saying, but so far, it has been more convenient for developers when the generator outputs native line endings. I can see a case for a flag or other switch on the Generator instance to force RFC 2822 line endings. I would suggest joining the email-sig and posting a request there so the issue can be discussed as an RFE. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 11:26 Message: Logged In: YES user_id=1054957 I do not agree here (but I'm not an expert). First - the documentation says: """The email package attempts to be as RFC-compliant as possible, supporting in addition to RFC 2822, such MIME-related RFCs as RFC 2045, RFC 2046, RFC 2047, and RFC 2231. """ But, as I can see, the generated email does not conform to RFC 2822. Second - I use the email package as a "filter": read raw email text, do some processing, generate raw email text. Really, I don't understand why generated headers are not separated by '\r\n' and one must rely on an external tool for the right conversion. Thanks. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 07:54 Message: Logged In: YES user_id=12800 The module that speaks the wire protocol should do the conversion. IMO, there's no other way to guarantee that you're RFC compliant. You could be getting your data from the email package, but you could be getting it from anywhere else, and /that/ source may not be RFC line ended either.
Since you can't change every possible source of data for NNTP or SMTP, your network interface must guarantee conformance. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 04:20 Message: Logged In: YES user_id=1054957 Ok, thanks. But what if I don't use the smtplib module? I discovered the bug because I have written a small NNTP server with twisted, using email module for parsing... ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 00:35 Message: Logged In: YES user_id=12800 Correct; this is by design. If you're worried about protocols such as RFC 2821 requiring \r\n line endings, don't. The smtplib module automatically ensures proper line endings for the on-the-wire communication. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1349106&group_id=5470 From noreply at sourceforge.net Tue Jan 9 10:40:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 01:40:28 -0800 Subject: [ python-Bugs-1630794 ] Seg fault in readline call. Message-ID: Bugs item #1630794, was opened at 2007-01-08 18:02 Message generated for change (Comment added) made by mwh You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: gnovak (gnovak) Assigned to: Nobody/Anonymous (nobody) Summary: Seg fault in readline call. Initial Comment: GDL is a free implementation of the IDL programming language that can be built as a Python module to allow one to call IDL code from Python. 
http://gnudatalanguage.sourceforge.net/ When "enough" of readline has been activated, I get a seg fault with the backtrace listed below when trying to call any GDL code from Python. I've also reported the problem there. One way to initialize enough of readline is to use IPython (http://ipython.scipy.org), an enhanced interactive Python shell (this is how I found the bug). Another way is to follow the instructions from IPython's author (no IPython required) listed below. I am using: OS X 10.4.8 Python 2.4.2 (#1, Mar 22 2006, 21:27:43) [GCC 4.0.1 (Apple Computer, Inc. build 5247)] on darwin GDL 0.9 pre 3 readline 5.0 ---------------------------------------------------------------------- >Comment By: Michael Hudson (mwh) Date: 2007-01-09 09:40 Message: Logged In: YES user_id=6656 Originator: NO Ah, I didn't see extra.txt, sorry about that. I'd advise a debug build, and continuing to try newer versions of Python and readline. The crashing line seems to be: line = history_get(state->length)->line; which suggests that readline has gotten confused somehow. But it could be Python misuse, of course. Or a memory scribbling bug in some extension module, that's always fun ---------------------------------------------------------------------- Comment By: gnovak (gnovak) Date: 2007-01-08 21:46 Message: Logged In: YES user_id=1037806 Originator: YES The GDB backtrace is (and was) in the attached text file extra.txt. Also in extra.txt are instructions for causing Python to crash using plain Python and GDL. Unfortunately I don't know a way to cause the seg fault without installing GDL. I'm working on trying it with Python2.5 and newer readlines. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2007-01-08 21:09 Message: Logged In: YES user_id=6656 Originator: NO You don't really provide enough information for us to be able to help you. A self-contained test case would be best, failing that a backtrace from gdb might help. 
Also, Python 2.5 and readline 5.1 are out now, maybe you could try with them? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630794&group_id=5470 From noreply at sourceforge.net Tue Jan 9 15:56:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 06:56:25 -0800 Subject: [ python-Bugs-1627575 ] RotatingFileHandler cannot recover from failed doRollover() Message-ID: Bugs item #1627575, was opened at 2007-01-04 06:08 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Forest Wilkinson (forest) Assigned to: Vinay Sajip (vsajip) Summary: RotatingFileHandler cannot recover from failed doRollover() Initial Comment: When RotatingFileHandler.doRollover() raises an exception, it puts the handler object in a permanently failing state, with no way to recover using RotatingFileHandler methods. From that point on, the handler object raises an exception every time a message is logged, which renders logging in an application practically useless. Furthermore, a handleError() method has no good way of correcting the problem, because the API does not expose any way to re-open the file after doRollover() has closed it. Unfortunately, this is a common occurrence on Windows, because doRollover() will fail if someone is running tail -f on the log file. Suggestions: - Make doRollover() always leave the handler object in a usable state, even if the rollover fails. 
- Add a reOpen() method to FileHandler, which an error handler could use to recover from problems like this. (It would also be useful for applications that want to re-open log files in response to a SIGHUP.) ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-09 14:56 Message: Logged In: YES user_id=308438 Originator: NO I've added an _open() method to logging.FileHandler [checked into trunk]. This facilitates reopening by derived class error handlers. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-04 06:27 Message: Logged In: YES user_id=33168 Originator: NO Vinay, was this addressed? I thought there was a similar issue. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627575&group_id=5470 From noreply at sourceforge.net Tue Jan 9 19:31:38 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 10:31:38 -0800 Subject: [ python-Bugs-1630894 ] Garbage output to file of specific size Message-ID: Bugs item #1630894, was opened at 2007-01-08 21:40 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael Culbertson (mculbert) Assigned to: Nobody/Anonymous (nobody) Summary: Garbage output to file of specific size Initial Comment: The attached script inexplicably fills the output file with garbage using the input file available at: http://cs.wheaton.edu/~mculbert/StdDetVol_Scaled_SMDS.dat (4.6Mb) If the string written in line 26 is changed to f.write("bla "), the output file is legible. If the expression is changed from f.write("%g " % k) to f.write("%f " % k) or f.write("%e " % k), the file is legible. If, however, the expression is changed to f.write('x'*len(str(k))+" "), the file remains illegible. Adding a print statement: print "%g " % k before line 26 indicates that k is assuming the correct values and that the string interpolation is functioning properly. This suggests that the problem causing the garbage may be related to the specific file size created with this particular set of data. The problem occurs with Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] under Windows XP. The problem doesn't occur with the same script and input file using Python 2.3.5 on Mac OS 10.4.8. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-09 19:31 Message: Logged In: YES user_id=21627 Originator: NO Can you please report what the expected output is? Mine (created on Linux) starts with 40 40 32 64 followed by many "0.0 " values. Also, can you please report what the actual output is that you get? In what way is it "illegible"? What version of Numeric are you using?
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 From noreply at sourceforge.net Tue Jan 9 21:26:01 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 12:26:01 -0800 Subject: [ python-Bugs-1631769 ] Discrepancy between iterating empty and non-empty deques Message-ID: Bugs item #1631769, was opened at 2007-01-09 22:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Christos Georgiou (tzot) Assigned to: Nobody/Anonymous (nobody) Summary: Discrepancy between iterating empty and non-empty deques Initial Comment: >>> from collections import deque >>> empty= deque() >>> nonempty= deque([None]) >>> iter_empty= iter(empty) >>> iter_nonempty= iter(nonempty) >>> empty.append(1) >>> nonempty.append(1) >>> iter_empty.next() Traceback (most recent call last): File "", line 1, in iter_empty.next() StopIteration >>> iter_nonempty.next() Traceback (most recent call last): File "", line 1, in iter_nonempty.next() RuntimeError: deque mutated during iteration >>> If the RuntimeError is the intended behaviour for a modified deque after its iterator has been created, then iter_empty.next() should also raise the same RuntimeError. 
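[Editorial note: the discrepancy reported above can be reproduced directly. A minimal sketch, translated to Python 3 syntax (next(it) instead of it.next()); on a current CPython, where the mutation check was later made to run before the exhaustion check, both iterators raise the same RuntimeError:]

```python
# Sketch of the reported discrepancy, in Python 3 syntax. On current
# CPython the "deque mutated during iteration" check runs before the
# exhaustion check, so BOTH iterators now raise RuntimeError.
from collections import deque

def poke(it):
    """Advance an iterator once and report which exception it raises."""
    try:
        next(it)
        return "no exception"
    except StopIteration:
        return "StopIteration"
    except RuntimeError:
        return "RuntimeError"

empty = deque()
nonempty = deque([None])
iter_empty = iter(empty)
iter_nonempty = iter(nonempty)

empty.append(1)      # mutate both deques after the iterators were created
nonempty.append(1)

print(poke(iter_empty))
print(poke(iter_nonempty))
```

On interpreters predating the fix, the first call reported StopIteration while the second reported RuntimeError, which is exactly the inconsistency the submitter describes.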
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 From noreply at sourceforge.net Tue Jan 9 21:29:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 12:29:06 -0800 Subject: [ python-Bugs-1631769 ] Discrepancy between iterating empty and non-empty deques Message-ID: Bugs item #1631769, was opened at 2007-01-09 22:26 Message generated for change (Comment added) made by tzot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Christos Georgiou (tzot) >Assigned to: Raymond Hettinger (rhettinger) Summary: Discrepancy between iterating empty and non-empty deques Initial Comment: >>> from collections import deque >>> empty= deque() >>> nonempty= deque([None]) >>> iter_empty= iter(empty) >>> iter_nonempty= iter(nonempty) >>> empty.append(1) >>> nonempty.append(1) >>> iter_empty.next() Traceback (most recent call last): File "", line 1, in iter_empty.next() StopIteration >>> iter_nonempty.next() Traceback (most recent call last): File "", line 1, in iter_nonempty.next() RuntimeError: deque mutated during iteration >>> If the RuntimeError is the intended behaviour for a modified deque after its iterator has been created, then iter_empty.next() should also raise the same RuntimeError. 
---------------------------------------------------------------------- >Comment By: Christos Georgiou (tzot) Date: 2007-01-09 22:29 Message: Logged In: YES user_id=539787 Originator: YES Assigned to Raymond as requested in http://mail.python.org/pipermail/python-dev/2007-January/070528.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 From noreply at sourceforge.net Tue Jan 9 21:32:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 12:32:57 -0800 Subject: [ python-Bugs-1630515 ] doc misleading in re.compile Message-ID: Bugs item #1630515, was opened at 2007-01-08 14:09 Message generated for change (Comment added) made by tzot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630515&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: doc misleading in re.compile Initial Comment: http://www.python.org/doc/2.5/lib/node46.html has compile(pattern[, flags]) Compile a regular expression pattern into a regular expression object, which can be used for matching using its match() and search() methods, described below. This could be read as implying that the regular expression object can ONLY be used for matching using the match() and search() methods. In fact, I believe it can be used wherever "pattern" is mentioned. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2007-01-09 22:32 Message: Logged In: YES user_id=539787 Originator: NO I like exact wording too, but I don't think this is a serious issue. 
I would suggest, unless you (kbriggs) offers a suitable patch, that this be dropped as a non-bug (it's a RFE, anyway). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630515&group_id=5470 From noreply at sourceforge.net Tue Jan 9 21:46:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 12:46:28 -0800 Subject: [ python-Bugs-1631769 ] Discrepancy between iterating empty and non-empty deques Message-ID: Bugs item #1631769, was opened at 2007-01-09 15:26 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Christos Georgiou (tzot) Assigned to: Raymond Hettinger (rhettinger) Summary: Discrepancy between iterating empty and non-empty deques Initial Comment: >>> from collections import deque >>> empty= deque() >>> nonempty= deque([None]) >>> iter_empty= iter(empty) >>> iter_nonempty= iter(nonempty) >>> empty.append(1) >>> nonempty.append(1) >>> iter_empty.next() Traceback (most recent call last): File "", line 1, in iter_empty.next() StopIteration >>> iter_nonempty.next() Traceback (most recent call last): File "", line 1, in iter_nonempty.next() RuntimeError: deque mutated during iteration >>> If the RuntimeError is the intended behaviour for a modified deque after its iterator has been created, then iter_empty.next() should also raise the same RuntimeError. 
---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-09 15:46 Message: Logged In: YES user_id=80475 Originator: NO Fixed in rev 53299. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2007-01-09 15:29 Message: Logged In: YES user_id=539787 Originator: YES Assigned to Raymond as requested in http://mail.python.org/pipermail/python-dev/2007-January/070528.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1631769&group_id=5470 From noreply at sourceforge.net Tue Jan 9 21:50:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 09 Jan 2007 12:50:28 -0800 Subject: [ python-Bugs-1574593 ] ctypes: Returning c_void_p from callback doesn't work Message-ID: Bugs item #1574593, was opened at 2006-10-10 17:18 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574593&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Works For Me Priority: 5 Private: No Submitted By: Albert Strasheim (albertstrasheim) Assigned to: Thomas Heller (theller) Summary: ctypes: Returning c_void_p from callback doesn't work Initial Comment: C code: extern CALLBACK_API void* foo(void*(*callback)()) { printf("foo calling callback\n"); callback(); printf("callback returned in foo\n"); } callback.py contains: import sys print sys.version from ctypes import * def callback(*args): return c_void_p() #lib = cdll['libcallback.so'] lib = cdll['callback.dll'] lib.foo.argtypes = [CFUNCTYPE(c_void_p)] lib.foo(lib.foo.argtypes[0](callback)) With Python 2.4.3 and ctypes 1.0.0 + Thomas Heller's patch for another issue (which doesn't seem to affect this situation, but anyway) I get the following error: 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] foo calling callback Traceback (most recent call last): File "source/callbacks.c", line 216, in 'converting callback result' TypeError: cannot be converted to pointer Exception in None ignored callback returned in foo With Python 2.5 and ctypes 1.0.1 I get: 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] foo calling callback Traceback (most recent call last): File "\loewis\25\python\Modules\_ctypes\callbacks.c", line 216, in 'converting callback result' TypeError: cannot be converted to pointer Exception in ignored callback returned in foo Returning a Python integer from callback() doesn't cause an error to be raised. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-09 21:50 Message: Logged In: YES user_id=11105 Originator: NO Sorry for the late reply, and I hope you are still interested in this. Basically, when you return something from the callback, ctypes does the same as if you would do this "c_void_p(something)".
Now, you cannot construct a c_void_p instance from a c_void_p instance, you get exactly the same error as you mention above: >>> c_void_p(c_void_p(42)) Traceback (most recent call last): File "", line 1, in ? TypeError: cannot be converted to pointer >>> I'm not sure if this should be considered a bug or not, anyway there is an easy workaround: Return an integer from the callback, or, if you have a c_void_p instance, the .value attribute thereof. I think this should not be fixed. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574593&group_id=5470 From noreply at sourceforge.net Wed Jan 10 13:56:56 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 04:56:56 -0800 Subject: [ python-Bugs-1632328 ] logging.config.fileConfig doesn't clear logging._handlerList Message-ID: Bugs item #1632328, was opened at 2007-01-10 13:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Stefan H. Holek (shh42) Assigned to: Nobody/Anonymous (nobody) Summary: logging.config.fileConfig doesn't clear logging._handlerList Initial Comment: logging.config.fileConfig resets logging._handlers but not logging._handlerList, resulting in tracebacks on shutdown. e.g. 
Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/local/python2.4/lib/python2.4/atexit.py", line 24, in _run_exitfuncs func(*targs, **kargs) File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 1333, in shutdown h.close() File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 674, in close del _handlers[self] KeyError: AFAICT this is fixed in Python 2.5 but has not been backported. Zope cannot use 2.5 as of yet. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 From noreply at sourceforge.net Wed Jan 10 18:00:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 09:00:08 -0800 Subject: [ python-Feature Requests-1630515 ] doc misleading in re.compile Message-ID: Feature Requests item #1630515, was opened at 2007-01-08 12:09 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1630515&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Documentation >Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Keith Briggs (kbriggs) Assigned to: Nobody/Anonymous (nobody) Summary: doc misleading in re.compile Initial Comment: http://www.python.org/doc/2.5/lib/node46.html has compile(pattern[, flags]) Compile a regular expression pattern into a regular expression object, which can be used for matching using its match() and search() methods, described below. This could be read as implying that the regular expression object can ONLY be used for matching using the match() and search() methods. In fact, I believe it can be used wherever "pattern" is mentioned. 
---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2007-01-09 20:32 Message: Logged In: YES user_id=539787 Originator: NO I like exact wording too, but I don't think this is a serious issue. I would suggest, unless you (kbriggs) offers a suitable patch, that this be dropped as a non-bug (it's a RFE, anyway). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1630515&group_id=5470 From noreply at sourceforge.net Wed Jan 10 18:01:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 09:01:36 -0800 Subject: [ python-Feature Requests-1349106 ] email.Generators does not separates headers with "\r\n" Message-ID: Feature Requests item #1349106, was opened at 2005-11-05 16:50 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1349106&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Python Library >Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Manlio Perillo (manlioperillo) Assigned to: Barry A. Warsaw (bwarsaw) Summary: email.Generators does not separates headers with "\r\n" Initial Comment: Regards. The email.Generator module does not separate headers with "\r\n". Manlio Perillo ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2007-01-08 22:10 Message: Logged In: YES user_id=12800 Originator: NO I am reopening this as a feature request.
I still think it's better for protocols that require these line endings to ensure that their data is standards compliant, but I can see that there may be other use cases where you'd want to generate protocol required line endings. I'm not totally convinced, but it's worth opening the issue for now and discussing this on the email-sig. ---------------------------------------------------------------------- Comment By: Thomas Viehmann (t-v) Date: 2007-01-08 21:34 Message: Logged In: YES user_id=680463 Originator: NO Hi, could you please reconsider closing this bug and consider fixing it or at least providing an option for standard behaviour? Leaving aside the question of performance impact of postprocessing in longer mails (for those, email may not be a good option in the first place), the module as is renders the email.Generator mostly useless for multipart messages with binary data that needs to be standards compliant, e.g. Multipart-Messages containing images, possibly signed or uploading (with httplib) multipart/form-data. Thank you for your consideration. Kind regards Thomas ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-20 10:05 Message: Logged In: YES user_id=1054957 But the generator does not output in native line endings! On Windows: >>> from email.Message import Message >>> msg = Message() >>> msg["From"] = "me" >>> msg["To"] = "you" >>> print repr(msg.as_string()) 'From: me\nTo: you\n\n' ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 23:47 Message: Logged In: YES user_id=12800 I hear what you're saying, but so far, it has been more convenient for developers when the generator outputs native line endings. I can see a case for a flag or other switch on the Generator instance to force RFC 2822 line endings.
I would suggest joining the email-sig and posting a request there so the issue can be discussed as an RFE. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 16:26 Message: Logged In: YES user_id=1054957 I do not agree here (but I'm not an expert). First - the documentation says: """The email package attempts to be as RFC-compliant as possible, supporting in addition to RFC 2822, such MIME-related RFCs as RFC 2045, RFC 2046, RFC 2047, and RFC 2231. """ But, as I can see, the generated email does not conform to RFC 2822. Second - I use the email package as a "filter": read raw email text, do some processing, generate raw email text. Really, I don't understand why generated headers are not separated by '\r\n' and one must rely on an external tool for the right conversion. Thanks. ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 12:54 Message: Logged In: YES user_id=12800 The module that speaks the wire protocol should do the conversion. IMO, there's no other way to guarantee that you're RFC compliant. You could be getting your data from the email package, but you could be getting it from anywhere else, and /that/ source may not be RFC line ended either. Since you can't change every possible source of data for NNTP or SMTP, your network interface must guarantee conformance. ---------------------------------------------------------------------- Comment By: Manlio Perillo (manlioperillo) Date: 2006-01-17 09:20 Message: Logged In: YES user_id=1054957 Ok, thanks. But what if I don't use the smtplib module? I discovered the bug because I have written a small NNTP server with twisted, using email module for parsing... ---------------------------------------------------------------------- Comment By: Barry A. Warsaw (bwarsaw) Date: 2006-01-17 05:35 Message: Logged In: YES user_id=12800 Correct; this is by design.
If you're worried about protocols such as RFC 2821 requiring \r\n line endings, don't. The smtplib module automatically ensures proper line endings for the on-the-wire communication. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1349106&group_id=5470 From noreply at sourceforge.net Wed Jan 10 22:30:45 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 13:30:45 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 16:17 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 6 Private: No Submitted By: dib (dib_at_work) >Assigned to: Raymond Hettinger (rhettinger) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.)
---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 21:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable)) Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-06 02:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-20 01:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 17:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments.
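The scenario the report describes can be sketched in pure Python (on a current interpreter, where the eventual fix confines the keyword check to the built-in base class, so the subclass case works; the class name is invented for illustration):

```python
class TaggedSet(set):
    # A set subclass that accepts its own keyword argument and does not
    # forward it to the built-in base class -- the pattern the report
    # says the 2.4.2c1 check broke.
    def __init__(self, iterable=(), tag=None):
        super().__init__(iterable)
        self.tag = tag

s = TaggedSet([1, 2, 3], tag="demo")   # works: subclass keyword accepted

# The built-in set itself still rejects keyword arguments, as intended:
try:
    set(iterable=[1, 2])
except TypeError:
    pass  # set() takes no keyword arguments
```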
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Thu Jan 11 01:49:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 16:49:58 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 11:17 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 6 Private: No Submitted By: dib (dib_at_work) Assigned to: Raymond Hettinger (rhettinger) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.)
---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-10 19:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution:

- if(!PyArg_NoKeywords("set()", kwds)
+ if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds)

---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 16:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable)) Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 15:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable.
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 12:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Thu Jan 11 02:04:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 10 Jan 2007 17:04:50 -0800 Subject: [ python-Bugs-1619060 ] bisect on presorted list Message-ID: Bugs item #1619060, was opened at 2006-12-19 16:14 Message generated for change (Settings changed) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619060&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jeffrey C. Jacobs (timehorse) >Assigned to: Raymond Hettinger (rhettinger) Summary: bisect on presorted list Initial Comment: The python and c implementation of bisect do not support custom-sorted lists using the list.sort method. In order to support an arbitrarily sorted list via sort(cmp, key, reverse), I have added 3 corresponding parameters to the bisect methods for bisection and insort (insert-sorted) corresponding to the parameters in sort. This would be useful if a list is initially sorted by its sort method and then the client wishes to maintain the sort order (or reverse-sort order) while inserting an element. In this case, being able to use the same, arbitrary binary function cmp, unary function key and boolean reverse flag to preserve the list order. 
The change imposes 3 new branch conditions and potential no-op function calls for when key is None. I have here implemented and partially tested the python implementation and if someone besides me would find this useful, I will update the _bisectmodule.c for this change as well. The Heap functions may also find use of an arbitrary predicate function so I may look at that later, but because bisect goes hand in hand with sorting, I wanted to tackle that first. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2006-12-19 16:43 Message: Logged In: YES user_id=80475 Originator: NO I'm -1 on this patch. At first blush it would seem nice to propagate sort's notion of a key= function; however, sort() is an all-at-once operation that can guarantee the function gets called only once per key. In contrast, bisect() is more granular, so consecutive calls may need to invoke the same key= function again and again. This is almost always the-wrong-way-to-do-it (the key function should be precomputed and either stored separately or follow a decorate-sort pattern). By including custom sorting in bisect's API we would be diverting users away from better approaches. A better idea would be to create a recipe for a SortedList class that performed pre-calculated custom keys upon insertion and maintained an internal, decorated list.
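The SortedList recipe suggested here can be sketched as follows (a minimal illustration of the decorate-on-insert idea, not a full implementation; the names are invented for the example):

```python
import bisect

class SortedList:
    # Minimal sketch of the suggested recipe: keep (key(value), value)
    # pairs internally so the custom key is computed exactly once per
    # element, at insertion time -- bisect never re-invokes key().
    def __init__(self, iterable=(), key=lambda v: v):
        self._key = key
        self._items = sorted((key(v), v) for v in iterable)

    def insert(self, value):
        # bisect works on the precomputed keys; note that on equal keys
        # the values themselves are compared, so they must be orderable.
        bisect.insort(self._items, (self._key(value), value))

    def __iter__(self):
        return (v for _, v in self._items)

sl = SortedList(["bbb", "a", "cc"], key=len)
sl.insert("dddd")
print(list(sl))  # ['a', 'cc', 'bbb', 'dddd']
```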
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619060&group_id=5470 From noreply at sourceforge.net Thu Jan 11 10:13:39 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 01:13:39 -0800 Subject: [ python-Bugs-793764 ] pyconfig.h defines _POSIX_C_SOURCE, conflicting with feature Message-ID: Bugs item #793764, was opened at 2003-08-23 16:19 Message generated for change (Comment added) made by doko You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=793764&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Installation Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: pyconfig.h defines _POSIX_C_SOURCE, conflicting with feature Initial Comment: [forwarded from http://bugs.debian.org/206805] the installed include/python2.3/pyconfig.h defines _POSIX_C_SOURCE, which leaks down into packages built against python-2.3. AFAIK, _POSIX_C_SOURCE is reserved for use by the C library, and is of course defined in /usr/include/features.h.
Example excerpt from a build log:

In file included from /usr/include/python2.3/Python.h:8,
                 from sg_config.h:22, from sg.h:29,
                 from sg_project_autosave.c:26:
/usr/include/python2.3/pyconfig.h:844:1: Warnung: "_POSIX_C_SOURCE" redefined
In file included from /usr/include/stdlib.h:25,
                 from sg_project_autosave.c:19:
/usr/include/features.h:171:1: Warnung: this is the location of the previous definition

---------------------------------------------------------------------- >Comment By: Matthias Klose (doko) Date: 2007-01-11 10:13 Message: Logged In: YES user_id=60903 Originator: YES It's described in the Single Unix Specification 3, "2.2.1 Strictly Conforming POSIX Application", item 8: "For the C programming language, shall define _POSIX_C_SOURCE to be 200112L before any header is included". So _POSIX_C_SOURCE should be defined "before any header is included". ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2003-08-31 19:26 Message: Logged In: YES user_id=21627 _POSIX_C_SOURCE is not reserved for the C library, but for the application, see http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap02.html (section "Strictly Conforming POSIX Application") A conforming POSIX application *must* define _POSIX_C_SOURCE, so if your C library also defines it, it is a bug in the C library.
Most likely, the author failed to include Python.h before other system headers, as required per http://www.python.org/doc/current/api/includes.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=793764&group_id=5470 From noreply at sourceforge.net Thu Jan 11 13:27:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 04:27:21 -0800 Subject: [ python-Bugs-1632328 ] logging.config.fileConfig doesn't clear logging._handlerList Message-ID: Bugs item #1632328, was opened at 2007-01-10 12:56 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Stefan H. Holek (shh42) >Assigned to: Vinay Sajip (vsajip) Summary: logging.config.fileConfig doesn't clear logging._handlerList Initial Comment: logging.config.fileConfig resets logging._handlers but not logging._handlerList, resulting in tracebacks on shutdown. e.g. Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/usr/local/python2.4/lib/python2.4/atexit.py", line 24, in _run_exitfuncs func(*targs, **kargs) File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 1333, in shutdown h.close() File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 674, in close del _handlers[self] KeyError: AFAICT this is fixed in Python 2.5 but has not been backported. Zope cannot use 2.5 as of yet. 
---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 12:27 Message: Logged In: YES user_id=849994 Originator: NO Does a backport to 2.4 make sense? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 From noreply at sourceforge.net Thu Jan 11 16:41:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 07:41:41 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 11:46 Message generated for change (Comment added) made by bediviere You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Greg Ward (gward) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error as "--test" is an option. But it looks like even the C library's getopt() behaves similarly. It will be nice if optparse can report an error in this case.
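The reported behaviour can be reproduced directly by handing the argument list to parse_args() (optparse still behaves this way on current Pythons):

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option("--test", action="store_true")
parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)

# "--test" is silently consumed as the value of -m instead of being
# treated as the next option -- no error is reported.
options, args = parser.parse_args(["-m", "--test"])
print(repr(options.comment))  # '--test'
```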
---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2007-01-11 08:41 Message: Logged In: YES user_id=945502 Originator: NO For what it's worth, argparse_ gives an error here: >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--test', action='store_true') >>> parser.add_argument('-m', dest='comment') >>> parser.parse_args(['-m', '--test']) usage: PROG [-h] [--test] [-m COMMENT] PROG: error: argument -m: expected one argument That's because argparse assumes that anything that looks like "--foo" is an option (unless it's after the pseudo-argument "--" on the command line). .. _argparse: http://argparse.python-hosting.com/ ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 10:58 Message: Logged In: YES user_id=984087 Originator: YES It is possible to deduce "--test" as an option because it is in the list of options given to optparse. But your point about what if the user really wants "--test" as an option argument is valid. I guess this request can be closed. Thanks, Raghu. ---------------------------------------------------------------------- Comment By: David Goodger (goodger) Date: 2007-01-05 09:28 Message: Logged In: YES user_id=7733 Originator: NO I think what you're asking for is ambiguous at best. In your example, how could optparse possibly decide that the "--test" is a second option, not an option argument? What if you *do* want "--test" as an argument? Assigning to Greg Ward. Recommend closing as invalid. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 08:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post. 
File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Thu Jan 11 17:16:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 08:16:54 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 18:46 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Greg Ward (gward) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error as "--test" is an option. But it looks like even C library's getopt() behaves similarly. It will be nice if optparse can report error in this case.
---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 16:16 Message: Logged In: YES user_id=849994 Originator: NO So how does one give option arguments starting with - to argparse? ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2007-01-11 15:41 Message: Logged In: YES user_id=945502 Originator: NO For what it's worth, argparse_ gives an error here: >>> parser = argparse.ArgumentParser(prog='PROG') >>> parser.add_argument('--test', action='store_true') >>> parser.add_argument('-m', dest='comment') >>> parser.parse_args(['-m', '--test']) usage: PROG [-h] [--test] [-m COMMENT] PROG: error: argument -m: expected one argument That's because argparse assumes that anything that looks like "--foo" is an option (unless it's after the pseudo-argument "--" on the command line). .. _argparse: http://argparse.python-hosting.com/ ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 17:58 Message: Logged In: YES user_id=984087 Originator: YES It is possible to deduce "--test" as an option because it is in the list of options given to optparse. But your point about what if the user really wants "--test" as an option argument is valid. I guess this request can be closed. Thanks, Raghu. ---------------------------------------------------------------------- Comment By: David Goodger (goodger) Date: 2007-01-05 16:28 Message: Logged In: YES user_id=7733 Originator: NO I think what you're asking for is ambiguous at best. In your example, how could optparse possibly decide that the "--test" is a second option, not an option argument? What if you *do* want "--test" as an argument? Assigning to Greg Ward. Recommend closing as invalid. 
---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 15:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post. File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Thu Jan 11 19:01:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 10:01:52 -0800 Subject: [ python-Bugs-1504333 ] sgmllib should allow angle brackets in quoted values Message-ID: Bugs item #1504333, was opened at 2006-06-11 08:58 Message generated for change (Comment added) made by haepal You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1504333&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sam Ruby (rubys) Assigned to: Nobody/Anonymous (nobody) Summary: sgmllib should allow angle brackets in quoted values Initial Comment: Real live example (search for "other
corrections") http://latticeqcd.blogspot.com/2006/05/non-relativistic-qcd.html This addresses the following (included in the file): # XXX The following should skip matching quotes (' or ") ---------------------------------------------------------------------- Comment By: Haejoong Lee (haepal) Date: 2007-01-11 13:01 Message: Logged In: YES user_id=135609 Originator: NO Could someone check if the following patch fixes the problem? This patch was made against revision 51854.

--- sgmllib.py.org	2006-11-06 02:31:12.000000000 -0500
+++ sgmllib.py	2007-01-11 12:39:30.000000000 -0500
@@ -16,6 +16,35 @@
 
 # Regular expressions used for parsing
 
+class MyMatch:
+    def __init__(self, i):
+        self._i = i
+    def start(self, i):
+        return self._i
+
+class EndBracket:
+    def search(self, data, index):
+        s = data[index:]
+        bs = None
+        quote = None
+        for i,c in enumerate(s):
+            if bs:
+                bs = False
+            else:
+                if c == '<' or c == '>':
+                    if quote is None:
+                        break
+                elif c == "'" or c == '"':
+                    if c == quote:
+                        quote = None
+                    else:
+                        quote = c
+                elif c == '\\':
+                    bs = True
+        else:
+            return None
+        return MyMatch(i+index)
+
 interesting = re.compile('[&<]')
 incomplete = re.compile('&([a-zA-Z][a-zA-Z0-9]*|#[0-9]*)?|'
                         '<([a-zA-Z][^<>]*|'
@@ -29,7 +58,8 @@
 shorttagopen = re.compile('<[a-zA-Z][-.a-zA-Z0-9]*/')
 shorttag = re.compile('<([a-zA-Z][-.a-zA-Z0-9]*)/([^/]*)/')
 piclose = re.compile('>')
-endbracket = re.compile('[<>]')
+#endbracket = re.compile('[<>]')
+endbracket = EndBracket()
 tagfind = re.compile('[a-zA-Z][-_.a-zA-Z0-9]*')
 attrfind = re.compile(
     r'\s*([a-zA-Z_][-:.a-zA-Z_0-9]*)(\s*=\s*'

---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-09-11 00:26 Message: Logged In: YES user_id=33168 I reverted the patch and added the test case for sgml so the infinite loop doesn't recur. This was mentioned several times on python-dev. Committed revision 51854. (head) Committed revision 51850. (2.5) Committed revision 51853.
(2.4) ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2006-06-29 13:17 Message: Logged In: YES user_id=3066 I checked in a modified version of this patch: changed to use separate REs for start and end tags to reduce matching cost for end tags; extended tests; updated to avoid breaking previous changes to support IPv6 addresses in unquoted attribute values. Committed as revisions 47154 (trunk) and 47155 (release24-maint). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1504333&group_id=5470 From noreply at sourceforge.net Thu Jan 11 19:19:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 10:19:13 -0800 Subject: [ python-Feature Requests-1627266 ] optparse "store" action should not gobble up next option Message-ID: Feature Requests item #1627266, was opened at 2007-01-03 11:46 Message generated for change (Comment added) made by bediviere You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Greg Ward (gward) Summary: optparse "store" action should not gobble up next option Initial Comment: Hi, Check the following code:

--------------opttest.py----------
from optparse import OptionParser

def process_options():
    global options, args, parser
    parser = OptionParser()
    parser.add_option("--test", action="store_true")
    parser.add_option("-m", metavar="COMMENT", dest="comment", default=None)
    (options, args) = parser.parse_args()
    return

process_options()
print "comment (%r)" % options.comment
---------------------

$ ./opttest.py -m --test
comment ('--test')

I was expecting this to give an error, as "--test" is an option. But it looks like even the C library's getopt() behaves similarly. It would be nice if optparse could report an error in this case. ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2007-01-11 11:19 Message: Logged In: YES user_id=945502 Originator: NO At the moment, you generally can't: http://argparse.python-hosting.com/ticket/25 though the simple value "-" is valid. I do plan to address this in the not-so-distant future (though no one yet has complained about it).
For the optparse module, I think the OP's problem could likely be fixed by editing _process_long_opt() and _process_short_opts() to do some checks around the code:

elif nargs == 1:
    value = rargs.pop(0)
else:
    value = tuple(rargs[0:nargs])
    del rargs[0:nargs]

You could make sure that the option arguments (the "value" objects in the code above) were not already existing options with a check like:

all(not self._match_long_opt(arg) and not self._short_opt.get(arg)
    for arg in rargs[0:nargs])

---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 09:16 Message: Logged In: YES user_id=849994 Originator: NO So how does one give option arguments starting with - to argparse? ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2007-01-11 08:41 Message: Logged In: YES user_id=945502 Originator: NO For what it's worth, argparse_ gives an error here:

>>> parser = argparse.ArgumentParser(prog='PROG')
>>> parser.add_argument('--test', action='store_true')
>>> parser.add_argument('-m', dest='comment')
>>> parser.parse_args(['-m', '--test'])
usage: PROG [-h] [--test] [-m COMMENT]
PROG: error: argument -m: expected one argument

That's because argparse assumes that anything that looks like "--foo" is an option (unless it's after the pseudo-argument "--" on the command line). .. _argparse: http://argparse.python-hosting.com/ ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 10:58 Message: Logged In: YES user_id=984087 Originator: YES It is possible to deduce "--test" as an option because it is in the list of options given to optparse. But your point about what if the user really wants "--test" as an option argument is valid. I guess this request can be closed. Thanks, Raghu.
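The behaviour Bethard describes survives in the argparse that later landed in the standard library, and it also suggests one answer to the question about option arguments beginning with "-": pass the value in attached (`-m=value`) form. A minimal sketch reproducing the session above; PROG and the option names simply mirror that session:

```python
import argparse

parser = argparse.ArgumentParser(prog='PROG')
parser.add_argument('--test', action='store_true')
parser.add_argument('-m', dest='comment')

# A separate token that looks like an option is rejected as -m's argument:
try:
    parser.parse_args(['-m', '--test'])
    outcome = 'accepted'
except SystemExit:  # argparse prints "expected one argument" and exits
    outcome = 'rejected'

# An ordinary value is stored as usual:
ns = parser.parse_args(['--test', '-m', 'hello'])

# A value that starts with '-' can still be given in attached '=' form,
# which bypasses the looks-like-an-option check:
ns2 = parser.parse_args(['-m=--test'])
```

Here `outcome` is 'rejected', `ns.comment` is 'hello', and `ns2.comment` is the literal string '--test'.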
---------------------------------------------------------------------- Comment By: David Goodger (goodger) Date: 2007-01-05 09:28 Message: Logged In: YES user_id=7733 Originator: NO I think what you're asking for is ambiguous at best. In your example, how could optparse possibly decide that the "--test" is a second option, not an option argument? What if you *do* want "--test" as an argument? Assigning to Greg Ward. Recommend closing as invalid. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-05 08:19 Message: Logged In: YES user_id=984087 Originator: YES I am attaching the code fragment as a file since the indentation got all messed up in the original post. File Added: opttest.py ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1627266&group_id=5470 From noreply at sourceforge.net Thu Jan 11 19:30:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 10:30:18 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 11:17 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None >Priority: 7 Private: No Submitted By: dib (dib_at_work) >Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 13:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests.
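With the guard Hettinger describes in place (as in current CPython), the submitter's pattern works again: a subclass may take keyword arguments it handles itself, while plain set() still refuses them. A minimal sketch; NamedSet is an invented stand-in for the attached BuiltinSetKeywordArgumentsCheckBroken.py, which is not reproduced here:

```python
class NamedSet(set):
    """A set subclass that accepts its own keyword argument."""
    def __init__(self, iterable=(), name=None):
        set.__init__(self, iterable)  # no keywords reach the base class
        self.name = name

s = NamedSet([1, 2, 2], name='demo')  # sorted(s) == [1, 2], s.name == 'demo'

# The built-in base class alone still rejects keyword arguments:
try:
    set(iterable=[1, 2])
    base_error = None
except TypeError as exc:
    base_error = exc
```

The over-zealous pre-fix behaviour was exactly that the first call above failed too, even though NamedSet never forwards `name` to set.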
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-10 19:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution:

- if(!PyArg_NoKeywords("set()", kwds)
+ if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds)

---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 16:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have

if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable))

Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 15:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable.
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 12:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Thu Jan 11 20:56:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 11:56:30 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 16:17 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: dib (dib_at_work) Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails.
[Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 19:56 Message: Logged In: YES user_id=849994 Originator: NO Attaching patch. File Added: nokeywordchecks.diff ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 18:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 00:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution:

- if(!PyArg_NoKeywords("set()", kwds)
+ if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds)

---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 21:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have

if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable))

Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-06 02:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list().
---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-20 01:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 17:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Thu Jan 11 21:28:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 12:28:47 -0800 Subject: [ python-Bugs-1632328 ] logging.config.fileConfig doesn't clear logging._handlerList Message-ID: Bugs item #1632328, was opened at 2007-01-10 12:56 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Stefan H. Holek (shh42) Assigned to: Vinay Sajip (vsajip) Summary: logging.config.fileConfig doesn't clear logging._handlerList Initial Comment: logging.config.fileConfig resets logging._handlers but not logging._handlerList, resulting in tracebacks on shutdown. e.g.

Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/local/python2.4/lib/python2.4/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 1333, in shutdown
    h.close()
  File "/usr/local/python2.4/lib/python2.4/logging/__init__.py", line 674, in close
    del _handlers[self]
KeyError:

AFAICT this is fixed in Python 2.5 but has not been backported. Zope cannot use 2.5 as of yet. ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-11 20:28 Message: Logged In: YES user_id=308438 Originator: NO Yes - fix checked into release24-maint. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 12:27 Message: Logged In: YES user_id=849994 Originator: NO Does a backport to 2.4 make sense?
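On interpreters that carry the fix, reconfiguring via fileConfig and then shutting down no longer raises the KeyError shown above. A minimal sketch; the config text is invented for illustration, and passing a file-like object instead of a filename is a documented alternative on modern Pythons:

```python
import io
import logging
import logging.config

CONFIG = """
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=plain

[logger_root]
level=DEBUG
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=plain
args=(sys.stderr,)

[formatter_plain]
format=%(levelname)s:%(message)s
"""

logging.getLogger().addHandler(logging.StreamHandler())  # pre-existing handler
logging.config.fileConfig(io.StringIO(CONFIG))           # replaces it
logging.config.fileConfig(io.StringIO(CONFIG))           # reconfigure again
logging.shutdown()  # with the fix, no KeyError from stale handler entries
ok = True
```

The bug was precisely that the second configuration left stale entries behind, so the atexit shutdown tripped over already-removed handlers.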
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1632328&group_id=5470 From noreply at sourceforge.net Thu Jan 11 21:30:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 12:30:57 -0800 Subject: [ python-Bugs-1534765 ] logging's fileConfig causes KeyError on shutdown Message-ID: Bugs item #1534765, was opened at 2006-08-04 19:58 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: mdbeachy (mdbeachy) Assigned to: Vinay Sajip (vsajip) Summary: logging's fileConfig causes KeyError on shutdown Initial Comment: If logging.config.fileConfig() is called after logging handlers already exist, a KeyError is thrown in the atexit call to logging.shutdown(). This looks like it's fixed in the 2.5 branch but since I've bothered to figure out what was going on I'm sending this in anyway. There still might be a 2.4.4, right? (Also, my fix looks better than what was done for 2.5, but I suppose the flush/close I added may not be necessary.) Attached is a demo and a patch against 2.4.3. Thanks, Mike ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-11 20:30 Message: Logged In: YES user_id=308438 Originator: NO Fix for SF #1632328 should cover this - checked into release24-maint. ---------------------------------------------------------------------- Comment By: Matt Fleming (splitscreen) Date: 2006-08-09 14:10 Message: Logged In: YES user_id=1126061 Bug confirmed in release24-maint. 
Patch looks good to me, although I think the developers prefer unified diffs, not contextual, just to keep in mind for the future. And also, I had to manually patch the Lib/logging/config.py file because for some reason, the paths in your patch all use lowercase letters. Thanks for the patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470 From noreply at sourceforge.net Thu Jan 11 21:43:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 12:43:27 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 11:17 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: dib (dib_at_work) Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails.
[Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 15:43 Message: Logged In: YES user_id=80475 Originator: NO That looks about right. Please add test cases that fail without the patch and succeed with the patch. Also, put a comment in Misc/NEWS. If the whole test suite passes, go ahead and check-in to Py2.5.1 and the head. Thanks, Raymond ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 14:56 Message: Logged In: YES user_id=849994 Originator: NO Attaching patch. File Added: nokeywordchecks.diff ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 13:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-10 19:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution:

- if(!PyArg_NoKeywords("set()", kwds)
+ if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds)

---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 16:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have

if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable))

Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets?
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 21:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 15:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 12:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Thu Jan 11 22:31:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 13:31:37 -0800 Subject: [ python-Bugs-793764 ] pyconfig.h defines _POSIX_C_SOURCE, conflicting with feature Message-ID: Bugs item #793764, was opened at 2003-08-23 16:19 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=793764&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Installation Group: Python 2.3 Status: Closed Resolution: Invalid Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: pyconfig.h defines _POSIX_C_SOURCE, conflicting with feature Initial Comment: [forwarded from http://bugs.debian.org/206805] the installed include/python2.3/pyconfig.h defines _POSIX_C_SOURCE, which leaks down into packages built against python-2.3. AFAIK, _POSIX_C_SOURCE is reserved for use by the C library, and is of course defined in /usr/include/features.h. Example excerpt from a build log: In file included from /usr/include/python2.3/Python.h:8, from sg_config.h:22, from sg.h:29, from sg_project_autosave.c:26: /usr/include/python2.3/pyconfig.h:844:1: Warnung: "_POSIX_C_SOURCE" redefined In file included from /usr/include/stdlib.h:25, from sg_project_autosave.c:19: /usr/include/features.h:171:1: Warnung: this is the location of the previous definition ---------------------------------------------------------------------- >Comment By: Martin v. 
Löwis (loewis) Date: 2007-01-11 22:31 Message: Logged In: YES user_id=21627 Originator: NO Sure, and Python.h does that: it defines _POSIX_C_SOURCE before any (system) header is included. The problem is rather in SciGraphica, which includes Python.h in sg_config.h *after* including a system header. This is in violation of the Python API, as described in my initial message. ---------------------------------------------------------------------- Comment By: Matthias Klose (doko) Date: 2007-01-11 10:13 Message: Logged In: YES user_id=60903 Originator: YES "8. For the C programming language, shall define _POSIX_C_SOURCE to be 200112L before any header is included" -- _POSIX_C_SOURCE should be defined "before any header is included". That phrase was taken from the following comments: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=793764&group_id=5470 It's described at: Single Unix Specification 3: "2.2.1 Strictly Conforming POSIX Application", item 8. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2003-08-31 19:26 Message: Logged In: YES user_id=21627 _POSIX_C_SOURCE is not reserved for the C library, but for the application, see http://www.opengroup.org/onlinepubs/007904975/basedefs/xbd_chap02.html (section "Strictly Conforming POSIX Application") A conforming POSIX application *must* define _POSIX_C_SOURCE, so if your C library also defines it, it is a bug in the C library.
Most likely, the author failed to include Python.h before other system headers, as required per http://www.python.org/doc/current/api/includes.html ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=793764&group_id=5470 From noreply at sourceforge.net Thu Jan 11 23:16:23 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 14:16:23 -0800 Subject: [ python-Bugs-1630863 ] PyLong_AsLong doesn't check tp_as_number Message-ID: Bugs item #1630863, was opened at 2007-01-08 20:06 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roger Upole (rupole) >Assigned to: Martin v. Löwis (loewis) Summary: PyLong_AsLong doesn't check tp_as_number Initial Comment: Both PyInt_AsLong and PyLong_AsLongLong check if an object's type has PyNumberMethods defined. However, PyLong_AsLong does not, causing conversion to fail for objects which can legitimately be converted to a long.
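At the Python level, the tp_as_number (PyNumberMethods) slots the report refers to surface as methods like __int__ and __index__. A small sketch of the kind of object that is "legitimately convertible"; the Handle class is invented purely for illustration:

```python
import operator

class Handle:
    """Invented example: a type whose number-protocol slots are populated."""
    def __init__(self, value):
        self.value = value
    def __index__(self):  # nb_index: lossless integer conversion
        return self.value
    def __int__(self):    # nb_int: int() conversion
        return self.value

h = Handle(42)
i = int(h)              # converted via the number protocol
j = operator.index(h)   # likewise, via __index__
```

The bug is that a C extension calling PyLong_AsLong on such an object failed, while the sibling PyInt_AsLong and PyLong_AsLongLong routes consulted these slots and succeeded.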
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 From noreply at sourceforge.net Thu Jan 11 23:40:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 14:40:24 -0800 Subject: [ python-Bugs-1633583 ] Hangs with 100% CPU load for certain regexes Message-ID: Bugs item #1633583, was opened at 2007-01-11 23:40 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633583&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Regular Expressions Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Gustavo Niemeyer (niemeyer) Summary: Hangs with 100% CPU load for certain regexes Initial Comment: [forwarded from http://bugs.debian.org/401676] seen with 2.4.4 and 2.5 20061209; bug submitter writes: Hi, https://develop.participatoryculture.org/democracy/attachment/ticket/3947/crash.py is a small test program which causes a complete hangup for at least minutes (I aborted after a while) on my system, with 100% CPU load. The regex code seems to run into some endless loop or something... 
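crash.py itself is not reproduced here, but hangs of this kind are typically catastrophic backtracking rather than a true endless loop: a pattern with nested quantifiers forces the engine to try exponentially many ways to split the input before it can report failure. A tiny illustration with an invented pattern, kept small enough to finish instantly:

```python
import re

# Nested quantifiers: the engine explores every way to partition the 'a's
# between the inner and outer repetition before giving up.
pathological = re.compile(r'(a+)+$')
m1 = pathological.match('a' * 15 + 'b')  # None, after exponentially many steps

# The same strings can be rejected without nested quantifiers, in linear time:
fast = re.compile(r'a+$')
m2 = fast.match('a' * 5000 + 'b')        # None, immediately
```

Each additional 'a' roughly doubles the work for the pathological pattern, which is why a modest input can appear to hang forever at 100% CPU.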
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633583&group_id=5470 From noreply at sourceforge.net Thu Jan 11 23:59:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 14:59:32 -0800 Subject: [ python-Bugs-1633600 ] using locale does not display the intended behavior Message-ID: Bugs item #1633600, was opened at 2007-01-11 23:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633600&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: using locale does not display the intended behavior Initial Comment: [forwarded from http://bugs.debian.org/405618] the locales are available on the system; the string.lowercase constant doesn't change. bug submitter writes: Hello, if I interpret correctly http://docs.python.org/lib/node746.html the characters '?', '?' and so on should be members of string.lowercase when the locale is set on a french one. But as you can see here this is not the case: % python Python 2.4.4 (#2, Oct 20 2006, 00:23:25) [GCC 4.1.2 20061015 (prerelease) (Debian 4.1.1-16.1)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import locale
>>> locale.setlocale(locale.LC_ALL, '')
'LC_CTYPE=fr_BE.UTF-8;LC_NUMERIC=fr_BE.UTF-8;LC_TIME=fr_BE.UTF-8;LC_COLLATE=C;LC_MONETARY=fr_BE.UTF-8;LC_MESSAGES=fr_BE.UTF-8;LC_PAPER=fr_BE.UTF-8;LC_NAME=fr_BE.UTF-8;LC_ADDRESS=fr_BE.UTF-8;LC_TELEPHONE=fr_BE.UTF-8;LC_MEASUREMENT=fr_BE.UTF-8;LC_IDENTIFICATION=fr_BE.UTF-8'
>>> import string
>>> string.lowercase
'abcdefghijklmnopqrstuvwxyz'

I also tried to import string before the setlocale call or before the import locale call but it did not work either. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633600&group_id=5470 From noreply at sourceforge.net Fri Jan 12 00:06:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 15:06:53 -0800 Subject: [ python-Bugs-1633605 ] logging module / wrong bytecode? Message-ID: Bugs item #1633605, was opened at 2007-01-12 00:06 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: logging module / wrong bytecode?
Initial Comment: [forwarded from http://bugs.debian.org/390152] seen with python2.4 and python2.5 on debian unstable

import logging
logging.basicConfig(level=logging.DEBUG, format='%(pathname)s:%(lineno)d')
logging.info('whoops')

The output when the logging/__init__.pyc file exists is:
logging/__init__.py:1072
and when the __init__.pyc is deleted the output becomes:
tst.py:5
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 From noreply at sourceforge.net Fri Jan 12 00:20:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 15:20:33 -0800 Subject: [ python-Bugs-1630863 ] PyLong_AsLong doesn't check tp_as_number Message-ID: Bugs item #1630863, was opened at 2007-01-08 21:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roger Upole (rupole) Assigned to: Martin v. Löwis (loewis) Summary: PyLong_AsLong doesn't check tp_as_number Initial Comment: Both PyInt_AsLong and PyLong_AsLongLong check if an object's type has PyNumberMethods defined. However, PyLong_AsLong does not, causing conversion to fail for objects which can legitimately be converted to a long. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-12 00:20 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. If you want to convert arbitrary objects to long, use PyInt_AsLong.
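For readers unfamiliar with the C-API detail being debated: the tp_as_number slot backs the Python-level number protocol. A rough pure-Python analogue of "an object which can legitimately be converted to a long" is a type that is not an int but fills that slot via __int__ — the Handle class below is a hypothetical illustration, not code from the report.

```python
# Pure-Python analogue of the C-level issue: a type that is not an int but
# can legitimately be converted to one through the number protocol
# (tp_as_number->nb_int at the C level, __int__ at the Python level).
class Handle:
    """Hypothetical wrapper around an integer id, for illustration only."""
    def __init__(self, value):
        self._value = value

    def __int__(self):
        return self._value

h = Handle(42)
print(int(h))  # int() consults the number protocol and succeeds
```

This is the conversion path the report says PyInt_AsLong and PyLong_AsLongLong honor but PyLong_AsLong does not.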
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 From noreply at sourceforge.net Fri Jan 12 00:38:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 15:38:27 -0800 Subject: [ python-Bugs-1633621 ] curses should reset curses.{COLS, LINES} when term. resized Message-ID: Bugs item #1633621, was opened at 2007-01-12 00:38 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633621&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 3 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: curses should reset curses.{COLS,LINES} when term. resized Initial Comment: [forwarded from http://bugs.debian.org/366698] The curses module does not reset curses.COLS and curses.LINES when the terminal is resized. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633621&group_id=5470 From noreply at sourceforge.net Fri Jan 12 00:44:16 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 15:44:16 -0800 Subject: [ python-Bugs-1633628 ] time.strftime() accepts format which time.strptime doesnt Message-ID: Bugs item #1633628, was opened at 2007-01-12 00:44 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: time.strftime() accepts format which time.strptime doesnt Initial Comment: [forwarded from http://bugs.debian.org/354636] time.strftime() accepts '%F %T' as format but time.strptime() doesn't; if the rule is "all that strftime accepts, strptime must accept also", then that is bad. Check this:

darwin:~# python2.4
Python 2.4.2 (#2, Nov 20 2005, 17:04:48)
[GCC 4.0.3 20051111 (prerelease) (Debian 4.0.2-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> format = '%F %T'
>>> t = time.strftime(format)
>>> t
'2006-02-27 18:09:37'
>>> time.strptime(t,format)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/usr/lib/python2.4/_strptime.py", line 287, in strptime
    format_regex = time_re.compile(format)
  File "/usr/lib/python2.4/_strptime.py", line 264, in compile
    return re_compile(self.pattern(format), IGNORECASE)
  File "/usr/lib/python2.4/_strptime.py", line 256, in pattern
    processed_format = "%s%s%s" % (processed_format,
KeyError: 'F'
>>>

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 From noreply at sourceforge.net Fri Jan 12 00:49:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 15:49:10 -0800 Subject: [ python-Bugs-1633630 ] class derived from float evaporates under += Message-ID: Bugs item #1633630, was opened at 2007-01-12 00:49 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Type/class unification Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: class derived from float evaporates under += Initial Comment: [forwarded from http://bugs.debian.org/345373] There seems to be a bug in classes derived from float. For instance, consider the following:

>>> class Float(float):
...     def __init__(self, v):
...         float.__init__(self, v)
...         self.x = 1
...
>>> a = Float(2.0)
>>> b = Float(3.0)
>>> type(a)
<class '__main__.Float'>
>>> type(b)
<class '__main__.Float'>
>>> a += b
>>> type(a)
<type 'float'>

Now, the type of a has silently changed. It was a Float, a derived class with all kinds of properties, and it became a float -- a plain vanilla number. My understanding is that this is incorrect, and certainly unexpected.
If it *is* correct, it certainly deserves mention somewhere in the documentation. It seems that Float.__iadd__(a, b) should be called. This defaults to float.__iadd__(a, b), which should increment the float part of the object while leaving the rest intact. A possible explanation for this problem is that float.__iadd__ is not actually defined, and so it falls through to a = float.__add__(a, b), which assigns a float to a. This interpretation seems to be correct, as one can add a destructor to the Float class:

>>> class FloatD(float):
...     def __init__(self, v):
...         float.__init__(self, v)
...         self.x = 1
...     def __del__(self):
...         print 'Deleting FloatD class, losing x=', self.x
...
>>> a = FloatD(2.0)
>>> b = FloatD(3.0)
>>> a += b
Deleting FloatD class, losing x= 1
>>>

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 From noreply at sourceforge.net Fri Jan 12 01:18:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 16:18:26 -0800 Subject: [ python-Bugs-1633648 ] incomplete numerical comparisons Message-ID: Bugs item #1633648, was opened at 2007-01-12 01:18 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633648&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: incomplete numerical comparisons Initial Comment: [forwarded from http://bugs.debian.org/334022] bug submitter writes: I've been tracking down the regression failure in python-pgsql under python2.[45], and here's what it comes down to. Python-pgsql includes a short int type named PgInt2, which allows itself to be coerced into all of the usual numeric types. The regression that fails is when a PgInt2 is compared with a float. In this case python determines that the comparison is not implemented. The problem is this: - Under python2.[45], the float type includes tp_richcompare but not tp_compare. - When calling try_rich_to_3way_compare(), python does not try any kind of numeric coercion, and so the comparison fails. - When calling try_3way_compare(), python successfully coerces the PgInt2 into a float, but then the comparison fails because the float type has no tp_compare routine. Presumably what would fix things would be one of the following: - Bring back the trivial float_compare() routine, which was removed with python2.[45] when they brought in the new float_richcompare() instead; - In try_3way_compare(), if the coercion succeeds and neither object has a tp_compare routine, try tp_richcompare before failing completely. Does either of these solutions seem feasible? Thanks - Ben. 
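At the pure-Python level, the coercion-versus-rich-comparison gap Ben describes can be avoided entirely by having the extension type implement rich comparisons itself and return NotImplemented for operands it does not recognize. The ShortInt class below is a hypothetical stand-in sketched for illustration — it is not PgInt2's actual implementation.

```python
# Hedged sketch: a short-int-like wrapper that interoperates with floats via
# rich comparison methods instead of the old coercion machinery. Returning
# NotImplemented lets Python fall back to the other operand's reflected
# method, which is exactly what the coercion path fails to do here.
class ShortInt:
    def __init__(self, value):
        self.value = int(value)

    def __eq__(self, other):
        if isinstance(other, (int, float)):
            return self.value == other
        return NotImplemented

    def __lt__(self, other):
        if isinstance(other, (int, float)):
            return self.value < other
        return NotImplemented

    def __gt__(self, other):
        if isinstance(other, (int, float)):
            return self.value > other
        return NotImplemented

assert ShortInt(3) < 3.5    # our __lt__ handles the float directly
assert 2.5 < ShortInt(3)    # float gives up; Python tries ShortInt.__gt__
assert ShortInt(4) == 4.0
```

The second assertion shows the reflected-method fallback: float.__lt__ returns NotImplemented for an unknown type, so the interpreter calls ShortInt.__gt__ with the operands swapped.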
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633648&group_id=5470 From noreply at sourceforge.net Fri Jan 12 01:50:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 16:50:33 -0800 Subject: [ python-Bugs-1633665 ] file(file) should succeed Message-ID: Bugs item #1633665, was opened at 2007-01-12 01:50 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633665&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 3 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: file(file) should succeed Initial Comment: [forwarded from http://bugs.debian.org/327060] Many types in Python are idempotent, so that int(1) works as expected, float(2.34)==2.34, ''.join('hello')=='hello' et cetera. Why not file()? Currently, file(open(something, 'r')) fails with "TypeError: coercing to Unicode: need string or buffer, file found." Semantically, file(fd) should be equivalent to os.fdopen(fd.fileno()) or the proposed file.fromfd() (Jp Calderone, Python-dev, 2003). You should get another independent file object that accesses the same file. What would be gained? Primarily, it would allow you to derive classes from file more easily. At present, if you want to derive like so, your class can only work when passed a file name or buffer.

class file_with_caching(file):
    def __init__(self, something):
        file.__init__(self, something)
    def etcetera...

For instance, you have no way of creating a file_with_caching() object from the file descriptors returned from os.fork().
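The os.fdopen() route mentioned above does work today as a workaround: duplicate the descriptor with os.dup() and wrap the copy, which yields an independent file object over the same underlying file. A minimal sketch in modern Python (where the file type no longer exists and open() plays its role; the temp-file setup is just scaffolding for the demo):

```python
import os
import tempfile

# Scratch file for the demonstration.
with tempfile.NamedTemporaryFile('w', delete=False) as f:
    f.write('hello\n')
    path = f.name

first = open(path)
# Duplicate the descriptor and wrap the copy: `second` is an independent
# file object over the same file, with its own lifetime.
second = os.fdopen(os.dup(first.fileno()))
first.close()        # closing one does not invalidate the other
data = second.read()
second.close()
os.remove(path)
print(data)
```

This is essentially what the report wants file(fd) to do in one step.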
Also, you have no way of taking a file that is already open, and creating a file_with_caching() object from it. So, you can't use classes derived from file() on the standard input or standard output. This breaks the nice Linux OS-level definition of a file descriptor. At the Linux level, you have a nice uniform interface where all file descriptors are equally good. At the python level, some are better than others. It's a case where Python unnecessarily restricts what you can do. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633665&group_id=5470 From noreply at sourceforge.net Fri Jan 12 02:14:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 17:14:09 -0800 Subject: [ python-Bugs-1633678 ] mailbox.py _fromlinepattern regexp does not support positive Message-ID: Bugs item #1633678, was opened at 2007-01-12 02:14 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: mailbox.py _fromlinepattern regexp does not support positive Initial Comment: [forwarded from http://bugs.debian.org/254757] mailbox.py _fromlinepattern regexp does not support positive GMT offsets. the pattern didn't change in 2.5. bug submitter writes: archivemail incorrectly splits up messages in my mbox-format mail archives.
I use Squirrelmail, which seems to create mbox lines that look like this: >From mangled at clarke.tinyplanet.ca Mon Jan 26 12:29:24 2004 -0400 The "-0400" appears to be throwing it off. If the first message of an mbox file has such a line on it, archivemail flat out stops, saying the file is not mbox. If the later messages in an mbox file are in this style, they are not counted, and archivemail thinks that the preceding message is just kind of long, and the decision to archive or not is broken. I have stumbled on this bug when I wanted to archive my mails on a Sarge system. And since my TZ is positive, the regexp did not work. I think the correct regexp for /usr/lib/python2.3/mailbox.py should be: _fromlinepattern = r"From \s*[^\s]+\s+\w\w\w\s+\w\w\w\s+\d?\d\s+" \ r"\d?\d:\d\d(:\d\d)?(\s+[^\s]+)?\s+\d\d\d\d\s*((\+|-)\d\d\d\d)?\s*$" This should handle positive and negative timezones in From lines. I have tested it successfully with an email beginning with this line: >From fred at athena.olympe.fr Mon May 31 13:24:50 2004 +0200 as well as one withouth TZ reference. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 From noreply at sourceforge.net Fri Jan 12 02:42:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 17:42:43 -0800 Subject: [ python-Bugs-1633583 ] Hangs with 100% CPU load for certain regexes Message-ID: Bugs item #1633583, was opened at 2007-01-11 22:40 Message generated for change (Comment added) made by niemeyer You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633583&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Regular Expressions Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Gustavo Niemeyer (niemeyer) Summary: Hangs with 100% CPU load for certain regexes Initial Comment: [forwarded from http://bugs.debian.org/401676] seen with 2.4.4 and 2.5 20061209; bug submitter writes: Hi, https://develop.participatoryculture.org/democracy/attachment/ticket/3947/crash.py is a small test program which causes a complete hangup for at least minutes (I aborted after a while) on my system, with 100% CPU load. The regex code seems to run into some endless loop or something... ---------------------------------------------------------------------- >Comment By: Gustavo Niemeyer (niemeyer) Date: 2007-01-12 01:42 Message: Logged In: YES user_id=7887 Originator: NO Hello Matthias, It's well known that certain regular expressions can match in exponential time. Try that for instance: re.match("(((a+?)+?)+?b)", "a"*100+"c") There are ways to optimize simple cases like this (which aren't present in Python right now), but there isn't a way to truly "solve" the exponential time backtracking for all cases while still offering the current feature set. 
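Gustavo's point can be reproduced safely by shrinking the input: with nested quantifiers, each additional 'a' roughly doubles the number of ways the engine can partition the run before concluding that no 'b' follows. A small sketch (the input length is deliberately kept tiny so the call returns quickly):

```python
import re

# Scaled-down reproduction of the pathological case from this report.
# The nested non-greedy groups force the backtracking engine to try an
# exponential number of partitions of the 'a' run before the required 'b'
# fails to match, so the match correctly returns None -- eventually.
pattern = re.compile(r'(((a+?)+?)+?b)')
result = pattern.match('a' * 16 + 'c')
print(result)  # None, after ~2**16 backtracking states
# With 'a' * 100, as in the original report, the same call effectively
# never returns -- that is the 100% CPU hang being described.
```

The practical advice that follows from the comment: either rewrite such patterns to remove the nested quantifiers, or bound the input, since the engine itself offers no backtracking limit.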
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633583&group_id=5470 From noreply at sourceforge.net Fri Jan 12 02:58:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 17:58:20 -0800 Subject: [ python-Bugs-1025525 ] asyncore.file_dispatcher should not take fd as argument Message-ID: Bugs item #1025525, was opened at 2004-09-10 12:14 Message generated for change (Comment added) made by dhoulder You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: david houlder (dhoulder) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.file_dispatcher should not take fd as argument Initial Comment: Only relevant to posix. asyncore.file_dispatcher closes the file descriptor behind the file object, and not the file object itself. When another file gets opened, it gets the next available fd, which on posix, is the one just released by the close. Tested on python 2.2.3 on RedHat Enterprise Linux 3 and python 2.2.1 on HP Tru64 unix. See attached script for details and a solution. 'case 1' should show the problem regardless of the garbage collection strategy in python. 'case 2' relies on the file object being closed as soon as the last reference to it disappears, which seems to be the (current?) behaviour. [djh900 at dh djh900]$ python file_dispatcher_bug.py case 1: (Read 'I am the first pipe\n' from pipe) (pipe closing. fd== 3 ) (Read '' from pipe) firstPipe.read() says 'I am the second pipe\n' firstPipe.fileno()== 3 secondPipe.fileno()== 3 case 2: (Read 'I am the first pipe\n' from pipe) (pipe closing. 
fd== 3 ) (Read '' from pipe) secondPipe.fileno()== 3 dispatcher.secondPipe.read() says Traceback (most recent call last): File "file_dispatcher_bug.py", line 77, in ? print "dispatcher.secondPipe.read() says", repr(dispatcher.secondPipe.read()) IOError: [Errno 9] Bad file descriptor [djh900 at dh djh900]$ ---------------------------------------------------------------------- >Comment By: david houlder (dhoulder) Date: 2007-01-12 12:58 Message: Logged In: YES user_id=1119185 Originator: YES Yep, dup()ing the fd and using that for the lifetime of the object sounds like a good, simple fix. Wish I'd thought of it :-) ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-07 09:48 Message: Logged In: YES user_id=341410 Originator: NO I believe that asyncore.file_dispatcher taking a file descriptor is fine. The problem is that the documentation doesn't suggest that you os.dup() the file handle so that both the original handle (from a pipe, file, etc.) can be closed independently from the one being used by the file_dispatcher. In the case of socket.makefile(), the duplication is done automatically, so there isn't the same problem. My suggested fix would be to accept a file or a file handle. For files, we first get its file number via the standard f.fileno(), and with that, or the handle we are provided, we os.dup() the handle. ---------------------------------------------------------------------- Comment By: david houlder (dhoulder) Date: 2004-11-18 10:43 Message: Logged In: YES user_id=1119185 In an ideal world I'd propose replacing the guts of file_wrapper() and file_dispatcher() by my pipe_wrapper() and PipeDispatcher(), since the general problem of closing the file descriptor behind the python object applies to all python objects that are based on a file descriptor, not just pipes. So, yes, probably best not to call it pipe_dispatcher(). 
And I guess file_dispatcher() may be in use by other people's code and changing it to take a file object rather than an fd will break that. Maybe file_dispatcher.__init__() could be changed to take either an integer file descriptor or a file object as its argument, and behave like the current file_dispatcher() when given an fd, and like pipe_dispatcher() when given a file-like object (i.e. any object with fileno() and close() methods will probably be enough). I'm happy to whip up an example if people think that's a good idea. ---------------------------------------------------------------------- Comment By: Jeremy Hylton (jhylton) Date: 2004-11-08 02:23 Message: Logged In: YES user_id=31392 I'm not sure whether you propose a change to asyncore or are describing a pattern that allows you to use a pipe with it safely. And, looking at your code more closely, I think pipe is confusing, because you're not talking about os.pipe() right? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1025525&group_id=5470 From noreply at sourceforge.net Fri Jan 12 03:11:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 18:11:55 -0800 Subject: [ python-Bugs-1630863 ] PyLong_AsLong doesn't check tp_as_number Message-ID: Bugs item #1630863, was opened at 2007-01-08 15:06 Message generated for change (Comment added) made by rupole You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roger Upole (rupole) Assigned to: Martin v.
Löwis (loewis) Summary: PyLong_AsLong doesn't check tp_as_number Initial Comment: Both PyInt_AsLong and PyLong_AsLongLong check if an object's type has PyNumberMethods defined. However, PyLong_AsLong does not, causing conversion to fail for objects which can legitimately be converted to a long. ---------------------------------------------------------------------- >Comment By: Roger Upole (rupole) Date: 2007-01-11 21:11 Message: Logged In: YES user_id=771074 Originator: YES The problem is that the conversion fails when it should succeed. The place I ran into this was in PyLong_AsVoidPtr, which I can't change. Are you saying that PyLong_AsLong is deprecated, and should never be used ? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-11 18:20 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. If you want to convert arbitrary objects to long, use PyInt_AsLong. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 From noreply at sourceforge.net Fri Jan 12 07:04:16 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 22:04:16 -0800 Subject: [ python-Bugs-1504333 ] sgmllib should allow angle brackets in quoted values Message-ID: Bugs item #1504333, was opened at 2006-06-11 05:58 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1504333&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sam Ruby (rubys) Assigned to: Nobody/Anonymous (nobody) Summary: sgmllib should allow angle brackets in quoted values Initial Comment: Real live example (search for "other
corrections") http://latticeqcd.blogspot.com/2006/05/non-relativistic-qcd.html This addresses the following (included in the file): # XXX The following should skip matching quotes (' or ") ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-11 22:04 Message: Logged In: YES user_id=33168 Originator: NO You should be able to check yourself. Use the current version of Python, apply the test case from the original patch and your patch to the code. If the test passes, I'll be happy to check in the fix. If that does work, please create a new patch with your code and the test case from the original patch. ---------------------------------------------------------------------- Comment By: Haejoong Lee (haepal) Date: 2007-01-11 10:01 Message: Logged In: YES user_id=135609 Originator: NO Could someone check if the following patch fixes the problem? This patch was made against revision 51854. --- sgmllib.py.org 2006-11-06 02:31:12.000000000 -0500 +++ sgmllib.py 2007-01-11 12:39:30.000000000 -0500 @@ -16,6 +16,35 @@ # Regular expressions used for parsing +class MyMatch: + def __init__(self, i): + self._i = i + def start(self, i): + return self._i + +class EndBracket: + def search(self, data, index): + s = data[index:] + bs = None + quote = None + for i,c in enumerate(s): + if bs: + bs = False + else: + if c == '<' or c == '>': + if quote is None: + break + elif c == "'" or c == '"': + if c == quote: + quote = None + else: + quote = c + elif c == '\\': + bs = True + else: + return None + return MyMatch(i+index) + interesting = re.compile('[&<]') incomplete = re.compile('&([a-zA-Z][a-zA-Z0-9]*|#[0-9]*)?|' '<([a-zA-Z][^<>]*|' @@ -29,7 +58,8 @@ shorttagopen = re.compile('<[a-zA-Z][-.a-zA-Z0-9]*/') shorttag = re.compile('<([a-zA-Z][-.a-zA-Z0-9]*)/([^/]*)/') piclose = re.compile('>') -endbracket = re.compile('[<>]') +#endbracket = re.compile('[<>]') +endbracket = EndBracket() tagfind = 
re.compile('[a-zA-Z][-_.a-zA-Z0-9]*') attrfind = re.compile( r'\s*([a-zA-Z_][-:.a-zA-Z_0-9]*)(\s*=\s*' ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-09-10 21:26 Message: Logged In: YES user_id=33168 I reverted the patch and added the test case for sgml so the infinite loop doesn't recur. This was mentioned several times on python-dev. Committed revision 51854. (head) Committed revision 51850. (2.5) Committed revision 51853. (2.4) ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2006-06-29 10:17 Message: Logged In: YES user_id=3066 I checked in a modified version of this patch: changed to use separate REs for start and end tags to reduce matching cost for end tags; extended tests; updated to avoid breaking previous changes to support IPv6 addresses in unquoted attribute values. Committed as revisions 47154 (trunk) and 47155 (release24-maint). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1504333&group_id=5470 From noreply at sourceforge.net Fri Jan 12 08:46:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 11 Jan 2007 23:46:04 -0800 Subject: [ python-Bugs-1630863 ] PyLong_AsLong doesn't check tp_as_number Message-ID: Bugs item #1630863, was opened at 2007-01-08 21:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Roger Upole (rupole) Assigned to: Martin v. 
Löwis (loewis) Summary: PyLong_AsLong doesn't check tp_as_number Initial Comment: Both PyInt_AsLong and PyLong_AsLongLong check if an object's type has PyNumberMethods defined. However, PyLong_AsLong does not, causing conversion to fail for objects which can legitimately be converted to a long. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-12 08:46 Message: Logged In: YES user_id=21627 Originator: NO No, I'm saying that PyLong_AsVoidPtr is guaranteed to convert ints and longs, nothing else. Likewise, PyLong_AsVoidPtr is only supported for int and long objects (read the documentation). It's not deprecated - but it should only be used for the cases which it is documented to support. If, for some reason, you want to convert an object that is not an int or long into a void*, by converting it to an int first, you need to invoke the number methods first yourself. Closing this report as invalid. ---------------------------------------------------------------------- Comment By: Roger Upole (rupole) Date: 2007-01-12 03:11 Message: Logged In: YES user_id=771074 Originator: YES The problem is that the conversion fails when it should succeed. The place I ran into this was in PyLong_AsVoidPtr, which I can't change. Are you saying that PyLong_AsLong is deprecated, and should never be used ? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-12 00:20 Message: Logged In: YES user_id=21627 Originator: NO I fail to see the problem. If you want to convert arbitrary objects to long, use PyInt_AsLong.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630863&group_id=5470 From noreply at sourceforge.net Fri Jan 12 09:46:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 00:46:57 -0800 Subject: [ python-Bugs-1633863 ] AIX: configure ignores $CC; problems with C++ comments Message-ID: Bugs item #1633863, was opened at 2007-01-12 09:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: AIX: configure ignores $CC; problems with C++ comments Initial Comment: CC=xlc_r ./configure does not work on AIX-5.1, because configure unconditionally sets $CC to "cc_r":

case $ac_sys_system in
AIX*)   CC=cc_r
        without_gcc=;;

It would be better to leave $CC and just add "-qthreaded" to $CFLAGS. Furthermore, much of the C source code of Python uses C++/C99 comments. This is an error with the standard AIX compiler. Please add the compiler flag "-qcpluscmt". An alternative would be to use a default of "xlc_r" for CC on AIX. This calls the compiler in a mode that both accepts C++ comments and generates reentrant code.
Regards, Johannes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 From noreply at sourceforge.net Fri Jan 12 10:08:40 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 01:08:40 -0800 Subject: [ python-Bugs-1534765 ] logging's fileConfig causes KeyError on shutdown Message-ID: Bugs item #1534765, was opened at 2006-08-05 05:58 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: mdbeachy (mdbeachy) Assigned to: Vinay Sajip (vsajip) Summary: logging's fileConfig causes KeyError on shutdown Initial Comment: If logging.config.fileConfig() is called after logging handlers already exist, a KeyError is thrown in the atexit call to logging.shutdown(). This looks like it's fixed in the 2.5 branch but since I've bothered to figure out what was going on I'm sending this in anyway. There still might be a 2.4.4, right? (Also, my fix looks better than what was done for 2.5, but I suppose the flush/close I added may not be necessary.) Attached is a demo and a patch against 2.4.3. Thanks, Mike ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2007-01-12 20:08 Message: Logged In: YES user_id=29957 Originator: NO Note, though, that there's no planned "next release" of 2.4. 2.4 is in "emergency security fix mode" - that is, unless someone finds a critical security problem, I don't plan to make further 2.4.x releases. 
If someone else wants to volunteer, of course, that's entirely fine. ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-12 07:30 Message: Logged In: YES user_id=308438 Originator: NO Fix for SF #1632328 should cover this - checked into release24-maint. ---------------------------------------------------------------------- Comment By: Matt Fleming (splitscreen) Date: 2006-08-10 00:10 Message: Logged In: YES user_id=1126061 Bug confirmed in release24-maint. Patch looks good to me, although I think the developers prefer unified diffs, not contextual, just to keep in mind for the future. And also, I had to manually patch the Lib/logging/config.py file because for some reason, the paths in your patch all use lowercase letters. Thanks for the patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1534765&group_id=5470 From noreply at sourceforge.net Fri Jan 12 10:10:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 01:10:26 -0800 Subject: [ python-Bugs-1467929 ] %-formatting and dicts Message-ID: Bugs item #1467929, was opened at 2006-04-11 05:39 Message generated for change (Comment added) made by anthonybaxter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1467929&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 8 Private: No Submitted By: M.-A. Lemburg (lemburg) Assigned to: Anthony Baxter (anthonybaxter) Summary: %-formatting and dicts Initial Comment: This looks like a bug in the way the %-formatting code works or is it a feature ? 
>>> '%s %(a)s' % {'a': 'xyz'} "{'a': 'xyz'} xyz" >>> u'%s %(a)s' % {'a': 'xyz'} u"{'a': 'xyz'} xyz" Note that both strings and Unicode are affected. Python 2.3 and 2.4 also show this behavior. I would have expected an exception or the %-formatter simply ignoring the first %s. ---------------------------------------------------------------------- >Comment By: Anthony Baxter (anthonybaxter) Date: 2007-01-12 20:10 Message: Logged In: YES user_id=29957 Originator: NO I'm happy for this to be applied for 2.5.1. I don't have time to do it myself for a few days, though, so feel free to beat me to it. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-08-19 15:25 Message: Logged In: YES user_id=849994 I'd say before 2.5 final... ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2006-08-19 08:17 Message: Logged In: YES user_id=38388 Should this patch be applied to the 2.5 branch ? And if so, before or after the release of 2.5 ? ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2006-08-19 08:01 Message: Logged In: YES user_id=764593 Just a bit of encouragement for checking consistency like this; the explicit error message would have helped with a mistake I made earlier today. For one of several keys, I mistyped it as "(%key)s", and a message about "not enough values" just didn't make sense. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2006-08-04 23:26 Message: Logged In: YES user_id=38388 The patch looks OK. I'd make it a TypeError and use "cannot use positional and named formatting parameters at the same time" as message. Thanks. 
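For reference, the behaviour under discussion can be reproduced in a few lines. This is an editor's sketch in Python 3 syntax with plain str only (the u'' variant from the report is Python 2); the outcome of the mixed case has varied across versions and proposals, so it is guarded rather than asserted:

```python
# Sketch of the %-formatting ambiguity discussed above: with a dict on
# the right-hand side, a named specifier does a key lookup, while a
# plain '%s' formats the entire dict.
d = {'a': 'xyz'}

assert '%(a)s' % d == 'xyz'          # named: looks up d['a']
assert '%s' % d == "{'a': 'xyz'}"    # plain: str() of the whole dict

# Mixing the two is the case the thread proposes rejecting outright.
# Historically it silently produced "{'a': 'xyz'} xyz"; guard for the
# possibility that a given interpreter raises instead.
try:
    mixed = '%s %(a)s' % d
except (TypeError, ValueError) as exc:
    mixed = 'rejected: %s' % exc
print(mixed)
```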
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2006-08-04 23:21 Message: Logged In: YES user_id=44345 Looks okay to me, though why is the FORMAT_TYPE_UNKNOWN test necessary in the second case but not the first? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-07-25 22:35 Message: Logged In: YES user_id=11375 So, what should '%s' % {} do? Should it be 1) '{}' or 2) an error because the argument is a mapping but the format specifier doesn't have a '(key)'? I've attached a draft patch that fixes stringobject.c; if the approach is deemed OK, I'll apply it to unicodeobject.c, too. PyString_Format() records the type of argument being processed (a tuple or a mapping) and raises ValueError if you mix them, at the cost of two extra comparisons for each format specifier processed. This preserves the current behaviour of '%s' % dictionary. Questions: 1) is the approach reasonably clear? 2) are the additional two comparisons unacceptably slow? 3) Is ValueError the right exception? 4) can someone come up with a better exception message than "both keyed and unkeyed format specifiers used"? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-07-25 06:07 Message: Logged In: YES user_id=21627 IMO, it's correct to break backwards compatibility, as the current behaviour clearly violates the spec; I'm not sure whether it's good to break the behaviour *now* (i.e. with no further betas before the release of 2.5). Deferring to the release manager. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-07-24 23:37 Message: Logged In: YES user_id=849994 The library ref specifies that if a dict is supplied, the format specifiers MUST include a mapping key, so the right thing to do would be to raise an exception.
Is it worth breaking backwards compatibility, Martin? ---------------------------------------------------------------------- Comment By: Hasan Diwan (hdiwan650) Date: 2006-04-14 18:33 Message: Logged In: YES user_id=1185570 It looks as though it's a feature... The first %s will print the whole dictionary as a string, the second, only that value looked up by the key. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1467929&group_id=5470 From noreply at sourceforge.net Fri Jan 12 11:34:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 02:34:13 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 11:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from tty. The test program: import sys for line in sys.stdin: print line, print "eof" A sample session: liw at esme$ python foo.py foo <--- I pressed Enter and then Ctrl-D foo <--- then this appeared, but not more eof <--- this only came when I pressed Ctrl-D a second time liw at esme$ Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. 
Once should be enough. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Fri Jan 12 11:45:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 02:45:14 -0800 Subject: [ python-Bugs-1633953 ] re.compile("(.*$){1,4}", re.MULTILINE) fails Message-ID: Bugs item #1633953, was opened at 2007-01-12 11:45 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633953&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Regular Expressions Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Gustavo Niemeyer (niemeyer) Summary: re.compile("(.*$){1,4}", re.MULTILINE) fails Initial Comment: [forwarded from http://bugs.debian.org/289603] Trying to match 1-4 lines of arbitrary content (as part of a larger regex) using the expression (.*$){1,4} and re.MULTILINE. This caused the re module to raise the error "nothing to repeat". $ python2.5 Python 2.5 (release25-maint, Dec 13 2006, 16:21:45) [GCC 4.1.2 20061212 (prerelease) (Ubuntu 4.1.1-21ubuntu2)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> import re >>> re.compile("(.*$){1,4}", re.MULTILINE) Traceback (most recent call last): File "", line 1, in File "/usr/lib/python2.5/re.py", line 180, in compile return _compile(pattern, flags) File "/usr/lib/python2.5/re.py", line 233, in _compile raise error, v # invalid expression sre_constants.error: nothing to repeat ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633953&group_id=5470 From noreply at sourceforge.net Fri Jan 12 12:47:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 03:47:58 -0800 Subject: [ python-Bugs-1633863 ] AIX: configure ignores $CC; problems with C++ style comments Message-ID: Bugs item #1633863, was opened at 2007-01-12 09:46 Message generated for change (Settings changed) made by jabt You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) >Summary: AIX: configure ignores $CC; problems with C++ style comments Initial Comment: CC=xlc_r ./configure does not work on AIX-5.1, because configure unconditionally sets $CC to "cc_r": case $ac_sys_system in AIX*) CC=cc_r without_gcc=;; It would be better to leave $CC and just add "-qthreaded" to $CFLAGS. Furthermore, much of the C source code of Python uses C++ /C99 comments. This is an error with the standard AIX compiler. Please add the compiler flag "-qcpluscmt". An alternative would be to use a default of "xlc_r" for CC on AIX. This calls the compiler in a mode that both accepts C++ comments and generates reentrant code. 
Regards, Johannes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 From noreply at sourceforge.net Fri Jan 12 14:01:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 05:01:28 -0800 Subject: [ python-Bugs-1634033 ] configure problem for sem_init() on HP-UX Message-ID: Bugs item #1634033, was opened at 2007-01-12 14:01 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634033&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: configure problem for sem_init() on HP-UX Initial Comment: On HP-UX 11.00, sem_init is in librt. As configure calls the linker without -lrt, linking the sem_init test program fails. I suggest adding -lrt to the linker flags when linking the sem_init test program in configure on HP-UX. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634033&group_id=5470 From noreply at sourceforge.net Fri Jan 12 14:03:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 05:03:27 -0800 Subject: [ python-Feature Requests-1634034 ] Show "expected" token on syntax error Message-ID: Feature Requests item #1634034, was opened at 2007-01-12 13:03 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634034&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Oliver Gramberg (oliver_gramberg) Assigned to: Nobody/Anonymous (nobody) Summary: Show "expected" token on syntax error Initial Comment: I suggest that the parser, when reporting a syntax error, should make use of its knowledge of which token type is expected at the position where the error occurred. This results in more helpful error messages: ----------------------------------------------------- >>> for a in (8,9) File "", line 1 for a in (8,9) ^ SyntaxError: invalid syntax - COLON expected ----------------------------------------------------- >>> for a in (8,9: print a, File "", line 1 for a in (8,9: print a, ^ SyntaxError: invalid syntax: RPAR expected ----------------------------------------------------- I tried the following patch (for pythonrun.c). It works well in the shell both interactively and in scripts, as well as in IDLE. But it's not complete: - It doesn't always print useful messages (only for fixed-size terminal token types, I assume.) - There sure are cases where more than one token type is allowed in a position.
I believe I have seen that this information is available somewhere in the parser too, but it is not forwarded to the err_input routine. It's even nicer to show "')'" instead of "RPAR"... ----------------------------------------------------- /* Set the error appropriate to the given input error code (see errcode.h) */ static void err_input(perrdetail *err) { PyObject *v, *w, *errtype; PyObject* u = NULL; char *msg = NULL; errtype = PyExc_SyntaxError; switch (err->error) { case E_SYNTAX: errtype = PyExc_IndentationError; if (err->expected == INDENT) msg = "expected an indented block"; else if (err->token == INDENT) msg = "unexpected indent"; else if (err->token == DEDENT) msg = "unexpected unindent"; else { char buf[50]; errtype = PyExc_SyntaxError; if (err->expected != -1) { snprintf(buf, 48, "invalid syntax - %.16s expected", _PyParser_TokenNames[err->expected]); msg = buf; } else { msg = "invalid syntax"; } } break; ... ----------------------------------------------------- I am willing to help work on this. Regards -Oliver ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634034&group_id=5470 From noreply at sourceforge.net Fri Jan 12 14:35:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 05:35:36 -0800 Subject: [ python-Bugs-1634033 ] configure problem for sem_init() on HP-UX Message-ID: Bugs item #1634033, was opened at 2007-01-12 14:01 Message generated for change (Settings changed) made by jabt You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634033&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Build Group: Python 2.5 >Status: Deleted Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: configure problem for sem_init() on HP-UX Initial Comment: On HP-UX 11.00, sem_init is in librt. As configure calls the linker without -lrt, linking the sem_init test program fails. I suggest adding -lrt to the linker flags when linking the sem_init test program in configure on HP-UX. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634033&group_id=5470 From noreply at sourceforge.net Fri Jan 12 15:51:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 06:51:04 -0800 Subject: [ python-Bugs-1634105 ] AIX: wrong flags for ld when linking standard .so modules Message-ID: Bugs item #1634105, was opened at 2007-01-12 15:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634105&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: AIX: wrong flags for ld when linking standard .so modules Initial Comment: The build process on my AIX 5.1 (using the native compiler suite) does not work for the standard .so modules (like _locale, unicodedata, fcntl, ...) [..] creating build/lib.hp-ux-B.11.00-9000-785-2.5 ld -b -L/usr/local/python/lib -Wl,+b,/usr/local/python/2.5/lib:/usr/local/ssl/lib,[...]-o build/lib.hp-ux-B.11.00-9000-785-2.5/_struct.sl ld: Unrecognized argument: -Wl,+b[...] ld: Usage: ld [options] [flags] files You can pass "-Wl,+b...." 
to the compiler, but for the linker, you have to drop the "-Wl,". What's even worse: Even though "ld" aborts with an error, the build process ignores this. I have no idea of where to start looking for the bug(s). Bye, Johannes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634105&group_id=5470 From noreply at sourceforge.net Fri Jan 12 15:57:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 06:57:28 -0800 Subject: [ python-Bugs-1467929 ] %-formatting and dicts Message-ID: Bugs item #1467929, was opened at 2006-04-10 15:39 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1467929&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 8 Private: No Submitted By: M.-A. Lemburg (lemburg) Assigned to: Anthony Baxter (anthonybaxter) Summary: %-formatting and dicts Initial Comment: This looks like a bug in the way the %-formatting code works or is it a feature ? >>> '%s %(a)s' % {'a': 'xyz'} "{'a': 'xyz'} xyz" >>> u'%s %(a)s' % {'a': 'xyz'} u"{'a': 'xyz'} xyz" Note that both strings and Unicode are affected. Python 2.3 and 2.4 also show this behavior. I would have expected an exception or the %-formatter simply ignoring the first %s. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 09:57 Message: Logged In: YES user_id=11375 Originator: NO The patch shouldn't be applied as it stands, though, because it's not complete; similar changes need to be made to the Unicode type, for a start.
To answer Skip's question: I don't remember the logic of the format code. I think the FORMAT_TYPE_UNKNOWN check may be unnecessary; the code could just always do format_type = _TUPLE, occasionally doing a redundant assignment (but who cares)? I don't think I'll have any chance to work on this; PyCon is keeping me busy, and the mailbox bugs will take priority for me. ---------------------------------------------------------------------- Comment By: Anthony Baxter (anthonybaxter) Date: 2007-01-12 04:10 Message: Logged In: YES user_id=29957 Originator: NO I'm happy for this to be applied for 2.5.1. I don't have time to do it myself for a few days, though, so feel free to beat me to it. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-08-19 01:25 Message: Logged In: YES user_id=849994 I'd say before 2.5 final... ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2006-08-18 18:17 Message: Logged In: YES user_id=38388 Should this patch be applied to the 2.5 branch ? And if so, before or after the release of 2.5 ? ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2006-08-18 18:01 Message: Logged In: YES user_id=764593 Just a bit of encouragement for checking consistency like this; the explicit error message would have helped with a mistake I made earlier today. For one of several keys, I mistyped it as "(%key)s", and a message about "not enough values" just didn't make sense. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2006-08-04 09:26 Message: Logged In: YES user_id=38388 The patch looks OK. I'd make it a TypeError and use "cannot use positional and named formatting parameters at the same time" as message. Thanks. 
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2006-08-04 09:21 Message: Logged In: YES user_id=44345 Looks okay to me, though why is the FORMAT_TYPE_UNKNOWN test necessary in the second case but not the first? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-07-25 08:35 Message: Logged In: YES user_id=11375 So, what should '%s' % {} do? Should it be 1) '{}' or 2) an error because the argument is a mapping but the format specifier doesn't have a '(key)'? I've attached a draft patch that fixes stringobject.c; if the approach is deemed OK, I'll apply it to unicodeobject.c, too. PyString_Format() records the type of argument being processed (a tuple or a mapping) and raises ValueError if you mix them, at the cost of two extra comparisons for each format specifier processed. This preserves the current behaviour of '%s' % dictionary. Questions: 1) is the approach reasonably clear? 2) are the additional two comparisons unacceptably slow? 3) Is ValueError the right exception? 4) can someone come up with a better exception message than "both keyed and unkeyed format specifiers used"? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-07-24 16:07 Message: Logged In: YES user_id=21627 IMO, it's correct to break backwards compatibility, as the current behaviour clearly violates the spec; I'm not sure whether it's good to break the behaviour *now* (i.e. with no further betas before the release of 2.5). Deferring to the release manager. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-07-24 09:37 Message: Logged In: YES user_id=849994 The library ref specifies that if a dict is supplied, the format specifiers MUST include a mapping key, so the right thing to do would be to raise an exception.
Is it worth breaking backwards compatibility, Martin? ---------------------------------------------------------------------- Comment By: Hasan Diwan (hdiwan650) Date: 2006-04-14 04:33 Message: Logged In: YES user_id=1185570 It looks as though it's a feature... The first %s will print the whole dictionary as a string, the second, only that value looked up by the key. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1467929&group_id=5470 From noreply at sourceforge.net Fri Jan 12 17:48:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 08:48:30 -0800 Subject: [ python-Bugs-1634105 ] AIX: wrong flags for ld when linking standard .so modules Message-ID: Bugs item #1634105, was opened at 2007-01-12 15:51 Message generated for change (Settings changed) made by jabt You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634105&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None >Status: Deleted Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: AIX: wrong flags for ld when linking standard .so modules Initial Comment: The build process on my AIX 5.1 (using the native compiler suite) does not work for the standard .so modules (like _locale, unicodedata, fcntl, ...) [..] creating build/lib.hp-ux-B.11.00-9000-785-2.5 ld -b -L/usr/local/python/lib -Wl,+b,/usr/local/python/2.5/lib:/usr/local/ssl/lib,[...]-o build/lib.hp-ux-B.11.00-9000-785-2.5/_struct.sl ld: Unrecognized argument: -Wl,+b[...] ld: Usage: ld [options] [flags] files You can pass "-Wl,+b...." to the compiler, but for the linker, you have to drop the "-Wl,". 
What's even worse: Even though "ld" aborts with an error, the build process ignores this. I have no idea of where to start looking for the bug(s). Bye, Johannes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634105&group_id=5470 From noreply at sourceforge.net Fri Jan 12 18:12:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 09:12:44 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached.
Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. 
It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M.
Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. 
This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. 
Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. 
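[Editor's note: the renumbering hazard described above is easy to reproduce with a toy table of contents - plain dicts standing in for _toc; this is an illustration, not mailbox.py code.]

```python
# Four messages, keyed 0-3; the program deletes key 1 ("B").
toc = {0: "A", 1: "B", 2: "C", 3: "D"}
del toc[1]
# If the ToC is now regenerated naively, renumbering from 0...
toc = dict(enumerate(toc.values()))   # {0: "A", 1: "C", 2: "D"}
# ...then the program's old key 2 (which meant "C") now names "D",
# so an intended deletion of "C" removes the wrong message:
del toc[2]                            # removes "D", not "C"
```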
mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. 
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 12 18:16:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 09:16:13 -0800 Subject: [ python-Bugs-1633678 ] mailbox.py _fromlinepattern regexp does not support positive Message-ID: Bugs item #1633678, was opened at 2007-01-11 20:14 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) >Assigned to: A.M. Kuchling (akuchling) Summary: mailbox.py _fromlinepattern regexp does not support positive Initial Comment: [forwarded from http://bugs.debian.org/254757] mailbox.py _fromlinepattern regexp does not support positive GMT offsets. the pattern didn't change in 2.5. bug submitter writes: archivemail incorrectly splits up messages in my mbox-format mail archives. I use Squirrelmail, which seems to create mbox lines that look like this: >From mangled at clarke.tinyplanet.ca Mon Jan 26 12:29:24 2004 -0400 The "-0400" appears to be throwing it off. If the first message of an mbox file has such a line on it, archivemail flat out stops, saying the file is not mbox. If the later messages in an mbox file are in this style, they are not counted, and archivemail thinks that the preceding message is just kind of long, and the decision to archive or not is broken. I have stumbled on this bug when I wanted to archive my mails on a Sarge system. And since my TZ is positive, the regexp did not work. I think the correct regexp for /usr/lib/python2.3/mailbox.py should be: _fromlinepattern = r"From \s*[^\s]+\s+\w\w\w\s+\w\w\w\s+\d?\d\s+" \ r"\d?\d:\d\d(:\d\d)?(\s+[^\s]+)?\s+\d\d\d\d\s*((\+|-)\d\d\d\d)?\s*$" This should handle positive and negative timezones in From lines. I have tested it successfully with an email beginning with this line: >From fred at athena.olympe.fr Mon May 31 13:24:50 2004 +0200 as well as one without TZ reference.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 From noreply at sourceforge.net Fri Jan 12 19:31:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 10:31:35 -0800 Subject: [ python-Feature Requests-1633665 ] file(file) should succeed Message-ID: Feature Requests item #1633665, was opened at 2007-01-11 19:50 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1633665&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Python Interpreter Core >Group: None >Status: Closed >Resolution: Wont Fix Priority: 3 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: file(file) should succeed Initial Comment: [forwarded from http://bugs.debian.org/327060] Many types in Python are idempotent, so that int(1) works as expected, float(2.34)==2.34, ''.join('hello')=='hello' et cetera. Why not file()? Currently, file(open(something, 'r')) fails with "TypeError: coercing to Unicode: need string or buffer, file found." Semantically, file(fd) should be equivalent to os.fdopen(fd.fileno()) or the proposed file.fromfd() (Jp Calderone, Python-dev, 2003). You should get another independent file object that accesses the same file. What would be gained? Primarily, it would allow you to derive classes from file more easily. At present, if you want to derive like so, your class can only work when passed a file name or buffer. class file_with_caching(file): def __init__(self, something): file.__init__(self, something) def etcetera... For instance, you have no way of creating a file_with_caching() object from the file descriptors returned from os.fork().
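[Editor's note: the os.fdopen(fd.fileno()) equivalence suggested above glosses over one detail - two file objects built on a single descriptor will close each other's descriptor. A sketch of the safer idiom, duplicating the descriptor first; reopen is an invented name, not a library function.]

```python
import os

def reopen(f, mode="r"):
    """Return an independent file object for the same open file.

    os.dup() gives the new object its own descriptor, so closing
    either object leaves the other usable (they still share the
    underlying file offset, as dup()ed descriptors do).
    """
    return os.fdopen(os.dup(f.fileno()), mode)
```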
Also, you have no way of taking a file that is already open, and creating a file_with_caching() object from it. So, you can't use classes derived from file() on the standard input or standard output. This breaks the nice Linux OS-level definition of a file descriptor. At the Linux level, you have a nice uniform interface where all file descriptors are equally good. At the python level, some are better than others. It's a case where Python unnecessarily restricts what you can do. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:31 Message: Logged In: YES user_id=11375 Originator: NO Reclassifying as feature request. Response from GvR is at http://mail.python.org/pipermail/python-dev/2007-January/070591.html This proposal probably won't be implemented; closing as "Won't fix". Thanks for the suggestion, though. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1633665&group_id=5470 From noreply at sourceforge.net Fri Jan 12 19:41:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 10:41:07 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. 
Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) 
if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M.
Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). 
It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. 
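[Editor's note: such a length check might look roughly like the following. Illustrative names only - the actual change is the attached length-checking.diff, and mailbox.py already defines its own ExternalClashError.]

```python
import os

class ExternalClashError(Exception):
    """Another process modified the mailbox behind our back."""

def check_length(path, expected_size):
    # Called at the start of flush(): if the file has grown or shrunk
    # since we last read it, refuse to rewrite it (and thereby refuse
    # to silently discard another process's deliveries).
    actual = os.path.getsize(path)
    if actual != expected_size:
        raise ExternalClashError(
            "mailbox size changed: expected %d, found %d bytes"
            % (expected_size, actual))
```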
It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. 
However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. 
Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation.
Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 12 20:41:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 11:41:48 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. 
Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. 
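[Editor's note: the copy-back strategy described in the initial comment can be sketched as below. Illustrative only - the real code is in the attached mailbox-copy-back patches - but it shows why the approach avoids the lost-mail scenario: the mailbox keeps its inode, so other processes blocked on an fcntl() lock still hold a descriptor for the live file.]

```python
import shutil

def copy_back(new_path, mbox_path):
    # Copy the rewritten contents over the original file in place
    # rather than rename()ing the temporary file over it, then chop
    # off any leftover tail if the new contents are shorter.
    with open(new_path, "rb") as new_file, open(mbox_path, "rb+") as mbox:
        shutil.copyfileobj(new_file, mbox)
        mbox.truncate()
```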
File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous...
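[Editor's note: the fcntl() close semantics described above can be demonstrated directly on Unix. This is a standalone experiment, not mailbox.py code; it forks a child to probe the lock, because a process never conflicts with its own record locks.]

```python
import fcntl, os, tempfile

def child_sees_lock(path):
    """Fork a child that tries a non-blocking exclusive lock; return
    True if the child was refused (i.e. the parent's lock is live)."""
    pid = os.fork()
    if pid == 0:
        fd = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(0)   # lock acquired: parent holds no lock
        except OSError:
            os._exit(1)   # lock refused: parent's lock is live
    return os.waitpid(pid, 0)[1] != 0

path = tempfile.mkstemp()[1]
fd1 = os.open(path, os.O_RDWR)
fd2 = os.open(path, os.O_RDWR)   # a second descriptor for the same file
fcntl.lockf(fd1, fcntl.LOCK_EX)  # take the lock through fd1
print(child_sees_lock(path))     # True: the lock is held
os.close(fd2)                    # close the *other* descriptor...
print(child_sees_lock(path))     # False: ...and fd1's lock is gone too
```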
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
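[Editor's sketch] The "self._toc = None" idea from the comment above can be shown with a toy class: discard the cached table of contents on flush() and rebuild it lazily on the next lookup, so messages appended by another process are seen. A plain list stands in for the mbox file; this is an illustration, not mailbox.py's code:

```python
# Toy model of lazy ToC invalidation: a shared list stands in for the
# mbox file on disk.
class LazyTocBox:
    def __init__(self, storage):
        self._storage = storage  # shared "file": a list of messages
        self._toc = None

    def _lookup(self):
        if self._toc is None:  # regenerate after invalidation
            self._toc = dict(enumerate(self._storage))
        return self._toc

    def flush(self):
        # (write out pending changes here)
        self._toc = None       # force a re-read on next access

store = ["msg0"]
box = LazyTocBox(store)
assert box._lookup() == {0: "msg0"}
store.append("msg1")  # simulates another process appending a message
box.flush()
assert box._lookup() == {0: "msg0", 1: "msg1"}
```

The cost Kuchling notes is visible here: every flush() (or lock()) forces a full re-scan of the storage.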
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
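[Editor's sketch] The fcntl-style locking argued over above looks roughly like this (POSIX-only, illustrative, not mailbox.py's actual implementation). It also makes the trap from David's comment easy to state: fcntl record locks belong to the process, and closing *any* descriptor open on the file releases them, even one the lock was never set through.

```python
import fcntl

def with_mailbox_lock(path, action):
    """Run action(f) while holding an exclusive fcntl lock on path.

    Caution (per the thread): if any other file descriptor for this
    file is closed anywhere in the process, the lock is silently lost.
    """
    with open(path, "r+") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)
        try:
            return action(f)
        finally:
            fcntl.lockf(f, fcntl.LOCK_UN)
```

Dot-locking (creating path + ".lock" atomically) sidesteps that per-process quirk, which is why mbox tools traditionally use both.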
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 12 21:40:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 12:40:47 -0800 Subject: [ python-Bugs-1634343 ] subprocess swallows empty arguments under win32 Message-ID: Bugs item #1634343, was opened at 2007-01-12 21:40 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634343&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Patrick Mézard (pmezard) Assigned to: Nobody/Anonymous (nobody) Summary: subprocess swallows empty arguments under win32 Initial Comment: Hello, empty arguments are not quoted by subprocess.list2cmdline, so they contribute nothing when the arguments are concatenated and simply disappear from the command line. To reproduce it: test-empty.py """ import sys print sys.argv """ then: """ Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess >>> p = subprocess.Popen(['python', 'test-empty.py', ''], stdout=subprocess.PIPE) >>> p.communicate() ("['test-empty.py']\r\n", None) """ To solve it: """ --- a\subprocess.py 2007-01-12 21:38:57.734375000 +0100 +++ b\subprocess.py 2007-01-12 21:34:08.406250000 +0100 @@ -499,7 +499,7 @@ if result: result.append(' ') - needquote = (" " in arg) or ("\t" in arg) + needquote = (" " in arg) or ("\t" in arg) or not arg if needquote: result.append('"') """ Regards, Patrick Mézard ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634343&group_id=5470 From noreply at sourceforge.net Fri Jan 12 22:22:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 13:22:00 -0800 Subject: [ python-Bugs-1633605 ] logging module / wrong bytecode? Message-ID: Bugs item #1633605, was opened at 2007-01-11 18:06 Message generated for change (Comment added) made by jimjjewett You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: logging module / wrong bytecode?
Initial Comment: [forwarded from http://bugs.debian.org/390152] seen with python2.4 and python2.5 on debian unstable import logging logging.basicConfig(level=logging.DEBUG, format='%(pathname)s:%(lineno)d') logging.info('whoops') The output when the logging/__init__.pyc file exists is: logging/__init__.py:1072 and when the __init__.pyc is deleted the output becomes: tst.py:5 ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 16:22 Message: Logged In: YES user_id=764593 Originator: NO Does debian by any chance (try to?) store the .py and .pyc files in different directories? The second result is correct; the first suggests that it somehow got confused about which frames to ignore. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 From noreply at sourceforge.net Fri Jan 12 22:26:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 13:26:04 -0800 Subject: [ python-Bugs-1633630 ] class derived from float evaporates under += Message-ID: Bugs item #1633630, was opened at 2007-01-11 18:49 Message generated for change (Comment added) made by jimjjewett You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Type/class unification Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: class derived from float evaporates under += Initial Comment: [forwarded from http://bugs.debian.org/345373] There seems to be a bug in classes derived from float. For instance, consider the following: >>> class Float(float): ...
def __init__(self, v): ... float.__init__(self, v) ... self.x = 1 ... >>> a = Float(2.0) >>> b = Float(3.0) >>> type(a) <class '__main__.Float'> >>> type(b) <class '__main__.Float'> >>> a += b >>> type(a) <type 'float'> Now, the type of a has silently changed. It was a Float, a derived class with all kinds of properties, and it became a float -- a plain vanilla number. My understanding is that this is incorrect, and certainly unexpected. If it *is* correct, it certainly deserves mention somewhere in the documentation. It seems that Float.__iadd__(a, b) should be called. This defaults to float.__iadd__(a, b), which should increment the float part of the object while leaving the rest intact. A possible explanation for this problem is that float.__iadd__ is not actually defined, and so it falls through to a = float.__add__(a, b), which assigns a float to a. This interpretation seems to be correct, as one can add a destructor to the Float class: >>> class FloatD(float): ... def __init__(self, v): ... float.__init__(self, v) ... self.x = 1 ... def __del__(self): ... print 'Deleting FloatD class, losing x=', self.x ... >>> a = FloatD(2.0) >>> b = FloatD(3.0) >>> a += b Deleting FloatD class, losing x= 1 >>> ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 16:26 Message: Logged In: YES user_id=764593 Originator: NO Python float objects are immutable and can be shared. Therefore, their values cannot be modified -- which is why it falls back to not-in-place assignment.
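[Editor's sketch] Jim's explanation can be checked, and the submitter's intent recovered, by giving the subclass its own __add__ that re-wraps the result. This is an illustration of the mechanism, not a claim about what float itself should do; it runs unchanged on modern Python:

```python
# Floats are immutable, so there is no float.__iadd__ and 'a += b'
# rebinds the name to the result of __add__ -- a plain float unless
# the subclass re-wraps it.
class Float(float):
    def __new__(cls, v, x=1):
        self = float.__new__(cls, v)  # value is fixed at creation
        self.x = x
        return self

    def __add__(self, other):
        # re-wrap the plain-float sum so the subclass survives +=
        return type(self)(float(self) + float(other), self.x)

    __radd__ = __add__

a = Float(2.0)
b = Float(3.0)
a += b  # no __iadd__: this calls __add__ and rebinds a
assert type(a) is Float and a == 5.0 and a.x == 1
```

Without the __add__ override, the assert fails exactly as in the report: type(a) comes back as plain float and the x attribute is gone.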
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 From noreply at sourceforge.net Sat Jan 13 07:48:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 12 Jan 2007 22:48:55 -0800 Subject: [ python-Bugs-1381476 ] csv.reader endless loop Message-ID: Bugs item #1381476, was opened at 2005-12-15 06:04 Message generated for change (Comment added) made by ironfroggy You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1381476&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Christian Harms (wwwingman) Assigned to: Andrew McNamara (andrewmcnamara) Summary: csv.reader endless loop Initial Comment: Hi, the csv.reader produces an endless loop if a parsing error is in the last line of the CSV file. If you put a "\r" in the last line, csv.Error is raised and the StopIteration is lost. import csv, StringIO fp = StringIO.StringIO("line1\nline2\rerror") reader = csv.reader(fp) while 1: try: print reader.next() except csv.Error: print "Error" except StopIteration: break The problem is in Python 2.3 AND Python 2.4. Other versions are not checked. ---------------------------------------------------------------------- Comment By: Calvin Spealman (ironfroggy) Date: 2007-01-13 01:48 Message: Logged In: YES user_id=112166 Originator: NO How do you expect it to handle this? Should it treat \r bytes as a newline or as content of the field? ---------------------------------------------------------------------- Comment By: Christian Harms (wwwingman) Date: 2006-01-03 03:56 Message: Logged In: YES user_id=1405594 >birkenfeld: csv.Error would imply a StopIteration/break ...
No, this Error says only: "Can not parse THIS line ...". This exception is used for reading buggy Outlook-export CSV files and trying to read some lines (not all). And if the error is in the last line, the StopIteration will be forgotten and the Error will be produced in an endless loop. input = StringIO.StringIO("1.\rerror\n2.ok\n3.\rerr") #insert my while-loop #Output: >Error >2.ok >Error >Error ... ---------------------------------------------------------------------- Comment By: Georg Brandl (birkenfeld) Date: 2005-12-17 12:02 Message: Logged In: YES user_id=1188172 Let the expert judge. ---------------------------------------------------------------------- Comment By: Thomas Lee (krumms) Date: 2005-12-17 11:56 Message: Logged In: YES user_id=315535 Actually, the problem may not be a problem with the csv module at all, it may be a misinterpretation of the API on the submitter's part. Is there any time a non-fatal csv.Error would/could be raised? Seems to me that a csv.Error would imply a StopIteration/break ... ---------------------------------------------------------------------- Comment By: Thomas Lee (krumms) Date: 2005-12-17 10:17 Message: Logged In: YES user_id=315535 I think this may be fixed in subversion: tom at vanilla:~/work/python$ svn info Path: .
URL: http://svn.python.org/projects/python/trunk Repository UUID: 6015fed2-1504-0410-9fe1-9d1591cc4771 Revision: 41731 Node Kind: directory Schedule: normal Last Changed Author: fredrik.lundh Last Changed Rev: 41729 Last Changed Date: 2005-12-17 18:33:21 +1000 (Sat, 17 Dec 2005) Properties Last Updated: 2005-12-17 21:44:46 +1000 (Sat, 17 Dec 2005) tom at vanilla:~/work/python$ python -V Python 2.4.2 tom at vanilla:~/work/python$ python Sandbox/csv_reader_test.py ['line1'] ERROR: newline inside string tom at vanilla:~/work/python$ ./python -V Python 2.5a0 tom at vanilla:~/work/python$ ./python Sandbox/csv_reader_test.py ['line1'] ERROR: new-line character seen in unquoted field - do you need to open the file in universal-newline mode? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1381476&group_id=5470 From noreply at sourceforge.net Sat Jan 13 15:53:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 06:53:04 -0800 Subject: [ python-Feature Requests-1634717 ] csv.DictWriter: Include offending name in error message Message-ID: Feature Requests item #1634717, was opened at 2007-01-13 11:53 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634717&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Gabriel Genellina (gagenellina) Assigned to: Nobody/Anonymous (nobody) Summary: csv.DictWriter: Include offending name in error message Initial Comment: In csv.py, class DictWriter, method _dict_to_list, when rowdict contains a key that is not a known field name, a ValueError is raised, but no reference to the offending name is given. As the code iterates along the dict keys, and stops at the first unknown one, it's trivial to include such information. Replace lines: if k not in self.fieldnames: raise ValueError, "dict contains fields not in fieldnames" with: if k not in self.fieldnames: raise ValueError, "dict contains field not in fieldnames: %r" % k ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634717&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:46:45 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:46:45 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Nobody/Anonymous (nobody) Summary: Problem running a subprocess Initial Comment: Hello, I have a problem running a subprocess from Python (see below). I first ran into it with the subprocess module, but it's also triggered by a simple os.fork() followed by os.execvp().
So, what is the problem, exactly? I have written the exact same minimal program in C and in Python, which uses fork() and execvp() in the most straightforward way to run the following command: transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png (whose effect is to extract the 100th frame of /tmp/file.mpg and store it into snapshot.png) The C program runs fast with no error, while the one in Python takes from 60 to 145 times longer (!), and triggers error messages from transcode. This shouldn't happen, since both programs are merely calling transcode in the same way to perform the exact same thing. Experiments ------------ 1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 2 PS) [the first time fills the block IO cache], and store the output in extract_frame.output: % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.82s user 0.33s system 53% cpu 2.175 total % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.79s user 0.29s system 96% cpu 1.118 total Basically, this takes 1 or 2 seconds. extract_frame.output is attached. Second, I run the Python program (extract_frame.py) on the same .mpg file, and store the output in extract_frame.py.output: % time ./extract_frame.py >extract_frame.py.output 2>&1 ./extract_frame.py > extract_frame.py.output 2>& 1 81.59s user 25.98s system 66% cpu 2:42.51 total This takes more than 2 *minutes*, not seconds! (of course, the system is idle for all tests) In extract_frame.py.output, the following error message appears quickly after the process is started: failed to write Y plane of frame(demuxer.c) write program stream packet: Broken pipe which is in fact composed of two error messages, the second one starting at "(demuxer.c)". Once these messages are printed, the transcode subprocesses[1] seem to hang (with relatively high CPU usage), but eventually complete, after 2 minutes or so. 
There are no such error messages in extract_frame.output. 2. Same test with another .mpg file. As far as time is concerned, we have the same problem: [C program] % time ./extract_frame >extract_frame.output2 2>&1 ./extract_frame > extract_frame.output2 2>& 1 0.73s user 0.28s system 43% cpu 2.311 total [Python program] % time ./extract_frame.py >extract_frame.py.output2 2>&1 ./extract_frame.py > extract_frame.py.output2 2>& 1 92.84s user 12.20s system 76% cpu 2:18.14 total We also get the first error message in extract_frame.py.output2: failed to write Y plane of frame when running extract_frame.py, but this time, we do *not* have the second error message: (demuxer.c) write program stream packet: Broken pipe All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge for 2.3 and 2.4, vanilla Python 2.5). % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion' 2.5 (r25:51908, Jan 5 2007, 17:35:09) [GCC 3.3.5 (Debian 1:3.3.5-13)] 20500f0 % transcode --version transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg I'd hazard that Python is tweaking some process or threading parameter that is inherited by subprocesses and disturbs transcode, which doesn't happen when calling fork() and execvp() from a C program, but am unfortunately unable to precisely diagnose the problem. Many thanks for considering. Regards, Florent [1] Plural because the transcode process spawns several children: tcextract, tcdemux, etc.
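[Editor's sketch] One candidate for the "tweaked parameter" the reporter suspects (an assumption on my part, not a diagnosis from this thread): CPython ignores SIGPIPE at startup, and the SIG_IGN disposition is inherited across fork()/execvp(), so a child pipeline that expects to die quietly from SIGPIPE instead sees "Broken pipe" write errors. A POSIX-only sketch to test that hypothesis, restoring the default handler in the child before exec ("true" below is just a stand-in command):

```python
import signal
import subprocess

def run_with_default_sigpipe(argv):
    """Run argv with SIGPIPE restored to SIG_DFL in the child."""
    def restore_sigpipe():
        signal.signal(signal.SIGPIPE, signal.SIG_DFL)
    return subprocess.call(argv, preexec_fn=restore_sigpipe)

# e.g. run_with_default_sigpipe(["true"]) returns the exit status
```

Later Pythons do this automatically: since 3.2, subprocess resets SIGPIPE (and a few other signals) in the child by default via restore_signals=True, so this sketch is mainly a way to test the hypothesis on 2.x.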
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:49:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:49:00 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Nobody/Anonymous (nobody) Summary: Problem running a subprocess
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:49:42 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:49:42 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Initial Comment: [omitted; identical to the initial report above] 
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:50:40 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:50:40 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon Initial Comment: [omitted; identical to the initial report above] 
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:51:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:51:24 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon Initial Comment: [omitted; identical to the initial report above] 
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:52:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:52:09 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon Initial Comment: [omitted; identical to the initial report above] 
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 16:52:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 07:52:53 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon Initial Comment: [omitted; identical to the initial report above] 
---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 18:21:11 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 09:21:11 -0800 Subject: [ python-Feature Requests-1634770 ] Please provide rsync-method in the urllib[2] module Message-ID: Feature Requests item #1634770, was opened at 2007-01-13 18:21 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634770&group_id=5470 
Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: Please provide rsync-method in the urllib[2] module Initial Comment: [forwarded from http://bugs.debian.org/323213] sometimes it would be nice to be able to just open rsync connections directly via the urllib methods (like http and ftp resources). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634770&group_id=5470 From noreply at sourceforge.net Sat Jan 13 18:30:16 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 09:30:16 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'?'.upper() == u'?' #OK (locale independent)
assert  'ё'.upper() == 'Ё' #fail
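For comparison (not part of the report), the failing byte pair can be checked against the cp1251 codec using locale-independent Unicode strings, where the case mapping does work; a Python 3 sketch:

```python
# cp1251 maps byte 0xB8 -> 'ё' (U+0451) and 0xA8 -> 'Ё' (U+0401).
lower = bytes([0xB8]).decode("cp1251")
upper = bytes([0xA8]).decode("cp1251")
# Unicode case mapping handles this pair without consulting the locale,
# unlike the locale-dependent byte-string .upper() in the asserts above.
assert lower.upper() == upper
assert upper.lower() == lower
```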
I suppose an incorrect realization of uppercase like
if ('а' <= c && c <= 'я')
  return c + 'А' - 'а'
symbol 'ё' (184 in cp1251) is not in range 'а'-'я' ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Sat Jan 13 18:49:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 09:49:59 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by dobrokot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
[omitted; identical to the initial report above] ---------------------------------------------------------------------- >Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES The C CRT library function toupper('ё') works properly, if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Sat Jan 13 18:51:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 09:51:10 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by dobrokot Initial Comment: [omitted; identical to the initial report above]
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'ё'.upper() == u'Ё' #OK (locale independent)
assert  'ё'.upper() == 'Ё' #fail
I suspect an incorrect implementation of uppercasing, something like
if ('а' <= c && c <= 'я')
  return c + 'А' - 'а'
symbol 'ё' (184 in cp1251) is not in range 'а'-'я' ---------------------------------------------------------------------- >Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:51 Message: Logged In: YES user_id=1538986 Originator: YES sorry, I mean toupper((int)(unsigned char)'ё') not just toupper('ё') ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES C-CRT library function toupper('ё') works properly, if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Sat Jan 13 18:57:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 09:57:58 -0800 Subject: [ python-Bugs-1633630 ] class derived from float evaporates under += Message-ID: Bugs item #1633630, was opened at 2007-01-11 23:49 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Type/class unification Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: class derived from float evaporates under += Initial Comment: [forwarded from http://bugs.debian.org/345373] There seems to be a bug in classes derived from float. For instance, consider the following: >>> class Float(float): ... def __init__(self, v): ... float.__init__(self, v) ... self.x = 1 ...
>>> a = Float(2.0)
>>> b = Float(3.0)
>>> type(a)
<class '__main__.Float'>
>>> type(b)
<class '__main__.Float'>
>>> a += b
>>> type(a)
<type 'float'>

Now, the type of a has silently changed. It was a Float, a derived class with all kinds of properties, and it became a float -- a plain vanilla number. My understanding is that this is incorrect, and certainly unexpected. If it *is* correct, it certainly deserves mention somewhere in the documentation. It seems that Float.__iadd__(a, b) should be called. This defaults to float.__iadd__(a, b), which should increment the float part of the object while leaving the rest intact. A possible explanation for this problem is that float.__iadd__ is not actually defined, and so it falls through to a = float.__add__(a, b), which assigns a float to a. This interpretation seems to be correct, as one can add a destructor to the Float class:

>>> class FloatD(float):
...     def __init__(self, v):
...         float.__init__(self, v)
...         self.x = 1
...     def __del__(self):
...         print 'Deleting FloatD class, losing x=', self.x
...
>>> a = FloatD(2.0)
>>> b = FloatD(3.0)
>>> a += b
Deleting FloatD class, losing x= 1
>>>

---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-13 17:57 Message: Logged In: YES user_id=849994 Originator: NO You don't need augmented assign for that, just doing "a+b" will give you a float too. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 21:26 Message: Logged In: YES user_id=764593 Originator: NO Python float objects are immutable and can be shared. Therefore, their values cannot be modified -- which is why it falls back to not-in-place assignment.
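The fall-through to float.__add__ described above is easy to reproduce, and can be worked around by re-wrapping arithmetic results in the subclass. A sketch in modern Python 3 (the Float name is from the report; KeepFloat and its __add__ override are an illustrative workaround, not the tracker's resolution):

```python
class Float(float):
    # Subclass from the report: a float carrying extra state.
    def __init__(self, v):
        self.x = 1

a, b = Float(2.0), Float(3.0)
a += b
# float defines no __iadd__, so 'a += b' runs float.__add__ and rebinds
# 'a' to a plain float: the subclass (and its .x attribute) is gone.
assert type(a) is float

class KeepFloat(float):
    def __init__(self, v):
        self.x = 1
    def __add__(self, other):
        # Re-wrap the plain-float result in the subclass.
        return type(self)(float(self) + float(other))

c = KeepFloat(2.0) + KeepFloat(3.0)
assert type(c) is KeepFloat and c == 5.0
```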
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 From noreply at sourceforge.net Sat Jan 13 19:32:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 10:32:51 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 19:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 18:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 17:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? 
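The copy-back idea can be sketched as follows; rewrite_in_place is a hypothetical helper illustrating the approach of the attached patch, not the patch itself:

```python
import os
import tempfile

def rewrite_in_place(path, new_content):
    # Rewriting the original file keeps its inode alive, so another
    # process that already has the mailbox open (e.g. blocked on an
    # fcntl() lock) still writes to the file everyone can see -- unlike
    # os.rename(tmp, path), which leaves it holding a deleted file.
    with open(path, 'rb+') as f:
        f.write(new_content)
        f.truncate()  # drop any old tail beyond the new length

# Demo: the inode survives the rewrite, and shrinking works.
tmpdir = tempfile.mkdtemp()
mbox_path = os.path.join(tmpdir, 'mbox')
with open(mbox_path, 'wb') as f:
    f.write(b'From old@example invalid\n' * 10)
inode_before = os.stat(mbox_path).st_ino
rewrite_in_place(mbox_path, b'From new@example invalid\n')
assert os.stat(mbox_path).st_ino == inode_before
```

By contrast, an os.rename() of a temporary file over the path gives the mailbox a new inode, which is exactly how the delivering process ends up appending to a deleted file.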
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? 
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
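The key-preserving regeneration discussed in this thread (stable user-visible keys mapped onto volatile file sequence numbers, as in the mailbox-update-toc patch) can be sketched roughly as follows; TocMapper is a hypothetical, much simplified illustration, and it shares the caveat noted above that equal sequence numbers are assumed to denote the same message:

```python
class TocMapper:
    # Hypothetical sketch: stable keys handed to the program map to
    # sequence numbers in the mailbox file, which may change whenever
    # the file is re-scanned under lock.
    def __init__(self):
        self._user_toc = {}       # user key -> file sequence number
        self._next_user_key = 0

    def refresh(self, sequence_numbers):
        known = set(self._user_toc.values())
        current = set(sequence_numbers)
        # Drop keys whose messages have disappeared...
        self._user_toc = {k: s for k, s in self._user_toc.items()
                          if s in current}
        # ...and issue fresh keys for newly appeared messages.
        for s in sequence_numbers:
            if s not in known:
                self._user_toc[self._next_user_key] = s
                self._next_user_key += 1

toc = TocMapper()
toc.refresh(range(3))   # initial scan: three messages
toc.refresh(range(4))   # another process appended one
assert toc._user_toc == {0: 0, 1: 1, 2: 2, 3: 3}
```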
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sat Jan 13 20:25:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 11:25:10 -0800 Subject: [ python-Bugs-1313119 ] urlparse "caches" parses regardless of encoding Message-ID: Bugs item #1313119, was opened at 2005-10-04 19:57 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1313119&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Unicode Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ken Kinder (kkinder) >Assigned to: Nobody/Anonymous (nobody) Summary: urlparse "caches" parses regardless of encoding Initial Comment: The issue can be summarized with this code: >>> urlparse.urlparse(u'http://www.python.org/doc') (u'http', u'www.python.org', u'/doc', '', '', '') >>> urlparse.urlparse('http://www.python.org/doc') (u'http', u'www.python.org', u'/doc', '', '', '') Once the urlparse library has "cached" a URL, it stores the resulting value of that cache regardless of datatype. Notice that in the second use of urlparse, I passed it a STRING and got back a UNICODE object. This can be quite confusing when, as a developer, you think you've already encoded all your objects, you use urlparse, and all of a sudden you have unicode objects again, when you expected to have strings. ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2007-01-13 20:25 Message: Logged In: YES user_id=38388 Originator: NO Unassigning: I don't use urlparse, so can't comment. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1313119&group_id=5470 From noreply at sourceforge.net Sat Jan 13 20:26:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 11:26:46 -0800 Subject: [ python-Bugs-967986 ] file.encoding doesn't apply to file.write Message-ID: Bugs item #967986, was opened at 2004-06-07 09:00 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967986&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Unicode Group: Python 2.3 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: Matthew Mueller (donut) Assigned to: M.-A. Lemburg (lemburg) Summary: file.encoding doesn't apply to file.write Initial Comment: In python2.3 printing unicode to an appropriate terminal actually works. But using sys.stdout.write doesn't. Ex: Python 2.3.4 (#2, May 29 2004, 03:31:27) [GCC 3.3.3 (Debian 20040417)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> sys.stdout.encoding 'UTF-8' >>> u=u'\u3053\u3093\u306b\u3061\u308f' >>> print u こんにちわ >>> sys.stdout.write(u) Traceback (most recent call last): File "", line 1, in ? UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-4: ordinal not in range(128) The file object docs say: "encoding The encoding that this file uses. When Unicode strings are written to a file, they will be converted to byte strings using this encoding. ..." Which indicates to me that it is supposed to work. ---------------------------------------------------------------------- >Comment By: M.-A. 
Lemburg (lemburg) Date: 2007-01-13 20:26 Message: Logged In: YES user_id=38388 Originator: NO Not sure whether this is still the case. No patches were provided, so closing the feature request. ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-07-23 12:23 Message: Logged In: YES user_id=38388 The encoding feature is currently only implemented for printing. We could also add it to .write() and .writelines() ... patches are welcome. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=967986&group_id=5470 From noreply at sourceforge.net Sat Jan 13 20:29:40 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 11:29:40 -0800 Subject: [ python-Bugs-928297 ] platform.libc_ver() fails on Cygwin Message-ID: Bugs item #928297, was opened at 2004-04-02 16:55 Message generated for change (Comment added) made by lemburg You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=928297&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: George Yoshida (quiver) Assigned to: M.-A. Lemburg (lemburg) Summary: platform.libc_ver() fails on Cygwin Initial Comment: >>> import platform >>> platform.libc_ver() Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/platform.py", line 134, in libc_ver f = open(executable,'rb') IOError: [Errno 2] No such file or directory: '/usr/bin/python' The problem is that on Cygwin sys.executable returns /path/to/python, but since Cygwin is running on Windows, sys.executable is a symbolic link to /path/to/python.exe. 
>>> import os, sys >>> os.path.exists(sys.executable) True >>> os.path.isfile(sys.executable) True >>> file(sys.executable) Traceback (most recent call last): File "", line 1, in ? IOError: [Errno 2] No such file or directory: '/usr/bin/python' >>> os.path.islink(sys.executable) True >>> os.path.realpath(sys.executable) '/usr/bin/python2.3.exe' >>> file(sys.executable + '.exe') Following is the info about the machine I tested: >>> from platform import * >>> platform() 'CYGWIN_NT-5.0-1.5.7-0.109-3-2-i686-32bit' >>> python_compiler() 'GCC 3.3.1 (cygming special)' >>> python_build() (1, 'Dec 30 2003 08:29:25') >>> python_version() '2.3.3' >>> uname() ('CYGWIN_NT-5.0', 'my_user_name', '1.5.7 (0.109/3/2)', '2004-01-30 19:32', 'i686', '') ---------------------------------------------------------------------- >Comment By: M.-A. Lemburg (lemburg) Date: 2007-01-13 20:29 Message: Logged In: YES user_id=38388 Originator: NO Since Cygwin doesn't appear to use the GLibC, there's no surprise in libc_ver() not returning any useful information. ---------------------------------------------------------------------- Comment By: George Yoshida (quiver) Date: 2004-07-30 01:29 Message: Logged In: YES user_id=671362 Sorry for my late response, Marc. > Would applying os.path.realpath() to sys.executable before > trying to open that file fix the problem on Cygwin ? That change fixes the IO problem. After this, it doesn't raise IOError. The result of platform.libc_ver() is as follows: >>> import platform >>> platform.libc_ver() ('', '') > Another question: does using libc_ver() even make sense on > cygwin ? As far as I have checked, it doesn't look like it. According to the Cygwin FAQ[*], Cygwin doesn't use glibc, although it says that there's a counterpart (called ``newlib'') in Cygwin. The C runtime embedded into cygwin1.dll uses newlib. Experienced C & Cygwin programmers might answer this question more precisely. [*] Where is glibc?
: http://rustam.uwp.edu/support/faq.html#SEC88 ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-07-23 12:34 Message: Logged In: YES user_id=38388 Would applying os.path.realpath() to sys.executable before trying to open that file fix the problem on Cygwin ? Another question: does using libc_ver() even make sense on cygwin ? libc_ver() was never intended to be used on non-*nix platforms. I don't even know whether it works on other platforms than Linux. ---------------------------------------------------------------------- Comment By: George Yoshida (quiver) Date: 2004-04-03 06:20 Message: Logged In: YES user_id=671362 First, I need to correct my previous post. 'symbolic' was unrelated. Python on Cygwin doesn't like exe files that don't end with '.exe'. I think changing fileobject.c to support I/O on exe files on Cygwin whether they end with '.exe' or not is the way to go. Is there anyone who can do that? It's beyond my skill level. $ ls -l /usr/bin/python* lrwxrwxrwx 1 abel Users 24 Jan 1 01:34 /usr/bin/python -> python2.3.exe lrwxrwxrwx 1 abel Users 24 Jan 1 01:34 /usr/bin/python.exe -> python2.3.exe -rwxrwxrwx 1 abel Users 4608 Dec 30 22:32 /usr/bin/python2.3.exe >>> file('/usr/bin/python') Traceback (most recent call last): File "", line 1, in ? IOError: [Errno 2] No such file or directory: '/usr/bin/python' >>> file('/usr/bin/python.exe') >>> file('/usr/bin/python2.3') Traceback (most recent call last): File "", line 1, in ? IOError: [Errno 2] No such file or directory: '/usr/bin/python2.3' >>> file('/usr/bin/python2.3.exe') ---------------------------------------------------------------------- Comment By: M.-A. Lemburg (lemburg) Date: 2004-04-02 17:59 Message: Logged In: YES user_id=38388 Patches are welcome :-) I don't have cygwin installed, so there's nothing much I can do about this.
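The realpath() fix suggested above can be sketched like this (a minimal illustration; open_executable is a hypothetical helper, and on platforms where sys.executable is a regular file realpath() is simply a no-op):

```python
import os
import sys

def open_executable():
    # Resolve symlinks first: on Cygwin, sys.executable may be a link
    # such as /usr/bin/python whose target is only openable under its
    # real name, e.g. /usr/bin/python2.3.exe.
    return open(os.path.realpath(sys.executable), 'rb')

with open_executable() as f:
    header = f.read(4)  # first bytes of the interpreter binary
```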
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=928297&group_id=5470 From noreply at sourceforge.net Sat Jan 13 22:08:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 13:08:44 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by dobrokot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'ё'.upper() == u'Ё' #OK (locale independent)
assert  'ё'.upper() == 'Ё' #fail
I suspect an incorrect implementation of uppercasing, something like
if ('а' <= c && c <= 'я')
  return c + 'А' - 'а'
symbol 'ё' (184 in cp1251) is not in range 'а'-'я' ---------------------------------------------------------------------- >Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 22:08 Message: Logged In: YES user_id=1538986 Originator: YES forgot to mention used python version - http://www.python.org/ftp/python/2.5/python-2.5.msi ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:51 Message: Logged In: YES user_id=1538986 Originator: YES sorry, I mean toupper((int)(unsigned char)'ё') not just toupper('ё') ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES C-CRT library function toupper('ё') works properly, if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Sat Jan 13 23:14:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 14:14:02 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 16:46 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Nobody/Anonymous (nobody) Summary: Problem running a subprocess Initial Comment: Hello, I have a problem running a subprocess from Python (see below).
I first ran into it with the subprocess module, but it's also triggered by a simple os.fork() followed by os.execvp(). So, what is the problem, exactly? I have written the exact same minimal program in C and in Python, which uses fork() and execvp() in the most straightforward way to run the following command: transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png (whose effect is to extract the 100th frame of /tmp/file.mpg and store it into snapshot.png) The C program runs fast with no error, while the one in Python takes from 60 to 145 times longer (!), and triggers error messages from transcode. This shouldn't happen, since both programs are merely calling transcode in the same way to perform the exact same thing. Experiments ------------ 1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 2 PS) [the first time fills the block IO cache], and store the output in extract_frame.output: % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.82s user 0.33s system 53% cpu 2.175 total % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.79s user 0.29s system 96% cpu 1.118 total Basically, this takes 1 or 2 seconds. extract_frame.output is attached. Second, I run the Python program (extract_frame.py) on the same .mpg file, and store the output in extract_frame.py.output: % time ./extract_frame.py >extract_frame.py.output 2>&1 ./extract_frame.py > extract_frame.py.output 2>& 1 81.59s user 25.98s system 66% cpu 2:42.51 total This takes more than 2 *minutes*, not seconds! (of course, the system is idle for all tests) In extract_frame.py.output, the following error message appears quickly after the process is started: failed to write Y plane of frame(demuxer.c) write program stream packet: Broken pipe which is in fact composed of two error messages, the second one starting at "(demuxer.c)". 
Once these messages are printed, the transcode subprocesses[1] seem to hang (with relatively high CPU usage), but eventually complete, after 2 minutes or so. There are no such error messages in extract_frame.output. 2. Same test with another .mpg file. As far as time is concerned, we have the same problem: [C program] % time ./extract_frame >extract_frame.output2 2>&1 ./extract_frame > extract_frame.output2 2>& 1 0.73s user 0.28s system 43% cpu 2.311 total [Python program] % time ./extract_frame.py >extract_frame.py.output2 2>&1 ./extract_frame.py > extract_frame.py.output2 2>& 1 92.84s user 12.20s system 76% cpu 2:18.14 total We also get the first error message in extract_frame.py.output2: failed to write Y plane of frame when running extract_frame.py, but this time, we do *not* have the second error message: (demuxer.c) write program stream packet: Broken pipe All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge for 2.3 and 2.4, vanilla Python 2.5). % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion' 2.5 (r25:51908, Jan 5 2007, 17:35:09) [GCC 3.3.5 (Debian 1:3.3.5-13)] 20500f0 % transcode --version transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg I'd hazard that Python is tweaking some process or threading parameter that is inherited by subprocesses and disturbs transcode, which doesn't happen when calling fork() and execvp() from a C program, but am unfortunately unable to precisely diagnose the problem. Many thanks for considering. Regards, Florent [1] Plural because the transcode process spawns several children: tcextract, tcdemux, etc. ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-13 23:14 Message: Logged In: YES user_id=344921 Originator: NO The first thing to check is if the subprocesses have different sets of file descriptors when you launch them from Python and C, respectively. 
On Linux, do /proc/$thepid/fd in both cases and compare the output. Does it matter if you use close_fds=1? ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sat Jan 13 23:37:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 14:37:46 -0800 Subject: [ python-Bugs-1634343 ] subprocess swallows empty arguments under win32 Message-ID: Bugs item #1634343, was opened at 2007-01-12 21:40 Message generated for change (Comment added) made by astrand You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634343&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Patrick Mézard (pmezard) >Assigned to: Peter Åstrand (astrand) Summary: subprocess swallows empty arguments under win32 Initial Comment: Hello, empty arguments are not quoted by subprocess.list2cmdline. Therefore nothing is concatenated with other arguments. To reproduce it: test-empty.py """ import sys print sys.argv """ then: """ Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> p = subprocess.Popen(['python', 'test-empty.py', ''], stdout=subprocess.PIPE) >>> p.communicate() ("['test-empty.py']\r\n", None) """ To solve it: """ --- a\subprocess.py 2007-01-12 21:38:57.734375000 +0100 +++ b\subprocess.py 2007-01-12 21:34:08.406250000 +0100 @@ -499,7 +499,7 @@ if result: result.append(' ') - needquote = (" " in arg) or ("\t" in arg) + needquote = (" " in arg) or ("\t" in arg) or not arg if needquote: result.append('"') """ Regards, Patrick Mézard ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-13 23:37 Message: Logged In: YES user_id=344921 Originator: NO Fixed in revision 53412 (trunk) and 53413 (25-maint). 
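[Editor's note] The patched quoting rule can be exercised directly through subprocess.list2cmdline, which is importable on any platform; a minimal sketch of the fixed behavior (file names here are just the reporter's example):

```python
import subprocess

# With the fix applied, an empty argument is quoted as "" instead of
# vanishing, so the child process sees the same argv the parent passed in.
cmdline = subprocess.list2cmdline(['python', 'test-empty.py', ''])
print(cmdline)  # -> python test-empty.py ""
```

The same rule also coexists with the existing space/tab quoting, e.g. `['a b', '']` becomes `"a b" ""`.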
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634343&group_id=5470 From noreply at sourceforge.net Sat Jan 13 23:42:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 14:42:24 -0800 Subject: [ python-Bugs-1590864 ] import deadlocks when using PyObjC threads Message-ID: Bugs item #1590864, was opened at 2006-11-05 17:06 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael Tsai (michaeltsai) >Assigned to: Nobody/Anonymous (nobody) >Summary: import deadlocks when using PyObjC threads Initial Comment: When I use subprocess.py from a child thread, sometimes it deadlocks. 
I determined that the new process is blocked during an import: #0 0x90024427 in semaphore_wait_signal_trap () #1 0x90028414 in pthread_cond_wait () #2 0x004c77bf in PyThread_acquire_lock (lock=0x3189a0, waitflag=1) at Python/thread_pthread.h:452 #3 0x004ae2a6 in lock_import () at Python/import.c:266 #4 0x004b24be in PyImport_ImportModuleLevel (name=0xaad74 "errno", globals=0xbaed0, locals=0x502aa0, fromlist=0xc1378, level=-1) at Python/import.c:2054 #5 0x0048d2e2 in builtin___import__ (self=0x0, args=0x53724c90, kwds=0x0) at Python/bltinmodule.c:47 #6 0x0040decb in PyObject_Call (func=0xa94b8, arg=0x53724c90, kw=0x0) at Objects/abstract.c:1860 and that the code in question is in os.py: def _execvpe(file, args, env=None): from errno import ENOENT, ENOTDIR I think the problem is that since exec (the C function) hasn't yet been called in the new process, it has inherited from the fork a lock that's already held. The main process will eventually release its copy of the lock, but this will not unlock it in the new process, so it deadlocks. If I change os.py so that it imports the constants outside of _execvpe, the new process no longer blocks in this way. This is on Mac OS X 10.4.8. ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-13 23:42 Message: Logged In: YES user_id=344921 Originator: NO Since both the reporter and I believe that this is not a bug in the subprocess module, I'm stepping back. ---------------------------------------------------------------------- Comment By: Michael Tsai (michaeltsai) Date: 2007-01-07 18:09 Message: Logged In: YES user_id=817528 Originator: YES I don't have time at the moment to write sample code that reproduces this. But, FYI, I was using PyObjC to create the threads. It might not happen with "threading" threads. And second, I think it's a bug in os.py, not in subprocess.py. Sorry for the confusion. 
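[Editor's note] The workaround the reporter describes generalizes to a simple rule: make sure everything the forked child needs is imported before the fork, so the child never has to take the import lock it may have inherited in a locked state. A POSIX-only sketch of that pattern (the specific modules are only illustrative):

```python
import errno  # imported BEFORE the fork, so the child never triggers an import
import os

pid = os.fork()
if pid == 0:
    # Child: touch only modules that are already imported. A
    # `from errno import ENOENT` here could block forever if another
    # thread held the import lock at fork time, as in the report above.
    os._exit(0 if errno.ENOENT > 0 else 1)
else:
    _, status = os.waitpid(pid, 0)
    print(os.waitstatus_to_exitcode(status))
```

This requires no changes to the interpreter; the os.py fix the reporter suggests applies the same idea inside the standard library.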
---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-07 15:10 Message: Logged In: YES user_id=344921 Originator: NO Can you provide a test case or sample code that demonstrates this problem? I'm a bit unsure if this really is a subprocess bug or a more general Python bug. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1590864&group_id=5470 From noreply at sourceforge.net Sun Jan 14 00:37:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 15:37:29 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Nobody/Anonymous (nobody) Summary: Problem running a subprocess Initial Comment: Hello, I have a problem running a subprocess from Python (see below). I first ran into it with the subprocess module, but it's also triggered by a simple os.fork() followed by os.execvp(). So, what is the problem, exactly? 
I have written the exact same minimal program in C and in Python, which uses fork() and execvp() in the most straightforward way to run the following command: transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png (whose effect is to extract the 100th frame of /tmp/file.mpg and store it into snapshot.png) The C program runs fast with no error, while the one in Python takes from 60 to 145 times longer (!), and triggers error messages from transcode. This shouldn't happen, since both programs are merely calling transcode in the same way to perform the exact same thing. Experiments ------------ 1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 2 PS) [the first time fills the block IO cache], and store the output in extract_frame.output: % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.82s user 0.33s system 53% cpu 2.175 total % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.79s user 0.29s system 96% cpu 1.118 total Basically, this takes 1 or 2 seconds. extract_frame.output is attached. Second, I run the Python program (extract_frame.py) on the same .mpg file, and store the output in extract_frame.py.output: % time ./extract_frame.py >extract_frame.py.output 2>&1 ./extract_frame.py > extract_frame.py.output 2>& 1 81.59s user 25.98s system 66% cpu 2:42.51 total This takes more than 2 *minutes*, not seconds! (of course, the system is idle for all tests) In extract_frame.py.output, the following error message appears quickly after the process is started: failed to write Y plane of frame(demuxer.c) write program stream packet: Broken pipe which is in fact composed of two error messages, the second one starting at "(demuxer.c)". Once these messages are printed, the transcode subprocesses[1] seem to hang (with relatively high CPU usage), but eventually complete, after 2 minutes or so. 
There are no such error messages in extract_frame.output. 2. Same test with another .mpg file. As far as time is concerned, we have the same problem: [C program] % time ./extract_frame >extract_frame.output2 2>&1 ./extract_frame > extract_frame.output2 2>& 1 0.73s user 0.28s system 43% cpu 2.311 total [Python program] % time ./extract_frame.py >extract_frame.py.output2 2>&1 ./extract_frame.py > extract_frame.py.output2 2>& 1 92.84s user 12.20s system 76% cpu 2:18.14 total We also get the first error message in extract_frame.py.output2: failed to write Y plane of frame when running extract_frame.py, but this time, we do *not* have the second error message: (demuxer.c) write program stream packet: Broken pipe All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge for 2.3 and 2.4, vanilla Python 2.5). % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion' 2.5 (r25:51908, Jan 5 2007, 17:35:09) [GCC 3.3.5 (Debian 1:3.3.5-13)] 20500f0 % transcode --version transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg I'd hazard that Python is tweaking some process or threading parameter that is inherited by subprocesses and disturbs transcode, which doesn't happen when calling fork() and execvp() from a C program, but am unfortunately unable to precisely diagnose the problem. Many thanks for considering. Regards, Florent [1] Plural because the transcode process spawns several children: tcextract, tcdemux, etc. ---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-13 23:37 Message: Logged In: YES user_id=310088 Originator: YES Hi Peter, At the very beginning, it seems the fds are the same in the child processes running transcode in each implementation (C, Python). With the C version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:12 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:12 .. 
lrwx------ 1 flo users 64 2007-01-14 00:12 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:12 1 -> /home/flo/tmp/transcode-test/extract_frame.output l-wx------ 1 flo users 64 2007-01-14 00:12 2 -> /home/flo/tmp/transcode-test/extract_frame.output lr-x------ 1 flo users 64 2007-01-14 00:12 3 -> pipe:[41339] lr-x------ 1 flo users 64 2007-01-14 00:12 4 -> pipe:[41340] With the Python version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output lr-x------ 1 flo users 64 2007-01-14 00:05 3 -> pipe:[40641] lr-x------ 1 flo users 64 2007-01-14 00:05 4 -> pipe:[40642] That's the only thing I managed to get with the C version. But with the Python version, if I don't list the contents of /proc//fd immediately after the transcode process started, I get this instead: total 3 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output No pipes anymore. Only the 3 standard fds. Note: I performed these tests with the .mpg file that does *not* cause the "Broken pipe" message to appear; therefore, the broken pipe in question is probably unrelated to those we saw disappear in this experiment (transcode launches several processes such as tcdecode, tcextract, etc. all communicating via pipes; I suppose the "Broken pipe" message shows up when one of these programs fails, for reasons we have yet to discover). 
Regarding your mentioning of close_fds, if I am not mistaken, it's only an optional argument of subprocess.Popen(). I did try to set it to True when first running into the problem, and it didn't help. But now, I am using basic fork() and execvp() (see the attachments), so there is no such close_fds option, right? Thanks. Florent ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-13 22:14 Message: Logged In: YES user_id=344921 Originator: NO The first thing to check is if the subprocesses have different sets of file descriptors when you launch them from Python and C, respectively. On Linux, do /proc/$thepid/fd in both cases and compare the output. Does it matter if you use close_fds=1? ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 
Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sun Jan 14 01:08:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 16:08:34 -0800 Subject: [ python-Bugs-1619659 ] htonl, ntohl don't handle negative longs Message-ID: Bugs item #1619659, was opened at 2006-12-20 13:42 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Adam Olsen (rhamphoryncus) Assigned to: Nobody/Anonymous (nobody) Summary: htonl, ntohl don't handle negative longs Initial Comment: >>> htonl(-5) -67108865 >>> htonl(-5L) Traceback (most recent call last): File "", line 1, in ? OverflowError: can't convert negative value to unsigned long It works fine in 2.1 and 2.2, but fails in 2.3, 2.4, 2.5. htons, ntohs do not appear to have the bug, but I'm not 100% sure. ---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-13 19:08 Message: Logged In: YES user_id=6380 Originator: NO mark-roberts, where's your patch? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-29 21:15 Message: Logged In: YES user_id=1591633 Originator: NO Hmmm, yes, I see a problem. At the very least, I think we may be wanting some consistency between the acceptance of ints and longs. 
Also, I think we should return an unsigned long instead of just a long (which can be negative). I've got a patch right now to make htonl, ntohl, htons, and ntohs never return a negative number. I'm rather waffling over whether we should accept negative numbers at all in any of the functions. The behavior is undefined, and it is, after all, better not to guess what a user intended. However, consistency should be a desirable goal, and we should make the interface consistent for both ints and longs. Mark ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2006-12-28 16:37 Message: Logged In: YES user_id=12364 Originator: YES I forgot to mention it, but the only reason htonl should get passed a negative number is that it (and possibly struct?) produce a negative number. Changing them to always produce positive numbers may be an alternative solution. Or we may want to do both, always producing positive while also accepting negative. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-26 04:24 Message: Logged In: YES user_id=1591633 Originator: NO From the man page for htonl and friends: #include uint32_t htonl(uint32_t hostlong); uint16_t htons(uint16_t hostshort); uint32_t ntohl(uint32_t netlong); uint16_t ntohs(uint16_t netshort); Python does call these underlying functions in Modules/socketmodule.c. The problem comes from the fact that PyLong_AsUnsignedLong(), called in socket_htonl(), specifically checks that the value cannot be less than 0. The error checking was rather exquisite, I might add. 
- Mark ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 From noreply at sourceforge.net Sun Jan 14 08:36:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 23:36:47 -0800 Subject: [ python-Bugs-1619659 ] htonl, ntohl don't handle negative longs Message-ID: Bugs item #1619659, was opened at 2006-12-20 12:42 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Adam Olsen (rhamphoryncus) Assigned to: Nobody/Anonymous (nobody) Summary: htonl, ntohl don't handle negative longs Initial Comment: >>> htonl(-5) -67108865 >>> htonl(-5L) Traceback (most recent call last): File "", line 1, in ? OverflowError: can't convert negative value to unsigned long It works fine in 2.1 and 2.2, but fails in 2.3, 2.4, 2.5. htons, ntohs do not appear to have the bug, but I'm not 100% sure. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 01:36 Message: Logged In: YES user_id=1591633 Originator: NO It is here: https://sourceforge.net/tracker/index.php?func=detail&aid=1635058&group_id=5470&atid=305470 I apologize for not getting to this sooner, but I've been working like a frenzied devil at work. Things have been really hectic with our customers wanting year end reports. 
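[Editor's note] Until the interface is made consistent, the caller-side workaround discussed in this thread is to mask a negative value down to its 32-bit two's-complement pattern before calling socket.htonl (which, on versions that reject negative longs, raises OverflowError otherwise); a sketch, with the helper name being our own invention:

```python
import socket

def htonl_signed(x):
    # Reduce a possibly negative Python int to its unsigned 32-bit
    # bit pattern, then byte-swap it with the stock socket.htonl.
    return socket.htonl(x & 0xFFFFFFFF)

n = htonl_signed(-5)
assert n >= 0                                # result is always non-negative
assert socket.ntohl(n) == (-5 & 0xFFFFFFFF)  # round-trips to 0xFFFFFFFB
```

The round-trip property holds because ntohl and htonl are inverses on the unsigned 32-bit range, regardless of host byte order.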
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-13 18:08 Message: Logged In: YES user_id=6380 Originator: NO mark-roberts, where's your patch? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-29 20:15 Message: Logged In: YES user_id=1591633 Originator: NO Hmmm, yes, I see a problem. At the very least, I think we may be wanting some consistency between the acceptance of ints and longs. Also, I think we should return an unsigned long instead of just a long (which can be negative). I've got a patch right now to make htonl, ntohl, htons, and ntohs never return a negative number. I'm rather waffling over whether we should accept negative numbers at all in any of the functions. The behavior is undefined, and it is, after all, better not to guess what a user intended. However, consistency should be a desirable goal, and we should make the interface consistent for both ints and longs. Mark ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2006-12-28 15:37 Message: Logged In: YES user_id=12364 Originator: YES I forgot to mention it, but the only reason htonl should get passed a negative number is that it (and possibly struct?) produce a negative number. Changing them to always produce positive numbers may be an alternative solution. Or we may want to do both, always producing positive while also accepting negative. 
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-26 03:24 Message: Logged In: YES user_id=1591633 Originator: NO From the man page for htonl and friends: #include uint32_t htonl(uint32_t hostlong); uint16_t htons(uint16_t hostshort); uint32_t ntohl(uint32_t netlong); uint16_t ntohs(uint16_t netshort); Python does call these underlying functions in Modules/socketmodule.c. The problem comes from the fact that PyLong_AsUnsignedLong(), called in socket_htonl(), specifically checks that the value cannot be less than 0. The error checking was rather exquisite, I might add. - Mark ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 From noreply at sourceforge.net Sun Jan 14 16:09:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 07:09:04 -0800 Subject: [ python-Bugs-1635217 ] Little mistake in docs Message-ID: Bugs item #1635217, was opened at 2007-01-14 15:09 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635217&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Little mistake in docs Initial Comment: It would be nice to see an example of a setup() call on the page with the "requires" keyword argument description http://docs.python.org/dist/node10.html Like: setup(..., requires=["somepackage (>1.0, !=1.5)"], provides=["mypkg (1.1)"] ) There seems to be a mistake in the table of examples for the "provides" keyword on the same page - it looks like: mypkg (1.1 shouldn't this be mypkg (1.1)? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635217&group_id=5470 From noreply at sourceforge.net Sun Jan 14 21:00:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 12:00:49 -0800 Subject: [ python-Bugs-1635335 ] Add registry functions to windows postinstall Message-ID: Bugs item #1635335, was opened at 2007-01-14 20:00 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Feature Request Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Add registry functions to windows postinstall Initial Comment: It would be useful to add regkey_created() or regkey_modified() to windows postinstall scripts along with directory_created() and file_created(). Useful for adding the installed package to App Paths. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635335&group_id=5470 From noreply at sourceforge.net Sun Jan 14 21:28:45 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 12:28:45 -0800 Subject: [ python-Bugs-1635353 ] expanduser tests in test_posixpath fail if $HOME ends in a / Message-ID: Bugs item #1635353, was opened at 2007-01-14 21:28 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Marien Zwart (marienz) Assigned to: Nobody/Anonymous (nobody) Summary: expanduser tests in test_posixpath fail if $HOME ends in a / Initial Comment: test_expanduser in test_posixpath checks if expanduser('~/') equals expanduser('~') + '/'. expanduser checks if the home dir location ends in a / and skips the first character of the appended path if it does (so expanduser('~/foo') with HOME=/spork/ becomes /spork/foo, not /spork//foo). This means that if you run test_posixpath with HOME=/spork/ expanduser('~') and expanduser('~/') both return '/spork/' and the test fails because '/spork//' != '/spork/'. Possible fixes I can think of: either have expanduser strip the trailing slash from the home directory instead of skipping the first slash from the appended path (so still with HOME=/spork/ expanduser('~') would be '/spork'), or have the test check if expanduser('~') ends in a slash and check if expanduser('~') is equal to expanduser('~/') in that case. 
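[Editor's note] The first fix suggested in the report (stripping the trailing slash from the home directory) matches what current CPython's posixpath.expanduser does; a quick check, calling posixpath directly so it behaves the same on any OS, with /spork/ being just the reporter's example value:

```python
import os
import posixpath

os.environ['HOME'] = '/spork/'        # trailing slash, as in the report

# The trailing slash of $HOME is stripped before the rest of the path
# is appended, so no double slash can appear.
print(posixpath.expanduser('~'))      # -> /spork
print(posixpath.expanduser('~/foo'))  # -> /spork/foo
```

With this behavior, expanduser('~/') equals expanduser('~') + '/' even when $HOME ends in a slash, which is exactly the invariant the failing test asserts.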
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 From noreply at sourceforge.net Sun Jan 14 21:58:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 12:58:24 -0800 Subject: [ python-Bugs-1635363 ] Add command line help to windows uninstall binary Message-ID: Bugs item #1635363, was opened at 2007-01-14 20:58 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635363&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Add command line help to windows uninstall binary Initial Comment: It is impossible to remove a package installed with an uninstall binary created by Distutils unless you know that you need to specify the -u switch. "E:\ENV\Python24\Removescons.exe" -u "E:\ENV\Python24\scons-wininst.log" If there are any additional switches, they could be displayed in a MsgBox instead of/along with the error message.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635363&group_id=5470 From noreply at sourceforge.net Mon Jan 15 00:08:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 15:08:54 -0800 Subject: [ python-Bugs-1624674 ] webbrowser.open_new() suggestion Message-ID: Bugs item #1624674, was opened at 2006-12-29 18:03 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1624674&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Imre P?ntek (imi1984) Assigned to: Nobody/Anonymous (nobody) Summary: webbrowser.open_new() suggestion Initial Comment: Hello, under Linux if I use webbrowser.open_new('...') Konqueror gets invoked. When invoking Konqueror (maybe you probe first, but anyway) you assume that the user has a properly installed KDE. But if you assume the user has a properly installed KDE, you have a better opportunity to open a webpage in the browser preferred by the user -- no matter what it is. Try this one: kfmclient exec http://sourceforge.net/ Using this, the client associated with .html in KControl gets invoked. I suppose that (because of the ability to customize the browser) this way, where available, would be better than guessing which browser the user would prefer. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 17:08 Message: Logged In: YES user_id=1591633 Originator: NO A quick look at the code makes me think that it does try to run kfmclient first.
Specifically, line 351 of webbrowser.py tries kfmclient, while line 363 of webbrowser.py opens konqueror. I don't really run KDE, Gnome, or Windows, so I'm not a lot of help for testing this for you. I can, however, tell you that it does the "right thing" for me, in that it opens Firefox. When I did Python development on Windows, it also "did the right thing" there. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1624674&group_id=5470 From noreply at sourceforge.net Mon Jan 15 01:04:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 16:04:54 -0800 Subject: [ python-Feature Requests-1634717 ] csv.DictWriter: Include offending name in error message Message-ID: Feature Requests item #1634717, was opened at 2007-01-13 08:53 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634717&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Gabriel Genellina (gagenellina) Assigned to: Nobody/Anonymous (nobody) Summary: csv.DictWriter: Include offending name in error message Initial Comment: In csv.py, class DictWriter, method _dict_to_list, when rowdict contains a key that is not a known field name, a ValueError is raised, but no reference to the offending name is given. As the code iterates along the dict keys, and stops at the first unknown one, it's trivial to include such information.
Replace the lines:

    if k not in self.fieldnames:
        raise ValueError, "dict contains fields not in fieldnames"

with:

    if k not in self.fieldnames:
        raise ValueError, "dict contains field not in fieldnames: %r" % k

---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 18:04 Message: Logged In: YES user_id=1591633 Originator: NO Even better would be a list of all extraneous fields. I offered patch 1635454. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1634717&group_id=5470 From noreply at sourceforge.net Mon Jan 15 01:40:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 16:40:44 -0800 Subject: [ python-Bugs-1633628 ] time.strftime() accepts format which time.strptime doesn't Message-ID: Bugs item #1633628, was opened at 2007-01-11 17:44 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: time.strftime() accepts format which time.strptime doesn't Initial Comment: [forwarded from http://bugs.debian.org/354636] time.strftime() accepts '%F %T' as a format but time.strptime() doesn't; if the rule is "everything strftime() accepts, strptime() must also accept", then that is bad. Check this:

darwin:~# python2.4
Python 2.4.2 (#2, Nov 20 2005, 17:04:48)
[GCC 4.0.3 20051111 (prerelease) (Debian 4.0.2-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> format = '%F %T'
>>> t = time.strftime(format)
>>> t
'2006-02-27 18:09:37'
>>> time.strptime(t, format)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/_strptime.py", line 287, in strptime
    format_regex = time_re.compile(format)
  File "/usr/lib/python2.4/_strptime.py", line 264, in compile
    return re_compile(self.pattern(format), IGNORECASE)
  File "/usr/lib/python2.4/_strptime.py", line 256, in pattern
    processed_format = "%s%s%s" % (processed_format,
KeyError: 'F'
>>>

---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 18:40 Message: Logged In: YES user_id=1591633 Originator: NO For the record: %F = '%Y-%m-%d' and %T = '%H:%M:%S'. Patch 1635473: http://sourceforge.net/tracker/index.php?func=detail&aid=1635473&group_id=5470&atid=305470 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 From noreply at sourceforge.net Mon Jan 15 03:33:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 18:33:09 -0800 Subject: [ python-Bugs-1603688 ] SaveConfigParser.write() doesn't quote %-Sign Message-ID: Bugs item #1603688, was opened at 2006-11-27 06:15 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Rebecca Breu (rbreu) Assigned to: Nobody/Anonymous (nobody) Summary: SaveConfigParser.write() doesn't quote %-Sign Initial Comment:

>>> parser = ConfigParser.SafeConfigParser()
>>> parser.add_section("test")
>>> parser.set("test", "foo", "bar%bar")
>>> parser.write(open("test.config", "w"))
>>> parser2 = ConfigParser.SafeConfigParser()
>>> parser2.readfp(open("test.config"))
>>> parser.get("test", "foo")
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/ConfigParser.py", line 525, in get
    return self._interpolate(section, option, value, d)
  File "/usr/lib/python2.4/ConfigParser.py", line 593, in _interpolate
    self._interpolate_some(option, L, rawval, section, vars, 1)
  File "/usr/lib/python2.4/ConfigParser.py", line 634, in _interpolate_some
    "'%%' must be followed by '%%' or '(', found: %r" % (rest,))
ConfigParser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%bar'

Problem: SaveConfigParser saves the string "bar%bar" as is and not as "bar%%bar". ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 20:33 Message: Logged In: YES user_id=1591633 Originator: NO I'm not sure that automagically changing their input is such a great idea. I'm -0 for automagically changing their input, but +1 for raising ValueError when the input contains a string that can't be properly interpolated. I've implemented the patch both ways. Anyone else have an opinion about this? Examples of such malformatted strings include bar%bar and bar%.
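For reference, the modern Python 3 configparser module took the ValueError route discussed in this thread: with the default BasicInterpolation, set() rejects values whose '%' signs cannot be interpolated, and a hand-escaped '%%' round-trips back to a literal '%'. A small sketch of that behaviour:

```python
import configparser

parser = configparser.ConfigParser()  # interpolation enabled by default
parser.add_section("test")

# A bare '%' is rejected at set() time, rather than being written out
# silently and then blowing up later in get(), as in the report above.
try:
    parser.set("test", "foo", "bar%bar")
    raised = False
except ValueError:
    raised = True
assert raised

# Escaping the '%' by hand works, and get() interpolates it back.
parser.set("test", "foo", "bar%%bar")
assert parser.get("test", "foo") == "bar%bar"
```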
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 From noreply at sourceforge.net Sun Jan 14 05:20:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 13 Jan 2007 20:20:50 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 07:34 Message generated for change (Comment added) made by gagenellina You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from a tty. The test program:

import sys
for line in sys.stdin:
    print line,
print "eof"

A sample session:

liw at esme$ python foo.py
foo    <--- I pressed Enter and then Ctrl-D
foo    <--- then this appeared, but no more
eof    <--- this only came when I pressed Ctrl-D a second time
liw at esme$

Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-14 01:20 Message: Logged In: YES user_id=479790 Originator: NO Same thing occurs on Windows.
Even worse, if the line does not end with CR, Ctrl-Z (EOF in Windows, equivalent to Ctrl-D) has to be pressed 3 times:

D:\Temp>python foo.py
foo    <--- I pressed Enter
^Z     <--- I pressed Ctrl-Z and then Enter again
foo    <--- this appeared
^Z     <--- I pressed Ctrl-Z and then Enter again

D:\Temp>python foo.py
foo^Z  <--- I pressed Ctrl-Z and then Enter
^Z     <--- cursor stays here; I pressed Ctrl-Z and then Enter again
^Z     <--- cursor stays here; I pressed Ctrl-Z and then Enter again
foo
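The double-EOF behaviour reported here came from the file iterator's internal read-ahead buffering; the common workaround of the time was to iterate with iter(f.readline, ''), which issues one readline() per loop and sees EOF immediately. A sketch of that pattern (demonstrated on an in-memory stream so it is self-contained; for the bug's scenario you would pass sys.stdin):

```python
import io

def lines_unbuffered(stream):
    # iter(callable, sentinel) calls readline() until it returns '',
    # avoiding the file iterator's read-ahead buffer that caused the
    # delayed EOF described above.
    return iter(stream.readline, "")

src = io.StringIO("foo\nbar\n")
assert list(lines_unbuffered(src)) == ["foo\n", "bar\n"]
```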
However, when I thought about how I use ConfigParser, I realized that it would be far nicer if it simply worked. I'm +0.5 to ValueError, and +1 to munging the values. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635639&group_id=5470 From noreply at sourceforge.net Mon Jan 15 08:44:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 23:44:10 -0800 Subject: [ python-Bugs-1635639 ] ConfigParser does not quote % Message-ID: Bugs item #1635639, was opened at 2007-01-15 01:43 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635639&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Mark Roberts (mark-roberts) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser does not quote % Initial Comment: This is covered by bug 1603688 (https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470) I implemented 2 versions of this patch. One version raises ValueError when an invalid interpolation syntax is encountered (such as foo%, foo%bar, and %foo, but not %%foo and %(dir)foo). The other version simply replaces appropriate %s with %%s. Initially, I believed ValueError was the appropriate way to go with this. However, when I thought about how I use ConfigParser, I realized that it would be far nicer if it simply worked. I'm +0.5 to ValueError, and +1 to munging the values. 
---------------------------------------------------------------------- >Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 01:44 Message: Logged In: YES user_id=1591633 Originator: YES File Added: bug_1603688_cfgparser_munges.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635639&group_id=5470 From noreply at sourceforge.net Mon Jan 15 08:45:40 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 14 Jan 2007 23:45:40 -0800 Subject: [ python-Bugs-1603688 ] SaveConfigParser.write() doesn't quote %-Sign Message-ID: Bugs item #1603688, was opened at 2006-11-27 06:15 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Rebecca Breu (rbreu) Assigned to: Nobody/Anonymous (nobody) Summary: SaveConfigParser.write() doesn't quote %-Sign Initial Comment: >>> parser = ConfigParser.SafeConfigParser() >>> parser.add_section("test") >>> parser.set("test", "foo", "bar%bar") >>> parser.write(open("test.config", "w")) >>> parser2 = ConfigParser.SafeConfigParser() >>> parser2.readfp(open("test.config")) >>> parser.get("test", "foo") Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.4/ConfigParser.py", line 525, in get return self._interpolate(section, option, value, d) File "/usr/lib/python2.4/ConfigParser.py", line 593, in _interpolate self._interpolate_some(option, L, rawval, section, vars, 1) File "/usr/lib/python2.4/ConfigParser.py", line 634, in _interpolate_some "'%%' must be followed by '%%' or '(', found: %r" % (rest,)) ConfigParser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%bar' Problem: SaveConfigParser saves the string "bar%bar" as is and not as "bar%%bar". ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 01:45 Message: Logged In: YES user_id=1591633 Originator: NO Initially, I believed ValueError was the appropriate way to go with this. However, when I thought about how I use ConfigParser, I realized that it would be far nicer if it simply worked. See the patches in 1635639. http://sourceforge.net/tracker/index.php?func=detail&aid=1635639&group_id=5470&atid=105470 Good catch on this. I haven't caught it and I've been using ConfigParser for a while now. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 20:33 Message: Logged In: YES user_id=1591633 Originator: NO I'm not sure that automagically changing their input is such a great idea. I'm -0 for automagically changing their input, but +1 for raising ValueError when the input contains a string that can't be properly interpolated. I've implemented the patch both ways. Anyone else have an opinion about this? Examples of such malformatted strings include bar%bar and bar%. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 From noreply at sourceforge.net Mon Jan 15 11:26:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 02:26:05 -0800 Subject: [ python-Bugs-1635741 ] Interpreter seems to leak references after finalization Message-ID: Bugs item #1635741, was opened at 2007-01-15 10:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635741&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: B Sizer (kylotan) Assigned to: Nobody/Anonymous (nobody) Summary: Interpreter seems to leak references after finalization Initial Comment: This C code:

#include <Python.h>

int main(int argc, char *argv[])
{
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
    Py_Initialize(); Py_Finalize();
}

Produces this output:

[7438 refs]
[7499 refs]
[7550 refs]
[7601 refs]
[7652 refs]
[7703 refs]
[7754 refs]

A similar program configured to call Py_Initialize()/Py_Finalize() 1000 times ends up with:

...
[58295 refs]
[58346 refs]
[58397 refs]

This is with a fresh debug build of Python 2.5.0 on Windows XP, using Visual C++ 2003.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635741&group_id=5470 From noreply at sourceforge.net Mon Jan 15 13:48:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 04:48:21 -0800 Subject: [ python-Bugs-1633605 ] logging module / wrong bytecode? Message-ID: Bugs item #1633605, was opened at 2007-01-11 23:06 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: logging module / wrong bytecode? Initial Comment: [forwarded from http://bugs.debian.org/390152] seen with python2.4 and python2.5 on debian unstable

import logging
logging.basicConfig(level=logging.DEBUG, format='%(pathname)s:%(lineno)d')
logging.info('whoops')

The output when the logging/__init__.pyc file exists is: logging/__init__.py:1072 and when the __init__.pyc is deleted the output becomes: tst.py:5 ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 12:48 Message: Logged In: YES user_id=308438 Originator: NO It's also possible that symlinks mean that the values stored in a .pyc file are different to the expected values. See bug #1616422 - this appears to be the same issue. It's not about the frames - it's about the paths stored in the .pyc files. If at any time the path (the module's __file__ attribute) in the .pyc file is different to the actual path to the .py file, you would get this issue.
It's not a logging problem per se - it's that the .py path and the path in the .pyc files don't match when they should. Logging just happens to be one of the packages which tries to use the information. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 21:22 Message: Logged In: YES user_id=764593 Originator: NO Does debian by any chance (try to?) store the .py and .pyc files in different directories? The second result is correct; the first suggests that it somehow got confused about which frames to ignore. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633605&group_id=5470 From noreply at sourceforge.net Mon Jan 15 14:59:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 05:59:25 -0800 Subject: [ python-Bugs-1635892 ] description of the beta distribution is incorrect Message-ID: Bugs item #1635892, was opened at 2007-01-15 06:59 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635892&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: elgordo (azgordo) Assigned to: Nobody/Anonymous (nobody) Summary: description of the beta distribution is incorrect Initial Comment: In the random module, the documentation is incorrect. Specifically, the limits on the parameters for the beta-distribution should be changed from ">-1" to ">0". This parallels the (correct) limits on the parameters for the gamma-distribution.
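The corrected limits for item #1635892 can be checked against random.betavariate directly: both parameters must be > 0, matching gammavariate (on which betavariate is built). A quick sanity check:

```python
import random

random.seed(12345)

# Valid parameters (> 0) yield a sample in the unit interval.
x = random.betavariate(0.5, 0.5)
assert 0.0 <= x <= 1.0

# A parameter that satisfies the old documented limit (> -1) but is
# not > 0 is rejected, confirming the ">0" limit proposed above.
try:
    random.betavariate(-0.5, 1.0)
    rejected = False
except ValueError:
    rejected = True
assert rejected
```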
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635892&group_id=5470 From noreply at sourceforge.net Mon Jan 15 17:44:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 08:44:13 -0800 Subject: [ python-Feature Requests-1567331 ] logging.RotatingFileHandler has no "infinite" backupCount Message-ID: Feature Requests item #1567331, was opened at 2006-09-28 21:36 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Vinay Sajip (vsajip) Summary: logging.RotatingFileHandler has no "infinite" backupCount Initial Comment: It seems to me that logging.RotatingFileHandler should have a way to spell "never delete old log files". This is useful in situations where you want an external process (manual or automatic) make decisions about deleting log files. ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 16:44 Message: Logged In: YES user_id=308438 Originator: NO The problem with this is that on rollover, RotatingFileHandler renames old logs: rollover.log.3 -> rollover.log.4, rollover.log.2 -> rollover.log.3, rollover.log.1 -> rollover.log.2, rollover.log -> rollover.log.1, and a new rollover.log is opened. With an arbitrary number of old log files, this leads to arbitrary renaming time - which could cause long pauses due to logging, not a good idea. If you are using e.g. 
logrotate or newsyslog, or a custom program to do logfile rotation, you can use the new logging.handlers.WatchedFileHandler handler (meant for use on Unix/Linux only - on Windows, logfiles can't be renamed or moved while in use and so the requirement doesn't arise) which watches the logged-to file to see when it changes. This has recently been checked into SVN trunk. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 From noreply at sourceforge.net Mon Jan 15 17:48:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 08:48:59 -0800 Subject: [ python-Feature Requests-1553380 ] Print full exceptions as they occur in logging Message-ID: Feature Requests item #1553380, was opened at 2006-09-06 12:57 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1553380&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.6 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Michael Hoffman (hoffmanm) Assigned to: Vinay Sajip (vsajip) Summary: Print full exceptions as they occur in logging Initial Comment: Sometimes exceptions occur when using logging that are caused by the user code. However, logging catches these exceptions and does not give a clue as to where the error occurred. Printing full exceptions as suggested in RFE http://www.python.org/sf/1553375 would be a big help. 
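The rename cascade described in the comment on item #1567331 above (rollover.log -> rollover.log.1, rollover.log.1 -> rollover.log.2, and so on, with the oldest backup deleted) is easy to observe with a small backupCount. A self-contained sketch:

```python
import logging
import logging.handlers
import os
import tempfile

log_dir = tempfile.mkdtemp()
path = os.path.join(log_dir, "rollover.log")

# A tiny maxBytes forces frequent rollovers; backupCount=2 keeps at most
# rollover.log.1 and rollover.log.2, deleting anything older on each
# rollover -- the bounded renaming cost described above.
handler = logging.handlers.RotatingFileHandler(path, maxBytes=64, backupCount=2)
logger = logging.getLogger("rollover-demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(handler)

for i in range(50):
    logger.info("message %03d", i)
handler.close()

backups = sorted(f for f in os.listdir(log_dir) if f.startswith("rollover.log."))
assert os.path.exists(path)
assert backups == ["rollover.log.1", "rollover.log.2"]
```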
---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 16:48 Message: Logged In: YES user_id=308438 Originator: NO Logging now catches very few exceptions: if logging.raiseExceptions is set to 1 (the default), exceptions are generally raised. There have been recent changes in logging to reduce the number of bare except: clauses, so I am closing this item as I believe it has been adequately addressed. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1553380&group_id=5470 From noreply at sourceforge.net Mon Jan 15 20:01:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 11:01:57 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. 
Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) 
aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? 
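The "trigger a warning" idea floated above for platforms whose file objects lack truncate() could look something like this (a hypothetical sketch; the helper name is invented and this is not actual mailbox.py code):

```python
import warnings

def check_truncate(f):
    """Return True if f supports truncate(); otherwise warn that the
    fallback code path is unsafe and may invalidate message keys."""
    if not hasattr(f, 'truncate'):
        warnings.warn('file.truncate() unavailable; mailbox flush() may '
                      'momentarily unlock the file and invalidate keys',
                      RuntimeWarning)
        return False
    return True
```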
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-toc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right?
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
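The length check described in this comment can be sketched as follows. This is a hypothetical illustration of the idea with invented class and method names, not the attached length-checking.diff:

```python
import os

class ExternalClashError(Exception):
    """Raised when the mailbox file changed behind our back."""

class LengthCheckingMbox:
    """Remember the file size after each write we make ourselves, and
    refuse to rewrite the file if anyone else changed it meanwhile."""

    def __init__(self, path):
        self._path = path
        self._file = open(path, 'ab+')
        self._file_length = os.fstat(self._file.fileno()).st_size

    def append(self, data):
        # every write of our own updates the remembered size
        self._file.seek(0, 2)
        self._file.write(data)
        self._file.flush()
        self._file_length = os.fstat(self._file.fileno()).st_size

    def assert_unchanged(self):
        # called at the start of flush(): bail out rather than clobber
        # messages another program delivered in the meantime
        if os.fstat(self._file.fileno()).st_size != self._file_length:
            raise ExternalClashError('mailbox size changed externally')
```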
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
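The key-preserving table-of-contents regeneration discussed above (keys issued to the program mapped onto file sequence numbers, with same-numbered messages assumed identical) might look like this hypothetical sketch; the function and variable names are invented, not taken from the patch:

```python
def rebuild_user_toc(user_toc, next_user_key, new_message_count):
    """After re-reading the mailbox: keep previously issued keys stable,
    drop keys whose messages vanished, and issue fresh keys for messages
    that another process added."""
    # drop user keys pointing past the end of the re-read mailbox
    user_toc = {k: seq for k, seq in user_toc.items()
                if seq < new_message_count}
    seen = set(user_toc.values())
    for seq in range(new_message_count):
        if seq not in seen:                 # a newly appeared message
            user_toc[next_user_key] = seq
            next_user_key += 1
    return user_toc, next_user_key
```

Note that, as the thread points out, this still assumes messages keep their sequence numbers, so deletions made by another process before locking can still cause the wrong message to be addressed.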
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
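For reference, the fcntl-style locking that both mailbox.py and Postfix rely on in this thread works roughly like this (a minimal POSIX-only sketch; the path and message are invented):

```python
import fcntl
import os

path = '/tmp/demo-mbox'                     # hypothetical mailbox path
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(fd, fcntl.LOCK_EX)              # block until the lock is ours
try:
    os.lseek(fd, 0, os.SEEK_END)
    os.write(fd, b'From demo\n\nbody\n')    # deliver while holding the lock
finally:
    fcntl.lockf(fd, fcntl.LOCK_UN)
    os.close(fd)                            # closing *any* fd on this file
                                            # would also drop the lock
```

The last comment in the code is the trap discussed above: fcntl locks only protect programs that lock the same file, so renaming a fresh file over the mailbox (or closing an unrelated descriptor) silently defeats them.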
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Mon Jan 15 20:15:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 11:15:47 -0800 Subject: [ python-Bugs-1633628 ] time.strftime() accepts format which time.strptime doesnt Message-ID: Bugs item #1633628, was opened at 2007-01-11 15:44 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: time.strftime() accepts format which time.strptime doesnt Initial Comment: [forwarded from http://bugs.debian.org/354636] time.strftime() accepts '%F %T' as a format but time.strptime() doesn't; if the rule is "everything strftime accepts, strptime must also accept", then that is bad. Check this:

darwin:~# python2.4
Python 2.4.2 (#2, Nov 20 2005, 17:04:48)
[GCC 4.0.3 20051111 (prerelease) (Debian 4.0.2-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import time
>>> format = '%F %T'
>>> t = time.strftime(format)
>>> t
'2006-02-27 18:09:37'
>>> time.strptime(t,format)
Traceback (most recent call last):
  File "", line 1, in ?
  File "/usr/lib/python2.4/_strptime.py", line 287, in strptime
    format_regex = time_re.compile(format)
  File "/usr/lib/python2.4/_strptime.py", line 264, in compile
    return re_compile(self.pattern(format), IGNORECASE)
  File "/usr/lib/python2.4/_strptime.py", line 256, in pattern
    processed_format = "%s%s%s" % (processed_format,
KeyError: 'F'
>>>

---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-15 11:15 Message: Logged In: YES user_id=357491 Originator: NO It is not a goal of strptime to support directives that are not explicitly documented as supported. time.strftime uses the platform's implementation, which can implement more directives than documented, but strptime is meant to be fully platform-independent and handles only the documented directives. Trying to support all directives for all platforms is an exercise in futility considering how many there are and how differently they might be implemented. As both directives mentioned here are not documented as supported, I am closing this as invalid. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 16:40 Message: Logged In: YES user_id=1591633 Originator: NO For the record: %F = '%Y-%m-%d', %T = '%H:%M:%S'.
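Mark's equivalents above suggest a simple workaround: rewrite the shorthand directives into the portable ones before handing the format to strptime. A hedged sketch (the helper name and table are invented for illustration):

```python
import time

# %F and %T are shorthands some platform strftime()s accept but
# strptime() does not document; expand them to portable directives.
PORTABLE = {'%F': '%Y-%m-%d', '%T': '%H:%M:%S'}

def portable_format(fmt):
    for shorthand, expansion in PORTABLE.items():
        fmt = fmt.replace(shorthand, expansion)
    return fmt

fmt = portable_format('%F %T')   # '%Y-%m-%d %H:%M:%S'
t = time.strftime(fmt)
parsed = time.strptime(t, fmt)   # round-trips without a KeyError
```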
Patch 1635473: http://sourceforge.net/tracker/index.php?func=detail&aid=1635473&group_id=5470&atid=305470 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633628&group_id=5470 From noreply at sourceforge.net Tue Jan 16 03:17:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 15 Jan 2007 18:17:02 -0800 Subject: [ python-Bugs-1635639 ] ConfigParser does not quote % Message-ID: Bugs item #1635639, was opened at 2007-01-15 01:43 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635639&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Mark Roberts (mark-roberts) Assigned to: Nobody/Anonymous (nobody) Summary: ConfigParser does not quote % Initial Comment: This is covered by bug 1603688 (https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470) I implemented 2 versions of this patch. One version raises ValueError when an invalid interpolation syntax is encountered (such as foo%, foo%bar, and %foo, but not %%foo and %(dir)foo). The other version simply replaces appropriate %s with %%s. Initially, I believed ValueError was the appropriate way to go with this. However, when I thought about how I use ConfigParser, I realized that it would be far nicer if it simply worked. I'm +0.5 to ValueError, and +1 to munging the values. ---------------------------------------------------------------------- >Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 20:17 Message: Logged In: YES user_id=1591633 Originator: YES For the record, this was supposed to be a patch. 
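The interpolation rule at issue in this report can be demonstrated with the modern configparser module (the Python 3 spelling of ConfigParser, which keeps the same default '%' interpolation syntax); this is an illustration of the behavior, not the attached patch:

```python
import configparser

cp = configparser.ConfigParser()            # default '%' interpolation
cp.read_string('[s]\nok = 100%%\nbad = 100%\n')

print(cp.get('s', 'ok'))                    # '%%' collapses to '%': 100%
try:
    cp.get('s', 'bad')                      # a bare '%' is rejected
except configparser.InterpolationSyntaxError:
    print('bare % rejected')
```

The patch's two options correspond to the two sides shown here: either raise on the bare '%' or munge it into '%%' so it "simply works".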
I don't know if the admins have any way of moving it to that category. I guess that explained the funky categories and groups. Sorry for the inconvenience. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 01:44 Message: Logged In: YES user_id=1591633 Originator: YES File Added: bug_1603688_cfgparser_munges.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635639&group_id=5470 From noreply at sourceforge.net Tue Jan 16 12:42:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 03:42:03 -0800 Subject: [ python-Bugs-494589 ] os.path.expandvars deletes things on w32 Message-ID: Bugs item #494589, was opened at 2001-12-18 15:29 Message generated for change (Comment added) made by sjoerd You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Michael McCandless (mikemccand) Assigned to: Nobody/Anonymous (nobody) Summary: os.path.expandvars deletes things on w32 Initial Comment: Try this: import os.path print os.path.expandvars('foo$doesnotexist') On FreeBSD, Python 2.1, I get: 'foo$doesnotexist' But on WIN32, Python 2.1, I get: 'foo' The docs explicitly states that variables that are not found will be left in place ... but on win32 that appears to not be the case. ---------------------------------------------------------------------- >Comment By: Sjoerd Mullender (sjoerd) Date: 2007-01-16 12:42 Message: Logged In: YES user_id=43607 Originator: NO I got bit by this today and saw there was a bug report of over 6 years old. 
The patch is trivial, though. The attached patch may not solve the problem that the various implementations of expandvars are made exactly the same again, but it does solve the problem that this implementation doesn't do what it promises in the doc string. It also solves the problem noted by Tim of two consecutive non-existing variables being treated differently. File Added: ntpath.patch ---------------------------------------------------------------------- Comment By: Behrang Dadsetan (bdadsetan) Date: 2003-06-22 15:45 Message: Logged In: YES user_id=806514 tim_one is right. There are plenty of dodgy things hiding behind the os.path world, especially when it comes to os.path.expandvars(). There are two problems here. - A mismatch between the docstrings of the different implementations of expandvars and the "official" os.path.expandvars documentation. - The ntpath and dospath implementations are buggy when compared to their comments/docstrings. About the first problem, the inconsistency created some time ago between the different implementations makes it difficult to choose a solution. Everyone will probably agree that all the platform-specific implementations of expandvars should have the same functionality. The one that should be taken over will probably need to be announced by the BDFL. A rule which should have prevented this, and on which I believe we will all agree: same interface = same documentation -> same functionality. To implement this, either copy-paste exactly the same expandvars definition from one platform to another (NT, DOS, POSIX), or somehow arrange that when there is no specific implementation for the platform, a "default" python implementation is used at the os.path level. To maximize the fruits of my small work, I would of course prefer that the version below becomes the standard and that the documentation gets updated.
To be complete: should the documentation remain unchanged and the implementations for dos and nt get adapted (copied from posix), the mac implementation could remain unchanged. But I feel its docstring and its documentation should be in line with the rest of the implementations. So my viewpoint -> same interface, same documentation. For the second problem - as of now a real bug whatever we decide - I wrote within this comment (hereafter) a new expandvars version which fits the docstring documentation of dospath.py and the comments of ntpath.py. Sorry, you will be getting no patch from me at the moment since sourceforge's anonymous CVS access does not like me. Please note that my version borrows a lot from the posixpath.py implementation and my changes are the ones of a python amateur who is open to criticism.

#expandvars() implementation
_varprog = None
_findquotes = None

def expandvars(path):
    """Expand paths containing shell variable substitutions.
    The following rules apply:
        - no expansion within single quotes
        - no escape character, except for '$$' which is translated into '$'
        - ${varname} is accepted.
        - varnames can be made out of letters, digits and the character '_'"""
    global _varprog, _findquotes
    if '$' not in path:
        return path
    if not _varprog:
        import re
        _varprog = re.compile(r'\$(\w+|\{[^}]*\}|\$)')
        _findquotes = re.compile("'.*?'")
    quoteareas = []
    i = 0
    while 1:
        quotearea = _findquotes.search(path, i)
        if not quotearea:
            break
        (i, j) = quotearea.span(0)
        quoteareas.append((i, j))
        i = j
    i = 0
    while 1:
        m = _varprog.search(path, i)
        if not m:
            break
        i, j = m.span(0)
        insidequotes = None
        for (quotebegin, quoteend) in quoteareas:
            if quotebegin < i and quoteend > i:
                insidequotes = 1
                break
        if insidequotes:
            i = j
            continue
        name = m.group(1)
        if name[:1] == '$':
            path = path[:i] + '$' + path[j:]
            i = i + 1
        else:
            if name[:1] == '{' and name[-1:] == '}':
                name = name[1:-1]
            if os.environ.has_key(name):
                tail = path[j:]
                path = path[:i] + os.environ[name]
                i = len(path)
                path = path + tail
            else:
                i = j
    return path

---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-19 07:56 Message: Logged In: YES user_id=31435 Another bug: with two adjacent envars that do exist, only the first is expanded (on Windows):

>>> os.path.expandvars('$TMP$TMP')
'c:\\windows\\TEMP$TMP'
>>>

Another bug: the Windows expandvars doesn't expand envars in single quotes, but the posixpath flavor does:

>>> ntpath.expandvars("'$TMP'")
"'$TMP'"
>>> posixpath.expandvars("'$TMP'")
"'c:\\windows\\TEMP'"
>>>

Another bug: $$ is an escape sequence (meaning a single $) on Windows but not on Unix:

>>> ntpath.expandvars('$$')
'$'
>>> posixpath.expandvars('$$')
'$$'
>>>

Unassigning from me, as this is a bottomless pit spanning platforms and bristling with backward-compatibility traps no matter what's done about it. Somebody who cares enough should write a PEPlet to sort out the mess, else I'd just leave it alone.
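The documented behavior at issue (unknown variables left in place rather than deleted, and adjacent variables all expanded) can be illustrated with a small self-contained sketch; this is hypothetical demonstration code, not any of the proposed patches:

```python
import re

def expandvars_posix_like(path, env):
    """Expand $NAME and ${NAME} from env; leave unknown variables
    untouched instead of deleting them (the Windows bug reported here),
    and expand adjacent variables (Tim's '$TMP$TMP' case)."""
    def repl(m):
        name = m.group(1) or m.group(2)
        return env.get(name, m.group(0))   # unknown: keep the original text
    return re.sub(r'\$(\w+)|\$\{([^}]*)\}', repl, path)

print(expandvars_posix_like('foo$doesnotexist', {}))       # left in place
print(expandvars_posix_like('$TMP$TMP', {'TMP': '/tmp'}))  # both expand
```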
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-18 15:43 Message: Logged In: YES user_id=6380 Hm, I do understand it, the code is broken (compared to the spec). No time to fix it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-18 15:35 Message: Logged In: YES user_id=6380 Confirmed, also in 2.2. I don't understand it, the code looks OK. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 From noreply at sourceforge.net Tue Jan 16 16:50:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 07:50:53 -0800 Subject: [ python-Bugs-494589 ] os.path.expandvars deletes things on w32 Message-ID: Bugs item #494589, was opened at 2001-12-18 09:29 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open >Resolution: Accepted Priority: 5 Private: No Submitted By: Michael McCandless (mikemccand) Assigned to: Nobody/Anonymous (nobody) Summary: os.path.expandvars deletes things on w32 Initial Comment: Try this: import os.path print os.path.expandvars('foo$doesnotexist') On FreeBSD, Python 2.1, I get: 'foo$doesnotexist' But on WIN32, Python 2.1, I get: 'foo' The docs explicitly states that variables that are not found will be left in place ... but on win32 that appears to not be the case. 
---------------------------------------------------------------------- >Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-16 10:50 Message: Logged In: YES user_id=6380 Originator: NO Looks good. Sjoerd, can you check that in yourself or did you give up your privileges? ---------------------------------------------------------------------- Comment By: Sjoerd Mullender (sjoerd) Date: 2007-01-16 06:42 Message: Logged In: YES user_id=43607 Originator: NO I got bit by this today and saw there was a bug report of over 6 years old. The patch is trivial, though. The attached patch may not solve the problem that the various implementations of expandvars are made exactly the same again, but it does solve the problem that this implementation doesn't do what it promises in the doc string. It also solves the problem noted by Tim of two consecutive non-existing variables being treated differently. File Added: ntpath.patch ---------------------------------------------------------------------- Comment By: Behrang Dadsetan (bdadsetan) Date: 2003-06-22 09:45 Message: Logged In: YES user_id=806514 tim_one is right. There is plenty of dodgy things hiding behind the os.path world, especially when it comes to os.path.expandvars() There are two problems here. - Mismatch in between the doc strings of the different implementation of expandvars and the "official" os.path.expandvars documentation. - the ntpath and dospath implementations are buggy when compared to their comments/docstrings. About the first problem, the inconsistency created some time ago in between the different implementations tasks makes it difficult to choose a solution. Everyone will probably agree that all the platform specific implementations of expandvars should have the same functionality. The one that should be taken over will probably need to be announced by the BDFL. 
Some rule which should not have let this here happen, and on which I believe we all will agree on: Same interface=same documentation->same functionality To implement either copy paste exactly the same expandvars definition from one platform to another (NT, DOS, POSIX), or somehow rather arrange that when there is no specific implementation for the platform, a "default" python implementation is used on the os.path level. To maximize the fruits of my small work, I would of course prefer that the version below becomes the standard and that the documentation get updated. To be complete, shall the documentation remain unchanged and the implementation of dos and nt gets adapted (copied from posix), the mac implementation could remain unchanged. But I feel its docstring and its documentation should be in line with the rest of the implementations. So my view point-> same interface, same documentation For the second problem - as of now a real bug whatever we decide, I wrote within this comment (hereafter) a new expandvars version which fits the docstring documentation of dospath.py and the comments of ntpath.py. Sorry you will be getting no patch from me at the moment since sourceforge's anonymous CVS access does not like me. Please note that my version borrows alot from the posixpath.py implementation and my changes are the ones of a python amateur who is open to critic.

#expandvars() implementation
_varprog = None
_findquotes = None

def expandvars(path):
    """Expand paths containing shell variable substitutions.
    The following rules apply:
        - no expansion within single quotes
        - no escape character, except for '$$' which is translated into '$'
        - ${varname} is accepted.
        - varnames can be made out of letters, digits and the character '_'"""
    global _varprog, _findquotes
    if '$' not in path:
        return path
    if not _varprog:
        import re
        _varprog = re.compile(r'\$(\w+|\{[^}]*\}|\$)')
        _findquotes = re.compile("'.*?'")
    quoteareas = []
    i = 0
    while 1:
        quotearea = _findquotes.search(path, i)
        if not quotearea:
            break
        (i, j) = quotearea.span(0)
        quoteareas.append((i, j))
        i = j
    i = 0
    while 1:
        m = _varprog.search(path, i)
        if not m:
            break
        i, j = m.span(0)
        insidequotes = None
        for (quotebegin, quoteend) in quoteareas:
            if quotebegin < i and quoteend > i:
                insidequotes = 1
                break
        if insidequotes:
            i = j
            continue
        name = m.group(1)
        if name[:1] == '$':
            path = path[:i] + '$' + path[j:]
            i = i + 1
        else:
            if name[:1] == '{' and name[-1:] == '}':
                name = name[1:-1]
            if os.environ.has_key(name):
                tail = path[j:]
                path = path[:i] + os.environ[name]
                i = len(path)
                path = path + tail
            else:
                i = j
    return path

---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-12-19 01:56 Message: Logged In: YES user_id=31435 Another bug: with two adjacent envars that do exist, only the first is expanded (on Windows):

>>> os.path.expandvars('$TMP$TMP')
'c:\\windows\\TEMP$TMP'
>>>

Another bug: the Windows expandvars doesn't expand envars in single quotes, but the posixpath flavor does:

>>> ntpath.expandvars("'$TMP'")
"'$TMP'"
>>> posixpath.expandvars("'$TMP'")
"'c:\\windows\\TEMP'"
>>>

Another bug: $$ is an escape sequence (meaning a single $) on Windows but not on Unix:

>>> ntpath.expandvars('$$')
'$'
>>> posixpath.expandvars('$$')
'$$'
>>>

Unassigning from me, as this is a bottomless pit spanning platforms and bristling with backward-compatibility traps no matter what's done about it. Somebody who cares enough should write a PEPlet to sort out the mess, else I'd just leave it alone.
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-18 09:43 Message: Logged In: YES user_id=6380 Hm, I do understand it, the code is broken (compared to the spec). No time to fix it. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-12-18 09:35 Message: Logged In: YES user_id=6380 Confirmed, also in 2.2. I don't understand it, the code looks OK. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 From noreply at sourceforge.net Tue Jan 16 16:52:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 07:52:00 -0800 Subject: [ python-Bugs-494589 ] os.path.expandvars deletes things on w32 Message-ID: Bugs item #494589, was opened at 2001-12-18 09:29 Message generated for change (Comment added) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: Accepted Priority: 5 Private: No Submitted By: Michael McCandless (mikemccand) Assigned to: Nobody/Anonymous (nobody) Summary: os.path.expandvars deletes things on w32 Initial Comment: Try this: import os.path print os.path.expandvars('foo$doesnotexist') On FreeBSD, Python 2.1, I get: 'foo$doesnotexist' But on WIN32, Python 2.1, I get: 'foo' The docs explicitly states that variables that are not found will be left in place ... but on win32 that appears to not be the case. 
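The documented behavior - variables that are not found are left in place - can be checked directly against the posixpath flavor, which the report says gets it right; a minimal Python 3 sketch (the environment variable names here are invented for the demonstration):

```python
import os
import posixpath

# Make sure the variable really is unset (the name is invented).
os.environ.pop("PYBUG_DOESNOTEXIST", None)

# Documented behavior: unknown variables are left in place, not deleted.
assert posixpath.expandvars("foo$PYBUG_DOESNOTEXIST") == "foo$PYBUG_DOESNOTEXIST"

# Known variables are substituted, even two adjacent references.
os.environ["PYBUG_TMP"] = "/tmp/demo"
print(posixpath.expandvars("$PYBUG_TMP$PYBUG_TMP"))  # -> /tmp/demo/tmp/demo
```

The same calls against ntpath on an unpatched Python 2.x are what produced the 'foo' result reported above.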
----------------------------------------------------------------------
>Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-16 10:52 Message: Logged In: YES user_id=6380 Originator: NO Oh, I forgot. It needs a unit test (preferably one that tests each xxpath module on each platform).
----------------------------------------------------------------------
Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-16 10:50 Message: Logged In: YES user_id=6380 Originator: NO Looks good. Sjoerd, can you check that in yourself or did you give up your privileges?
----------------------------------------------------------------------
Comment By: Sjoerd Mullender (sjoerd) Date: 2007-01-16 06:42 Message: Logged In: YES user_id=43607 Originator: NO I got bit by this today and saw there was a bug report of over 6 years old. The patch is trivial, though. The attached patch may not solve the problem that the various implementations of expandvars are made exactly the same again, but it does solve the problem that this implementation doesn't do what it promises in the doc string. It also solves the problem noted by Tim of two consecutive non-existing variables being treated differently. File Added: ntpath.patch
----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470

From noreply at sourceforge.net Tue Jan 16 17:03:42 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 08:03:42 -0800 Subject: [ python-Bugs-494589 ] os.path.expandvars deletes things on w32 Message-ID: Bugs item #494589, was opened at 2001-12-18 15:29 Message generated for change (Comment added) made by sjoerd You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: Accepted Priority: 5 Private: No Submitted By: Michael McCandless (mikemccand) Assigned to: Nobody/Anonymous (nobody) Summary: os.path.expandvars deletes things on w32 Initial Comment: Try this: import os.path print os.path.expandvars('foo$doesnotexist') On FreeBSD, Python 2.1, I get: 'foo$doesnotexist' But on WIN32, Python 2.1, I get: 'foo' The docs explicitly states that variables that are not found will be left in place ... but on win32 that appears to not be the case.
----------------------------------------------------------------------
>Comment By: Sjoerd Mullender (sjoerd) Date: 2007-01-16 17:03 Message: Logged In: YES user_id=43607 Originator: NO I can check this in. I'll try to create some tests.
----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470

From noreply at sourceforge.net Tue Jan 16 17:44:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 08:44:18 -0800 Subject: [ python-Bugs-494589 ] os.path.expandvars deletes things on w32 Message-ID: Bugs item #494589, was opened at 2001-12-18 15:29 Message generated for change (Comment added) made by sjoerd You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Michael McCandless (mikemccand) >Assigned to: Sjoerd Mullender (sjoerd) Summary: os.path.expandvars deletes things on w32 Initial Comment: Try this: import os.path print os.path.expandvars('foo$doesnotexist') On FreeBSD, Python 2.1, I get: 'foo$doesnotexist' But on WIN32, Python 2.1, I get: 'foo' The docs explicitly states that variables that are not found will be left in place ... but on win32 that appears to not be the case.
----------------------------------------------------------------------
>Comment By: Sjoerd Mullender (sjoerd) Date: 2007-01-16 17:44 Message: Logged In: YES user_id=43607 Originator: NO Committed as rev. 53460.
----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=494589&group_id=5470

From noreply at sourceforge.net Tue Jan 16 17:56:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 08:56:09 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:".
The problem is even worse when using the fileinput module: reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break.

O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5

----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470

From noreply at sourceforge.net Tue Jan 16 18:40:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 09:40:55 -0800 Subject: [ python-Bugs-1633630 ] class derived from float evaporates under += Message-ID: Bugs item #1633630, was opened at 2007-01-11 15:49 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Type/class unification Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: class derived from float evaporates under += Initial Comment: [forwarded from http://bugs.debian.org/345373] There seems to be a bug in classes derived from float. For instance, consider the following:

>>> class Float(float):
...     def __init__(self, v):
...         float.__init__(self, v)
...         self.x = 1
...
>>> a = Float(2.0)
>>> b = Float(3.0)
>>> type(a)
<class '__main__.Float'>
>>> type(b)
<class '__main__.Float'>
>>> a += b
>>> type(a)
<type 'float'>

Now, the type of a has silently changed. It was a Float, a derived class with all kinds of properties, and it became a float -- a plain vanilla number.
My understanding is that this is incorrect, and certainly unexpected. If it *is* correct, it certainly deserves mention somewhere in the documentation. It seems that Float.__iadd__(a, b) should be called. This defaults to float.__iadd__(a, b), which should increment the float part of the object while leaving the rest intact. A possible explanation for this problem is that float.__iadd__ is not actually defined, and so it falls through to a = float.__add__(a, b), which assigns a float to a. This interpretation seems to be correct, as one can add a destructor to the Float class:

>>> class FloatD(float):
...     def __init__(self, v):
...         float.__init__(self, v)
...         self.x = 1
...     def __del__(self):
...         print 'Deleting FloatD class, losing x=', self.x
...
>>> a = FloatD(2.0)
>>> b = FloatD(3.0)
>>> a += b
Deleting FloatD class, losing x= 1
>>>

----------------------------------------------------------------------
Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-16 09:40 Message: Logged In: YES user_id=341410 Originator: NO

The current behavior is as designed. Not a bug. Suggested move to RFE or close as "Not a bug". There has been discussion on either the python-dev or python-3000 mailing lists discussing how subclasses of builtin types (int, long, float, str, unicode, list, tuple, ...) should behave when confronted with one of a set of "standard" operators. While there has been general "it would be nice" if 'a + b' produced 'type(a)(a + b)' on certain occasions, this would change the semantics of all such operations in a backwards incompatible way (so has not been implemented). If you want to guarantee such behavior (without writing all of the __special__ methods) I would suggest that you instead create a __getattr__ method to automatically handle the coercion back into your subtype.
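The fallback described above - float defines no __iadd__, so 'a += b' rebinds 'a' to the plain float that float.__add__ returns - still holds in current Python 3 and can be sketched directly; the Float class below is illustrative, showing one way to keep the subtype by overriding __add__ explicitly (not the __getattr__ approach Josiah mentions):

```python
class Plain(float):
    pass

p = Plain(2.0)
p += Plain(3.0)
assert type(p) is float  # the subclass has "evaporated"

class Float(float):
    """Keeps its type across '+' by coercing the result back."""
    def __add__(self, other):
        # float.__add__ would return a plain float; wrap it again.
        return type(self)(float(self) + float(other))
    __radd__ = __add__
    # Still no __iadd__: '+=' falls back to __add__ and rebinds the
    # name, but now the result is already a Float.

a = Float(2.0)
a += Float(3.0)
print(type(a).__name__)  # -> Float
```

Every other operator (-, *, /, ...) needs the same treatment, which is why the thread calls this a semantics question rather than a quick fix.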
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-13 09:57 Message: Logged In: YES user_id=849994 Originator: NO You don't need augmented assign for that, just doing "a+b" will give you a float too. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 13:26 Message: Logged In: YES user_id=764593 Originator: NO Python float objects are immutable and can be shared. Therefore, their values cannot be modified -- which is why it falls back to not-in-place assignment. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 From noreply at sourceforge.net Tue Jan 16 19:46:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 10:46:21 -0800 Subject: [ python-Bugs-1637022 ] Python-2.5 segfault with tktreectrl Message-ID: Bugs item #1637022, was opened at 2007-01-16 19:46 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637022&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: AST Status: Open Resolution: None Priority: 5 Private: No Submitted By: klappnase (klappnase) Assigned to: Nobody/Anonymous (nobody) Summary: Python-2.5 segfault with tktreectrl Initial Comment: Python-2.5 segfaults when using the tktreectrl widget. As Anton Hartl pointed out (see http://groups.google.com/group/comp.lang.python/browse_thread/thread/37536988c8499708/aed1d725d8e84ed8?lnk=raot#aed1d725d8e84ed8) this is because both Python-2.5 and tktreectrl use a global symbol "Ellipsis". 
Changing "Ellipsis" in ast.c and Python-ast.c into something like "PyAst_Ellipsis" fixes this. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637022&group_id=5470 From noreply at sourceforge.net Tue Jan 16 21:50:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 12:50:44 -0800 Subject: [ python-Bugs-1603688 ] SaveConfigParser.write() doesn't quote %-Sign Message-ID: Bugs item #1603688, was opened at 2006-11-27 12:15 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Rebecca Breu (rbreu) Assigned to: Nobody/Anonymous (nobody) Summary: SaveConfigParser.write() doesn't quote %-Sign Initial Comment: >>> parser = ConfigParser.SafeConfigParser() >>> parser.add_section("test") >>> parser.set("test", "foo", "bar%bar") >>> parser.write(open("test.config", "w")) >>> parser2 = ConfigParser.SafeConfigParser() >>> parser2.readfp(open("test.config")) >>> parser.get("test", "foo") Traceback (most recent call last): File "", line 1, in ? 
File "/usr/lib/python2.4/ConfigParser.py", line 525, in get return self._interpolate(section, option, value, d) File "/usr/lib/python2.4/ConfigParser.py", line 593, in _interpolate self._interpolate_some(option, L, rawval, section, vars, 1) File "/usr/lib/python2.4/ConfigParser.py", line 634, in _interpolate_some "'%%' must be followed by '%%' or '(', found: %r" % (rest,)) ConfigParser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%bar' Problem: SaveConfigParser saves the string "bar%bar" as is and not as "bar%%bar". ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-16 20:50 Message: Logged In: YES user_id=849994 Originator: NO Closing this as a duplicate. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 07:45 Message: Logged In: YES user_id=1591633 Originator: NO Initially, I believed ValueError was the appropriate way to go with this. However, when I thought about how I use ConfigParser, I realized that it would be far nicer if it simply worked. See the patches in 1635639. http://sourceforge.net/tracker/index.php?func=detail&aid=1635639&group_id=5470&atid=105470 Good catch on this. I haven't caught it and I've been using ConfigParser for a while now. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-15 02:33 Message: Logged In: YES user_id=1591633 Originator: NO I'm not sure that automagically changing their input is such a great idea. I'm -0 for automagically changing their input, but +1 for raising ValueError when the input contains a string that can't be properly interpolated. I've implemented the patch both ways. Anyone else have an opinion about this? Examples of such malformatted strings include bar%bar and bar%. 
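A minimal sketch of the escaping workaround under discussion (not the patch from 1635639): a literal '%' written as '%%' survives the interpolation round trip. This uses Python 3's `configparser` (where `SafeConfigParser` became `ConfigParser`, and where `set()` itself raises `ValueError` on a stray '%', much as proposed above); `set_literal` is an illustrative helper name, not a library API.

```python
import configparser  # Python 3 spelling of the module

# Escape literal '%' as '%%' before storing, so interpolation
# can round-trip the value instead of choking on get().
def set_literal(parser, section, option, value):
    parser.set(section, option, value.replace("%", "%%"))

parser = configparser.ConfigParser()
parser.add_section("test")
set_literal(parser, "test", "foo", "bar%bar")

# get() collapses '%%' back to a single '%'
assert parser.get("test", "foo") == "bar%bar"
# the raw stored value keeps the doubled sign
assert parser.get("test", "foo", raw=True) == "bar%%bar"
```

The same doubling works with Python 2's `SafeConfigParser`; the open question in the thread is only whether the library should do it for you.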
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603688&group_id=5470 From noreply at sourceforge.net Tue Jan 16 22:06:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 13:06:15 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 18:06 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Tue Jan 16 23:19:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 14:19:33 -0800 Subject: [ python-Bugs-1637167 ] mailbox.py uses old email names Message-ID: Bugs item #1637167, was opened at 2007-01-16 14:19 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Russell Owen (reowen) Assigned to: Nobody/Anonymous (nobody) Summary: mailbox.py uses old email names Initial Comment: mailbox.py uses old (and presumably deprecated) names for stuff in the email package. This can confuse application packagers such as py2app. I believe the complete list of desirable changes is: email.Generator -> email.generator email.Message -> email.message email.message_from_string -> email.parser.message_from_string email.message_from_file -> email.parser.message_from_file I submitted patches for urllib, urllib2 and smptlib but wasn't sure enough of mailbox to do that. Those four modules are the only instances I found that needed changing at the main level of the library. However, I did not do a recursive search. There may be files inside packages that could also use cleanup. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 From noreply at sourceforge.net Tue Jan 16 23:33:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 14:33:27 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 08:56 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-16 14:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? 
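A sketch of the kind of sample requested: a generator for a fixed-block test file with wide lines like the reporter describes, plus a line counter to compare against the expected total. All names here are illustrative; if `count_lines` ever disagrees with the number of lines written, a newline was lost, and repeating the count on a file opened in binary mode (`"rb"`) would show whether Windows text-mode translation is involved.

```python
import os
import tempfile

# Build a fixed-block file: every line is `width` bytes including
# the trailing newline, roughly matching the reported 7000-byte records.
def make_fixed_block_file(path, width=7000, lines=1000):
    with open(path, "w") as f:
        for i in range(lines):
            f.write(str(i).ljust(width - 1))
            f.write("\n")

# Count lines the same way the reporter reads them.
def count_lines(path):
    with open(path) as f:
        return sum(1 for line in f)
```

A run that reproduces the bug would show `count_lines(path) < lines` with some lines roughly twice the expected width.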
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Wed Jan 17 00:07:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 15:07:22 -0800 Subject: [ python-Bugs-1037516 ] ftplib PASV error bug Message-ID: Bugs item #1037516, was opened at 2004-09-30 05:35 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1037516&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Tim Nelson (wayland) Assigned to: Nobody/Anonymous (nobody) Summary: ftplib PASV error bug Initial Comment: Hi. If ftplib gets an error while doing the PASV section of the ntransfercmd it dies. I've altered it so that ntransfercmd does an autodetect, if an autodetect hasn't been done yet. If there are any problems (as I'm not a python programmer :) ), please either fix them or let me know. ---------------------------------------------------------------------- Comment By: Tim Nelson (wayland) Date: 2007-01-16 00:02 Message: Logged In: YES user_id=401793 Originator: YES Oops. I probably did, but I don't work in that job any more, so I'm afraid I don't have access to it. Sorry. You should, however, be able to correct it from the description. ---------------------------------------------------------------------- Comment By: Andrew Bennetts (spiv) Date: 2004-10-06 10:49 Message: Logged In: YES user_id=50945 Did you mean to submit a patch with this bug report? It sounds like you did, but there's no files attached to this bug. 
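Since the submitter's patch was never attached, here is a hedged sketch of the behaviour being asked for, reconstructed from the description: if the passive-mode transfer setup fails, fall back to active mode and retry instead of dying. `robust_retrbinary` is an illustrative wrapper name, not part of ftplib.

```python
import ftplib

# Retry a transfer in active (PORT) mode if the PASV attempt fails.
# ftplib.all_errors is the documented tuple of ftplib exceptions.
def robust_retrbinary(ftp, cmd, callback):
    try:
        return ftp.retrbinary(cmd, callback)
    except ftplib.all_errors:
        ftp.set_pasv(False)  # switch off passive mode, then retry once
        return ftp.retrbinary(cmd, callback)
```

The actual fix in the report does the autodetection inside `ntransfercmd()` itself, so every transfer command would benefit, not just the ones wrapped like this.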
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1037516&group_id=5470 From noreply at sourceforge.net Wed Jan 17 06:11:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 21:11:05 -0800 Subject: [ python-Feature Requests-1637365 ] if __name__=='__main__' missing in tutorial Message-ID: Feature Requests item #1637365, was opened at 2007-01-17 02:11 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637365&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Gabriel Genellina (gagenellina) Assigned to: Nobody/Anonymous (nobody) Summary: if __name__=='__main__' missing in tutorial Initial Comment: I could not find any reference to the big idiom: if __name__=="__main__": xxx() inside the Python tutorial. Of course it is documented in the Library Reference and the Reference Manual, but such an important idiom should be in the Tutorial for beginners to see. I can't provide a patch, and English is not my native language, but I think a short text like the following would suffice (in section More on Modules, before the paragraph "Modules can import other modules..."): Sometimes it is convenient to invoke a module as if it were a script, either for testing purposes, or to provide a convenient user interface to the functions contained in the module. But you don't want to run such code when the module is imported into another program, only when it's used as a standalone script.

The way to differentiate between the two cases is to check the \code{__name__} attribute: as seen in the previous section, it usually holds the module name, but when the module is invoked directly, it's always \samp{__main__} regardless of the script name. Add this at the end of \file{fibo.py}: \begin{verbatim} if __name__=="__main__": import sys fib(int(sys.argv[1])) \end{verbatim} and then you can execute it using: \begin{verbatim} python fibo.py 50 1 1 2 3 5 8 13 21 34 \end{verbatim} ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637365&group_id=5470 From noreply at sourceforge.net Wed Jan 17 07:47:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 22:47:05 -0800 Subject: [ python-Bugs-1552726 ] Python polls unnecessarily every 0.1 second when interactive Message-ID: Bugs item #1552726, was opened at 2006-09-05 07:42 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: Fixed Priority: 9 Private: No Submitted By: Richard Boulton (richardb) Assigned to: A.M. Kuchling (akuchling) Summary: Python polls unnecessarily every 0.1 second when interactive Initial Comment: When python is running an interactive session, and is idle, it calls "select" with a timeout of 0.1 seconds repeatedly. This is intended to allow PyOS_InputHook() to be called every 0.1 seconds, but happens even if PyOS_InputHook() isn't being used (ie, is NULL).
To reproduce: - start a python session - attach to it using strace -p PID - observe that python repeatedly calls select() with a 0.1 second timeout. This isn't a significant problem, since it only affects idle interactive python sessions and uses only a tiny bit of CPU, but people are whinging about it (though some appear to be doing so tongue-in-cheek) and it would be nice to fix it. The attached patch (against Python-2.5c1) modifies the readline.c module so that the polling only happens when PyOS_InputHook is non-NULL. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 22:47 Message: Logged In: YES user_id=33168 Originator: NO I'm fine if this patch is applied. Since it was applied to trunk, it seems like it might as well go into 2.5.1 as well. I agree it's not that high priority, but don't see much reason to wait either. OTOH, I won't lose sleep if it's not applied, so do what you think is best. ---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 07:30 Message: Logged In: YES user_id=9565 I'm finding the function because it's defined in the compiled library - the header files aren't examined by configure when testing for this function. (this is because configure.in uses AC_CHECK_LIB to check for rl_callback_handler_install, which just tries to link the named function against the library). Presumably, rlconf.h is the configuration used when the readline library was compiled, so if READLINE_CALLBACKS is defined in it, I would expect the relevant functions to be present in the compiled library. In any case, this isn't desperately important, since you've managed to hack around the test anyway. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-08 06:12 Message: Logged In: YES user_id=11375 That's exactly my setup. I don't think there is a -dev package for readline 4.
I do note that READLINE_CALLBACKS is defined in /usr/include/readline/rlconf.h, but Python's readline.c doesn't include this file, and none of the readline headers include it. So I don't know why you're finding the function! ---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 02:34 Message: Logged In: YES user_id=9565 HAVE_READLINE_CALLBACK is defined by configure.in whenever the readline library on the platform supports the rl_callback_handler_install() function. I'm using Ubuntu Dapper, and have libreadline 4 and 5 installed (more precisely, 4.3-18 and 5.1-7build1), but only the -dev package for 5.1-7build1. "info readline" describes rl_callback_handler_install(), and configure.in finds it, so I'm surprised it wasn't found on akuchling's machine. I agree that the code looks buggy on platforms in which signals don't necessarily get delivered to the main thread, but looks no more buggy with the patch than without. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 07:38 Message: Logged In: YES user_id=11375 On looking at the readline code, I think this patch makes no difference to signals. The code in readline.c for the callbacks looks like this: has_input = 0; while (!has_input) { ... has_input = select.select(rl_input); } if (has_input > 0) {read character} elif (errno == EINTR) {check signals} So I think that, if a signal is delivered to a thread and select() in the main thread doesn't return EINTR, the old code is just as problematic as the code with this patch. The (while !has_input) loop doesn't check for signals at all as an exit condition. I'm not sure what to do at this point. I think the new code is no worse than the old code with regard to signals. Maybe this loop is buggy w.r.t. to signals, but I don't know how to test that. 
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 07:17 Message: Logged In: YES user_id=11375 HAVE_READLINE_CALLBACK was not defined with readline 5.1 on Ubuntu Dapper, until I did the configure/CFLAG trick. I didn't think of a possible interaction with signals, and will re-open the bug while trying to work up a test case. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-09-07 07:12 Message: Logged In: YES user_id=6656 I'd be cautious about applying this to 2.5: we could end up with the same problem currently entertaining python-dev, i.e. a signal gets delivered to a non- main thread but the main thread is sitting in a select with no timeout so any python signal handler doesn't run until the user hits a key. HAVE_READLINE_CALLBACK is defined when readline is 2.1 *or newer* I think... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 07:02 Message: Logged In: YES user_id=11375 Recent versions of readline can still support callbacks if READLINE_CALLBACK is defined, so I could test the patch by running 'CFLAGS=-DREADLINE_CALLBACK' and re-running configure. Applied as rev. 51815 to the trunk, so the fix will be in Python 2.6. The 2.5 release manager needs to decide if it should be applied to the 2.5 branch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 06:24 Message: Logged In: YES user_id=11375 Original report: http://perkypants.org/blog/2006/09/02/rfte-python This is tied to the version of readline being used; the select code is only used if HAVE_RL_CALLBACK is defined, and a comment in Python's configure.in claims it's only defined with readline 2.1. Current versions of readline are 4.3 and 5.1; are people still using such an ancient version of readline? 
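A Python-level sketch of the logic this thread is arguing about (it mirrors the shape of the patched readline.c loop, not the C source verbatim): with an input hook installed, select() must wake every 0.1s so the hook can run; with no hook, the fix lets select() block with no timeout, so an idle prompt makes no wakeups at all. `wait_for_input` is an illustrative name.

```python
import os
import select

def wait_for_input(fd, input_hook=None):
    while True:
        # The fix: only poll on a 0.1s timeout when a hook exists;
        # otherwise block indefinitely until input arrives.
        timeout = 0.1 if input_hook is not None else None
        ready, _, _ = select.select([fd], [], [], timeout)
        if ready:
            return
        if input_hook is not None:
            input_hook()  # give the hook its periodic chance, then re-poll
```

The signal concern raised above maps onto the `None`-timeout branch: a signal delivered to another thread will not interrupt this select(), which is why mwh urges caution for 2.5.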
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 From noreply at sourceforge.net Wed Jan 17 07:48:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 22:48:24 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 08:03 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 22:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 11:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. 
As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 10:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 11:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 10:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 09:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 11:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. 
File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 07:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 07:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 11:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... 
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 11:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 11:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 11:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 11:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 06:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 09:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 11:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 06:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
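The length check proposed earlier in the thread, i.e. verify the mailbox file's size before rewriting it and raise ExternalClashError if another writer changed it, can be sketched as follows (the function and parameter names here are invented for illustration, not mailbox.py's internals):

```python
import os

class ExternalClashError(Exception):
    """The mailbox file changed underneath us."""

def flush_with_length_check(path, expected_size, rewrite):
    # If another process appended mail after we last read the file,
    # rewriting from a stale table of contents would destroy it.
    actual = os.path.getsize(path)
    if actual != expected_size:
        raise ExternalClashError(
            "mailbox size changed from %d to %d" % (expected_size, actual))
    rewrite(path)
```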
----------------------------------------------------------------------
Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 05:17 Message: Logged In: YES user_id=11375 Originator: NO
The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch...
File Added: mailbox-test.patch
----------------------------------------------------------------------
Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 13:04 Message: Logged In: YES user_id=11375 Originator: NO
I agree with David's analysis; this is in fact a bug. I'll try to look at the patch.
----------------------------------------------------------------------
Comment By: David Watson (baikie) Date: 2006-11-19 12:44 Message: Logged In: YES user_id=1504904 Originator: YES
This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place.
----------------------------------------------------------------------
Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 12:02 Message: Logged In: YES user_id=21627 Originator: NO
Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
----------------------------------------------------------------------
You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470
From noreply at sourceforge.net Wed Jan 17 08:00:38 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Tue, 16 Jan 2007 23:00:38 -0800
Subject: [ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck
Message-ID:
Bugs item #1598181, was opened at 2006-11-16 22:40 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: Fixed Priority: 5 Private: No
Submitted By: Ralf W. Grosse-Kunstleve (rwgk) Assigned to: Peter Åstrand (astrand)
Summary: subprocess.py: O(N**2) bottleneck
Initial Comment: subprocess.py (Python 2.5, current SVN, probably all versions) contains this O(N**2) code:

    bytes_written = os.write(self.stdin.fileno(), input[:512])
    input = input[bytes_written:]

For large but reasonable "input" the second line is rate limiting. Luckily, it is very easy to remove this bottleneck. I'll upload a simple patch. Below is a small script that demonstrates the huge speed difference. The output on my machine is:

    creating input
    0.888417959213
    slow slicing input
    61.1553330421
    creating input
    0.863168954849
    fast slicing input
    0.0163860321045
    done

The numbers are times in seconds.
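The fix amounts to advancing an offset into `input` instead of rebinding `input` to an ever-shrinking copy, so each iteration copies at most one chunk. A rough sketch of the pattern (illustrative, not the committed patch):

```python
import os

def write_in_chunks(fd, data, chunk=512):
    # O(N) overall: slice out one chunk per write instead of copying
    # the entire remainder of `data` after every write.
    pos = 0
    while pos < len(data):
        pos += os.write(fd, data[pos:pos + chunk])
    return pos
```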
This is the source:

    import time
    import sys

    size = 1000000
    t0 = time.time()
    print "creating input"
    input = "\n".join([str(i) for i in xrange(size)])
    print time.time()-t0
    t0 = time.time()
    print "slow slicing input"
    n_out_slow = 0
    while True:
        out = input[:512]
        n_out_slow += 1
        input = input[512:]
        if not input:
            break
    print time.time()-t0
    t0 = time.time()
    print "creating input"
    input = "\n".join([str(i) for i in xrange(size)])
    print time.time()-t0
    t0 = time.time()
    print "fast slicing input"
    n_out_fast = 0
    input_done = 0
    while True:
        out = input[input_done:input_done+512]
        n_out_fast += 1
        input_done += 512
        if input_done >= len(input):
            break
    print time.time()-t0
    assert n_out_fast == n_out_slow
    print "done"

----------------------------------------------------------------------
>Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:00 Message: Logged In: YES user_id=33168 Originator: NO
Peter this is fine for 2.5.1. Please apply and update Misc/NEWS. Thanks!
----------------------------------------------------------------------
Comment By: Ralf W. Grosse-Kunstleve (rwgk) Date: 2007-01-07 07:15 Message: Logged In: YES user_id=71407 Originator: YES
Thanks for the fixes!
----------------------------------------------------------------------
Comment By: Peter Åstrand (astrand) Date: 2007-01-07 06:36 Message: Logged In: YES user_id=344921 Originator: NO
Fixed in trunk revision 53295. Is this a good candidate for backporting to 25-maint?
----------------------------------------------------------------------
Comment By: Mike Klaas (mklaas) Date: 2007-01-04 10:20 Message: Logged In: YES user_id=1611720 Originator: NO
I reviewed the patch--the proposed fix looks good. Minor comments:
- "input_done" sounds like a flag, not a count of written bytes
- buffer() could be used to avoid the 512-byte copy created by the slice
----------------------------------------------------------------------
Comment By: Ralf W.
Grosse-Kunstleve (rwgk) Date: 2006-11-16 22:43 Message: Logged In: YES user_id=71407 Originator: YES Sorry, I didn't know the tracker would destroy the indentation. I'm uploading the demo source as a separate file. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 From noreply at sourceforge.net Wed Jan 17 08:01:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 23:01:32 -0800 Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions Message-ID: Bugs item #1579370, was opened at 2006-10-17 19:23 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None >Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. 
----------------------------------------------------------------------
>Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:01 Message: Logged In: YES user_id=33168 Originator: NO
Bumping priority to see if this should go into 2.5.1.
----------------------------------------------------------------------
Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 02:42 Message: Logged In: YES user_id=21627 Originator: NO
Why do frame objects have a thread state in the first place? In particular, why does PyTraceBack_Here get the thread state from the frame, instead of using the current thread? Introduction of f_tstate goes back to r7882, but it is not clear why it was done that way.
----------------------------------------------------------------------
Comment By: Andrew Waters (awaters) Date: 2007-01-04 01:35 Message: Logged In: YES user_id=1418249 Originator: NO
This fixes the segfault problem that I was able to reliably reproduce on Linux. We need to get this applied (assuming it is the correct fix) to the source to make Python 2.5 usable for me in production code.
----------------------------------------------------------------------
Comment By: Mike Klaas (mklaas) Date: 2006-11-27 10:41 Message: Logged In: YES user_id=1611720 Originator: YES
The following patch resets the thread state of the generator when it is resumed, which prevents the segfault for me:

    Index: Objects/genobject.c
    ===================================================================
    --- Objects/genobject.c (revision 52849)
    +++ Objects/genobject.c (working copy)
    @@ -77,6 +77,7 @@
             Py_XINCREF(tstate->frame);
             assert(f->f_back == NULL);
             f->f_back = tstate->frame;
    +        f->f_tstate = tstate;
             gen->gi_running = 1;
             result = PyEval_EvalFrameEx(f, exc);

----------------------------------------------------------------------
Comment By: Eric Noyau (eric_noyau) Date: 2006-11-27 10:07 Message: Logged In: YES user_id=1388768 Originator: NO
We are experiencing the same segfault in our application, reliably.
Running our unit test suite just segfaults every time on both Linux and Mac OS X. Applying Martin's patch fixes the segfault, and makes everything fine and dandy, at the cost of some memory leaks if I understand properly. This particular bug prevents us from upgrading to python 2.5 in production.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one) Date: 2006-10-27 22:18 Message: Logged In: YES user_id=31435
> I tried Tim's hope.py on Linux x86_64 and
> Mac OS X 10.4 with debug builds and neither
> one crashed. Tim's guess looks pretty damn
> good too.
Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory.
----------------------------------------------------------------------
Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-27 21:56 Message: Logged In: YES user_id=33168
Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too.
Here's the result of valgrind:

    Invalid read of size 8
      at 0x4CEBFE: PyTraceBack_Here (traceback.c:117)
      by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515)
      by 0x4F615D: gen_send_ex (genobject.c:82)
      by 0x4F6326: gen_close (genobject.c:128)
      by 0x4F645E: gen_del (genobject.c:163)
      by 0x4F5F00: gen_dealloc (genobject.c:31)
      by 0x44D207: _Py_Dealloc (object.c:1928)
      by 0x44534E: dict_dealloc (dictobject.c:801)
      by 0x44D207: _Py_Dealloc (object.c:1928)
      by 0x4664FF: subtype_dealloc (typeobject.c:686)
      by 0x44D207: _Py_Dealloc (object.c:1928)
      by 0x42325D: instancemethod_dealloc (classobject.c:2287)
    Address 0x56550C0 is 88 bytes inside a block of size 152 free'd
      at 0x4A1A828: free (vg_replace_malloc.c:233)
      by 0x4C3899: tstate_delete_common (pystate.c:256)
      by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282)
      by 0x4D4043: t_bootstrap (threadmodule.c:448)
      by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so)

The only way I can think to fix this is to keep a set of active generators in the PyThreadState and call gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap.
----------------------------------------------------------------------
Comment By: Michael Hudson (mwh) Date: 2006-10-19 00:58 Message: Logged In: YES user_id=6656
> and for some reason Python uses the system malloc directly
> to obtain memory for thread states.
This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one) Date: 2006-10-18 17:38 Message: Logged In: YES user_id=31435
I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints:

    exiting generator
    joined thread

at most twice before crapping out.
At the time, the `next` argument to newtracebackobject() is 0xdddddddd, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes. Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed. Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f_tstate is trash. Note that a PyThreadState (a frame's f_tstate) is /not/ a Python object -- it's just a raw C struct, and its lifetime isn't controlled by refcounts.
----------------------------------------------------------------------
Comment By: Mike Klaas (mklaas) Date: 2006-10-18 17:12 Message: Logged In: YES user_id=1611720
Despite Tim's reassurance, I'm afraid that Martin's patch does in fact prevent the segfault. Sounds like it also introduces a memory leak.
----------------------------------------------------------------------
Comment By: Tim Peters (tim_one) Date: 2006-10-18 14:57 Message: Logged In: YES user_id=31435
> Can anybody tell why gi_frame *isn't* incref'ed when
> the generator is created?
As documented (in concrete.tex), PyGen_New(f) steals a reference to the frame passed to it. Its only call site (well, in the core) is in ceval.c, which returns immediately after PyGen_New takes over ownership of the frame the caller created:

    """
    /* Create a new generator that owns the ready to run frame
     * and return that as the value. */
    return PyGen_New(f);
    """

In short, that PyGen_New() doesn't incref the frame passed to it is intentional. It's possible that the intent is flawed ;-), but offhand I don't see how.
----------------------------------------------------------------------
Comment By: Martin v. Löwis (loewis) Date: 2006-10-18 14:05 Message: Logged In: YES user_id=21627
Can you please review/try attached patch? Can anybody tell why gi_frame *isn't* incref'ed when the generator is created?
----------------------------------------------------------------------
Comment By: Mike Klaas (mklaas) Date: 2006-10-18 12:47 Message: Logged In: YES user_id=1611720
I cannot yet produce an only-python script which reproduces the problem, but I can give an overview. There is a generator running in one thread, an exception being raised in another thread, and as a consequence, the generator in the first thread is garbage-collected (triggering an exception due to the new generator cleanup). The problem is extremely sensitive to timing--often the insertion/removal of print statements, or reordering the code, causes the problem to vanish, which is confounding my ability to create a simple test script.

    def getdocs():
        def f():
            while True:
                f()
        yield None

    # -----------------------------------------------------------------------------

    class B(object):
        def __init__(self,):
            pass

        def doit(self):
            # must be an instance var to trigger segfault
            self.docIter = getdocs()
            print self.docIter  # this is the generator referred-to in the traceback
            for i, item in enumerate(self.docIter):
                if i > 9:
                    break
            print 'exiting generator'

    class A(object):
        """ Process entry point / main thread """
        def __init__(self):
            while True:
                try:
                    self.func()
                except Exception, e:
                    print 'right after raise'

        def func(self):
            b = B()
            thread = threading.Thread(target=b.doit)
            thread.start()
            start_t = time.time()
            while True:
                try:
                    if time.time() - start_t > 1:
                        raise Exception
                except Exception:
                    print 'right before raise'  # SIGSEGV here.
If this is changed to # 'break', no segfault occurs raise if __name__ == '__main__': A() ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 12:37 Message: Logged In: YES user_id=1611720 I've produced a simplified traceback with a single generator . Note the frame being used in the traceback (#0) is the same frame being dealloc'd (#11). The relevant call in traceback.c is: PyTraceBack_Here(PyFrameObject *frame) { PyThreadState *tstate = frame->f_tstate; PyTracebackObject *oldtb = (PyTracebackObject *) tstate->curexc_traceback; PyTracebackObject *tb = newtracebackobject(oldtb, frame); and I can verify that oldtb contains garbage: (gdb) print frame $1 = (PyFrameObject *) 0x8964d94 (gdb) print frame->f_tstate $2 = (PyThreadState *) 0x895b178 (gdb) print $2->curexc_traceback $3 = (PyObject *) 0x66 #0 0x080e4296 in PyTraceBack_Here (frame=0x8964d94) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x8964d94, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb7cca4ac, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb7cca4ac, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb7cca4ac) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb7cca4ac) at Objects/genobject.c:31 #6 0x080815b9 in dict_dealloc (mp=0xb7cc913c) at Objects/dictobject.c:801 #7 0x080927b2 in subtype_dealloc (self=0xb7cca76c) at Objects/typeobject.c:686 #8 0x0806028d in instancemethod_dealloc (im=0xb7d07f04) at Objects/classobject.c:2285 #9 0x080815b9 in dict_dealloc (mp=0xb7cc90b4) at Objects/dictobject.c:801 #10 0x080927b2 in subtype_dealloc (self=0xb7cca86c) at Objects/typeobject.c:686 #11 0x081028c5 in frame_dealloc (f=0x8964a94) at Objects/frameobject.c:416 #12 0x080e41b1 in tb_dealloc (tb=0xb7cc1fcc) at Python/traceback.c:34 #13 0x080e41c2 in tb_dealloc (tb=0xb7cc1f7c) at Python/traceback.c:33 #14 0x08080dca in insertdict 
(mp=0xb7f99824, key=0xb7ccd020, hash=1492466088, value=0xb7ccd054) at Objects/dictobject.c:394 #15 0x080811a4 in PyDict_SetItem (op=0xb7f99824, key=0xb7ccd020, value=0xb7ccd054) at Objects/dictobject.c:619 #16 0x08082dc6 in PyDict_SetItemString (v=0xb7f99824, key=0x8129284 "exc_traceback", item=0xb7ccd054) at Objects/dictobject.c:2103 #17 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb7ccd054) at Python/sysmodule.c:82 #18 0x080bc9e5 in PyEval_EvalFrameEx (f=0x895f934, throwflag=0) at Python/ceval.c:2954 ---Type to continue, or q to quit--- #19 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f6ade8, globals=0xb7fafa44, locals=0x0, args=0xb7cc5ff8, argcount=1, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #20 0x08104083 in function_call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/funcobject.c:517 #21 0x0805a660 in PyObject_Call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/abstract.c:1860 ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-17 19:23 Message: Logged In: YES user_id=1611720 Program received signal SIGSEGV, Segmentation fault. 
[Switching to Thread -1208400192 (LWP 26235)] 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 94 if ((next != NULL && !PyTraceBack_Check(next)) || (gdb) bt #0 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x9c2d7b4, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb64f880c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb64f880c, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb64f880c) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb64f880c) at Objects/genobject.c:31 #6 0x080b9912 in PyEval_EvalFrameEx (f=0x9c2802c, throwflag=1) at Python/ceval.c:2491 #7 0x08101a40 in gen_send_ex (gen=0xb64f362c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #8 0x08101c0f in gen_close (gen=0xb64f362c, args=0x0) at Objects/genobject.c:128 #9 0x08101cde in gen_del (self=0xb64f362c) at Objects/genobject.c:163 #10 0x0810195b in gen_dealloc (gen=0xb64f362c) at Objects/genobject.c:31 #11 0x080815b9 in dict_dealloc (mp=0xb64f4a44) at Objects/dictobject.c:801 #12 0x080927b2 in subtype_dealloc (self=0xb64f340c) at Objects/typeobject.c:686 #13 0x0806028d in instancemethod_dealloc (im=0xb796a0cc) at Objects/classobject.c:2285 #14 0x080815b9 in dict_dealloc (mp=0xb64f78ac) at Objects/dictobject.c:801 #15 0x080927b2 in subtype_dealloc (self=0xb64f810c) at Objects/typeobject.c:686 #16 0x081028c5 in frame_dealloc (f=0x9c272bc) at Objects/frameobject.c:416 #17 0x080e41b1 in tb_dealloc (tb=0xb799166c) at Python/traceback.c:34 #18 0x080e41c2 in tb_dealloc (tb=0xb4071284) at Python/traceback.c:33 #19 0x080e41c2 in tb_dealloc (tb=0xb7991824) at Python/traceback.c:33 #20 0x08080dca in insertdict (mp=0xb7f56824, key=0xb3fb9930, hash=1492466088, value=0xb3fb9914) at Objects/dictobject.c:394 #21 0x080811a4 in PyDict_SetItem (op=0xb7f56824, key=0xb3fb9930, value=0xb3fb9914) at Objects/dictobject.c:619 #22 0x08082dc6 
in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103 #23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82 #24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954 #25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833 #26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662 #27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517 #29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860 #30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846 #31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833 #32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662 #33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652 #34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833 #35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662 #36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #37 0x080bd62a in PyEval_EvalFrameEx 
(f=0x9908584, throwflag=0) at Python/ceval.c:3662 #38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 ---Type to continue, or q to quit--- #39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494 #40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264 #41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available. ) at Python/pythonrun.c:870 #42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496 #43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6 #44 0x08056051 in _start () ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 From noreply at sourceforge.net Wed Jan 17 08:02:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 23:02:48 -0800 Subject: [ python-Bugs-1377858 ] segfaults when using __del__ and weakrefs Message-ID: Bugs item #1377858, was opened at 2005-12-10 13:20 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None >Priority: 9 Private: No
Submitted By: Carl Friedrich Bolz (cfbolz) Assigned to: Michael Hudson (mwh)
Summary: segfaults when using __del__ and weakrefs
Initial Comment: You can segfault Python by creating a weakref to an object in its __del__ method, storing it somewhere and then trying to dereference the weakref afterwards. The attached file shows the described behaviour.
----------------------------------------------------------------------
>Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:02 Message: Logged In: YES user_id=33168 Originator: NO
Brett, Michael, Armin, can we get this patch checked in for 2.5.1?
----------------------------------------------------------------------
Comment By: Brett Cannon (bcannon) Date: 2006-08-19 21:31 Message: Logged In: YES user_id=357491
After finally figuring out where *list was made NULL (and adding a comment about it where it occurs), I added a test to test_weakref.py. Didn't try to tackle classic classes.
----------------------------------------------------------------------
Comment By: Armin Rigo (arigo) Date: 2006-08-12 04:31 Message: Logged In: YES user_id=4771
The clear_weakref(*list) only clears the first weakref to the object. You need a while loop in your patch. (attached proposed fix) Now we're left with fixing the same bug in old-style classes (surprize surprize), and turning the crasher into a test.
----------------------------------------------------------------------
Comment By: Brett Cannon (bcannon) Date: 2006-06-29 10:36 Message: Logged In: YES user_id=357491
So after staring at this crasher it seemed to me that clearing the new weakrefs w/o calling their finalizers after calling the object's finalizer was the best solution. I couldn't think of any other good way to communicate to the new weakrefs that the object they refer to was no longer viable memory without doing clear_weakref() work by hand.
Attached is a patch to do this. Michael, can you have a look? ---------------------------------------------------------------------- Comment By: Georg Brandl (birkenfeld) Date: 2006-01-10 11:29 Message: Logged In: YES user_id=1188172 Added to outstanding_crashes.py. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:58 Message: Logged In: YES user_id=6656 Hmm, maybe the referenced mayhem is more to do with clearing __dict__ than calling __del__. What breaks if we do things in this order: 1. call __del__ 2. clear weakrefs 3. clear __dict__ ? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:54 Message: Logged In: YES user_id=6656 Hmm, I was kind of hoping this report would get more attention. The problem is obvious if you read typeobject.c around line 660: the weakref list is cleared before __del__ is called, so any weakrefs added during the execution of __del__ are never informed of the object's death. One fix for this would be to clear the weakref list _after_ calling __del__, but that led to other mayhem in ways I haven't bothered to understand (see SF bug #742911). I guess we could just clear out any weakrefs created in __del__ without calling their callbacks. 
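The crasher pattern under discussion can be sketched as follows. This is a hedged sketch, not the attached reproducer: on affected interpreters (pre-fix 2.4/2.5) dereferencing the stashed weakref could segfault, while on interpreters carrying the fix the weakref created during __del__ is simply cleared without its callback, so dereferencing it yields None.

```python
import weakref

collected = []

class C(object):
    def __del__(self):
        # Create a weakref to the dying object and stash it somewhere
        # that outlives the object -- the scenario from the report.
        collected.append(weakref.ref(self))

c = C()
del c   # CPython deallocates immediately; __del__ runs and stores the ref

# On fixed interpreters the stored weakref is dead rather than dangling:
dead_ref = collected[0]
```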
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 From noreply at sourceforge.net Wed Jan 17 08:22:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 16 Jan 2007 23:22:53 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 09:17 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: dib (dib_at_work) Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) 
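A condensed version of the failing pattern (names here are hypothetical, not from the attached file): a subclass explicitly consumes a keyword argument and never forwards it to the set base class. On the affected 2.4.2c1+ interpreters this constructor call raised TypeError; interpreters carrying the fix discussed in the comments accept it.

```python
class LabeledSet(set):
    # The subclass explicitly accepts a keyword and does NOT pass it
    # down to the built-in base class -- exactly the case the report
    # says the over-zealous check rejects.
    def __init__(self, iterable=(), label=None):
        set.__init__(self, iterable)
        self.label = label

s = LabeledSet([1, 2, 3], label="demo")
```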
---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:22 Message: Logged In: YES user_id=33168 Originator: NO Were these changes applied by Raymond? I don't think there were NEWS entries though. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 12:43 Message: Logged In: YES user_id=80475 Originator: NO That looks about right. Please add test cases that fail without the patch and succeed with the patch. Also, put a comment in Misc/NEWS. If the whole test suite passes, go ahead and check-in to Py2.5.1 and the head. Thanks, Raymond ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 11:56 Message: Logged In: YES user_id=849994 Originator: NO Attaching patch. File Added: nokeywordchecks.diff ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 10:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests. 
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-10 16:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution: - if(!PyArg_NoKeywords("set()", kwds) + if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds) ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 13:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable)) Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-05 18:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: ?iga Seilnacht (zseil) Date: 2006-05-19 18:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: ?iga Seilnacht (zseil) Date: 2006-05-19 13:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. 
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 10:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Wed Jan 17 10:13:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 01:13:35 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 16:17 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: dib (dib_at_work) Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. 
[Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 09:13 Message: Logged In: YES user_id=849994 Originator: NO I'll create the testcases and commit the patch (as well as NEWS entries :) when I find the time. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 07:22 Message: Logged In: YES user_id=33168 Originator: NO Were these changes applied by Raymond? I don't think there were NEWS entries though. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 20:43 Message: Logged In: YES user_id=80475 Originator: NO That looks about right. Please add test cases that fail without the patch and succeed with the patch. Also, put a comment in Misc/NEWS. If the whole test suite passes, go ahead and check-in to Py2.5.1 and the head. Thanks, Raymond ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 19:56 Message: Logged In: YES user_id=849994 Originator: NO Attaching patch. File Added: nokeywordchecks.diff ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 18:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests. 
---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 00:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution: - if(!PyArg_NoKeywords("set()", kwds) + if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds) ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 21:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable)) Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-06 02:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list(). ---------------------------------------------------------------------- Comment By: ?iga Seilnacht (zseil) Date: 2006-05-20 01:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: ?iga Seilnacht (zseil) Date: 2006-05-19 20:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. 
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 17:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Wed Jan 17 10:14:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 01:14:58 -0800 Subject: [ python-Bugs-1633630 ] class derived from float evaporates under += Message-ID: Bugs item #1633630, was opened at 2007-01-11 23:49 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Type/class unification Group: Python 2.5 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: class derived from float evaporates under += Initial Comment: [forwarded from http://bugs.debian.org/345373] There seems to be a bug in classes derived from float. For instance, consider the following: >>> class Float(float): ... def __init__(self, v): ... float.__init__(self, v) ... self.x = 1 ... >>> a = Float(2.0) >>> b = Float(3.0) >>> type(a) <class '__main__.Float'> >>> type(b) <class '__main__.Float'> >>> a += b >>> type(a) <type 'float'> Now, the type of a has silently changed. It was a Float, a derived class with all kinds of properties, and it became a float -- a plain vanilla number. My understanding is that this is incorrect, and certainly unexpected. If it *is* correct, it certainly deserves mention somewhere in the documentation. 
It seems that Float.__iadd__(a, b) should be called. This defaults to float.__iadd__(a, b), which should increment the float part of the object while leaving the rest intact. A possible explanation for this problem is that float.__iadd__ is not actually defined, and so it falls through to a = float.__add__(a, b), which assigns a float to a. This interpretation seems to be correct, as one can add a destructor to the Float class: >>> class FloatD(float): ... def __init__(self, v): ... float.__init__(self, v) ... self.x = 1 ... def __del__(self): ... print 'Deleting FloatD class, losing x=', self.x ... >>> a = FloatD(2.0) >>> b = FloatD(3.0) >>> a += b Deleting FloatD class, losing x= 1 >>> ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 09:14 Message: Logged In: YES user_id=849994 Originator: NO Okay, closing as "Won't fix". If you still think this should be done differently, please open a feature request. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-16 17:40 Message: Logged In: YES user_id=341410 Originator: NO The current behavior is as designed. Not a bug. Suggested move to RFE or close as "Not a bug". There has been discussion on either the python-dev or python-3000 mailing lists discussing how subclasses of builtin types (int, long, float, str, unicode, list, tuple, ...) should behave when confronted with one of a set of "standard" operators. While there has been general "it would be nice" if 'a + b' produced 'type(a)(a + b)' on certain occasions, this would change the semantics of all such operations in a backwards incompatible way (so has not been implemented). If you want to guarantee such behavior (without writing all of the __special__ methods) I would suggest that you instead create a __getattr__ method to automatically handle the coercion back into your subtype. 
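A hedged sketch of the type-preserving workaround suggested in the comments (names are illustrative, not from the Debian report): define __iadd__ explicitly so the augmented assignment no longer falls back to float.__add__, which always returns a plain float.

```python
class Float(float):
    def __new__(cls, v, x=1):
        self = float.__new__(cls, v)
        self.x = x   # extra state that a plain float result would lose
        return self

    def __iadd__(self, other):
        # Rebuild an instance of our own type instead of decaying to float.
        return type(self)(float(self) + float(other), self.x)

a = Float(2.0)
a += Float(3.0)
# a is still a Float, and the attribute survives
```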
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-13 17:57 Message: Logged In: YES user_id=849994 Originator: NO You don't need augmented assign for that, just doing "a+b" will give you a float too. ---------------------------------------------------------------------- Comment By: Jim Jewett (jimjjewett) Date: 2007-01-12 21:26 Message: Logged In: YES user_id=764593 Originator: NO Python float objects are immutable and can be shared. Therefore, their values cannot be modified -- which is why it falls back to not-in-place assignment. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633630&group_id=5470 From noreply at sourceforge.net Wed Jan 17 17:22:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 08:22:22 -0800 Subject: [ python-Bugs-1637850 ] make_table in difflib does not work with unicode Message-ID: Bugs item #1637850, was opened at 2007-01-18 01:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637850&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: y-unno (y-unno) Assigned to: Nobody/Anonymous (nobody) Summary: make_table in difflib does not work with unicode Initial Comment: make_table function in difflib.HtmlDiff does not work correctly when input strings are unicode. This is because the library uses cStringIO.StringIO classes, and cStringIO.StringIO returns strings encoded by the default encoding. 
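The underlying failure is the implicit encode to the default codec. A minimal illustration, using the modern io module as a stand-in for the 2.x StringIO/cStringIO split (an analogy, not the 2.x API itself):

```python
import io

unicode_text = u"na\u00efve"

# A text buffer (like 2.x StringIO.StringIO) keeps unicode intact:
buf = io.StringIO()
buf.write(unicode_text)

# Forcing the text through an ascii codec -- roughly what cStringIO's
# implicit str conversion amounted to -- fails for non-ASCII characters:
try:
    unicode_text.encode("ascii")
    ascii_ok = True
except UnicodeEncodeError:
    ascii_ok = False
```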
When the default encoding is 'ascii', for example, this behaviour becomes a problem because some unicode characters cannot be encoded in 'ascii'. So, please change cStringIO to StringIO in difflib.py. When I use StringIO in difflib.py, this function returns unicode strings and no problems occurred. This problem occurred in Python 2.5/2.4 on Windows XP. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637850&group_id=5470 From noreply at sourceforge.net Wed Jan 17 18:51:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 09:51:36 -0800 Subject: [ python-Feature Requests-1637926 ] Empty class 'Object' Message-ID: Feature Requests item #1637926, was opened at 2007-01-17 18:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637926&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: kxroberto (kxroberto) Assigned to: Nobody/Anonymous (nobody) Summary: Empty class 'Object' Initial Comment: An empty class 'Object' in builtins, which can be instantiated (with optional inline arguments as attributes (like dict)), and attributes added. Convenience - Easy OO variable container - known to pickle etc. 
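A runnable sketch of the requested container, slightly adapted from the idea in this request (the mutable default argument is replaced with None to avoid shared state; the class name follows the request):

```python
import pickle

class Object:
    # Sketch of the proposed convenience attribute container.
    def __init__(self, _d=None, **kwargs):
        if _d:
            kwargs.update(_d)
        self.__dict__ = kwargs

x = Object(spam=1)
x.a = 3

# "known to pickle": instances round-trip because all state is in __dict__
y = pickle.loads(pickle.dumps(x))
```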
http://groups.google.com/group/comp.lang.python/msg/3ff946e7da13dba9 http://groups.google.de/group/comp.lang.python/msg/a02f0eb4efb76b24 idea:

class X(object):
    def __init__(self,_d={},**kwargs):
        kwargs.update(_d)
        self.__dict__=kwargs

class Y(X):
    def __repr__(self):
        return ''%self.__dict__

------
x=X(spam=1)
x.a=3

Robert ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637926&group_id=5470 From noreply at sourceforge.net Wed Jan 17 19:10:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 10:10:58 -0800 Subject: [ python-Bugs-1637943 ] Problem packaging wx application with py2exe. Message-ID: Bugs item #1637943, was opened at 2007-01-17 20:10 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Indy (indy90) Assigned to: Nobody/Anonymous (nobody) Summary: Problem packaging wx application with py2exe. Initial Comment: I have created a minimal wx application, which runs fine. However, when I package it with py2exe and I try to run the .exe file, an error occurs, the program crashes (before even starting) and a pop-up box says to look at the log file for the error trace. It says that wx/_core_.pyd failed to be loaded (this file exists in my filesystem - I have checked). When I skip "zipfile = None" in the setup() function, another pop-up box also appears, and says that a DLL failed to be loaded. 
Python 2.5 wxPython 2.8.0.1 py2exe 0.6.6 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 From noreply at sourceforge.net Wed Jan 17 19:15:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 10:15:59 -0800 Subject: [ python-Bugs-1637952 ] typo http://www.python.org/doc/current/tut/node10.html Message-ID: Bugs item #1637952, was opened at 2007-01-17 18:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637952&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: jim pruett (cellurl) Assigned to: Nobody/Anonymous (nobody) Summary: typo http://www.python.org/doc/current/tut/node10.html Initial Comment: typo http://www.python.org/doc/current/tut/node10.html One [my] also instantiate an exception first before raising it and add any attributes to it as desired. 
"may" ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637952&group_id=5470 From noreply at sourceforge.net Wed Jan 17 19:26:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 10:26:06 -0800 Subject: [ python-Bugs-1637967 ] langref: missing item in numeric op list Message-ID: Bugs item #1637967, was opened at 2007-01-17 18:26 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637967&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: langref: missing item in numeric op list Initial Comment: Language ref manual sec 3.4.7 "Emulating numeric types", the section documenting __iadd__, __imul__, etc. says "These methods are called to implement the augmented arithmetic operations (+=, -=, *=, /=, %=, **=, <<=, ...)". /= is for truediv and %= is for mod. //= (floordiv) should also be in this list since it is one of the operations described. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637967&group_id=5470 From noreply at sourceforge.net Wed Jan 17 19:38:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 10:38:33 -0800 Subject: [ python-Bugs-1377858 ] segfaults when using __del__ and weakrefs Message-ID: Bugs item #1377858, was opened at 2005-12-10 13:20 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: Carl Friedrich Bolz (cfbolz) Assigned to: Michael Hudson (mwh) Summary: segfaults when using __del__ and weakrefs Initial Comment: You can segfault Python by creating a weakref to an object in its __del__ method, storing it somewhere and then trying to dereference the weakref afterwards. The attached file shows the described behaviour. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-17 10:38 Message: Logged In: YES user_id=357491 Originator: NO I have just been waiting on someone to do a final code review on it. As soon as someone else signs off I will commit it. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:02 Message: Logged In: YES user_id=33168 Originator: NO Brett, Michael, Armin, can we get this patch checked in for 2.5.1? 
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-08-19 21:31 Message: Logged In: YES user_id=357491 After finally figuring out where *list was made NULL (and adding a comment about it where it occurs), I added a test to test_weakref.py. Didn't try to tackle classic classes. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2006-08-12 04:31 Message: Logged In: YES user_id=4771 The clear_weakref(*list) only clears the first weakref to the object. You need a while loop in your patch. (attached proposed fix) Now we're left with fixing the same bug in old-style classes (surprise surprise), and turning the crasher into a test. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-06-29 10:36 Message: Logged In: YES user_id=357491 So after staring at this crasher it seemed to me that clearing the new weakrefs w/o calling their finalizers after calling the object's finalizer was the best solution. I couldn't think of any other good way to communicate to the new weakrefs that the object they refer to was no longer viable memory without doing clear_weakref() work by hand. Attached is a patch to do this. Michael, can you have a look? ---------------------------------------------------------------------- Comment By: Georg Brandl (birkenfeld) Date: 2006-01-10 11:29 Message: Logged In: YES user_id=1188172 Added to outstanding_crashes.py. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:58 Message: Logged In: YES user_id=6656 Hmm, maybe the referenced mayhem is more to do with clearing __dict__ than calling __del__. What breaks if we do things in this order: 1. call __del__ 2. clear weakrefs 3. clear __dict__ ? 
---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:54 Message: Logged In: YES user_id=6656 Hmm, I was kind of hoping this report would get more attention. The problem is obvious if you read typeobject.c around line 660: the weakref list is cleared before __del__ is called, so any weakrefs added during the execution of __del__ are never informed of the object's death. One fix for this would be to clear the weakref list _after_ calling __del__, but that led to other mayhem in ways I haven't bothered to understand (see SF bug #742911). I guess we could just clear out any weakrefs created in __del__ without calling their callbacks. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 From noreply at sourceforge.net Wed Jan 17 19:40:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 10:40:03 -0800 Subject: [ python-Bugs-1637943 ] Problem packaging wx application with py2exe. Message-ID: Bugs item #1637943, was opened at 2007-01-17 10:10 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: None Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Indy (indy90) Assigned to: Nobody/Anonymous (nobody) Summary: Problem packaging wx application with py2exe. Initial Comment: I have created a minimal wx application, which runs fine. 
However, when I package it with py2exe and I try to run the .exe file, an error occurs, the program crashes (before even starting) and a pop-up box says to look at the log file for the error trace. It says that wx/_core_.pyd failed to be loaded (this file exists in my filesystem - I have checked). When I skip "zipfile = None" in the setup() function, another pop-up box also appears, and says that a DLL failed to be loaded. Python 2.5 wxPython 2.8.0.1 py2exe 0.6.6 ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-17 10:40 Message: Logged In: YES user_id=357491 Originator: NO This is the bug tracker for the Python programming language. Please report this issue to the py2exe development team. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 From noreply at sourceforge.net Wed Jan 17 20:56:23 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 11:56:23 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. 
Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M.
Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. 
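A short script can demonstrate the fcntl() behaviour just described, namely that closing any descriptor referring to a file releases all of the process's locks on it. This is a sketch assuming POSIX advisory-locking semantics, so Unix only:

```python
import fcntl
import os
import subprocess
import sys
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f1 = open(path, "r+b")   # the descriptor we will lock
f2 = open(path, "rb")    # a second descriptor on the same file
fcntl.lockf(f1, fcntl.LOCK_EX)

def other_process_can_lock():
    # A child process attempts a non-blocking exclusive lock; if it
    # succeeds, our lock must already have been released.
    prog = ("import fcntl, sys\n"
            "f = open(sys.argv[1], 'r+b')\n"
            "try:\n"
            "    fcntl.lockf(f, fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
            "except OSError:\n"
            "    sys.exit(1)\n")
    return subprocess.call([sys.executable, "-c", prog, path]) == 0

before = other_process_can_lock()  # False: f1 still holds the lock
f2.close()                         # close the descriptor we never locked
after = other_process_can_lock()   # True: the lock taken on f1 is gone too

f1.close()
os.unlink(path)
```

The lock vanishes even though f1, the descriptor it was set on, is still open.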
So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110.
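For reference, the copy-back technique proposed in the initial comment can be sketched in isolation. This is a minimal illustration, not the actual mailbox-copy-back patch, and it assumes file.truncate() is available:

```python
import os

def copy_back_flush(path, new_contents):
    # Rewrite the mailbox in place rather than renaming a temporary
    # file over it: another process blocked on an fcntl() lock of this
    # file will then see the new contents once it acquires the lock,
    # because the file it holds open is still the same file.
    f = open(path, "r+b")
    try:
        f.write(new_contents)
        f.flush()
        # Trim any leftover bytes if the old contents were longer.
        f.truncate(len(new_contents))
    finally:
        f.close()
```

Because the inode never changes, a delivery agent holding the old descriptor is never left writing into a deleted file.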
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). 
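The _user_toc scheme described above can be reduced to a small standalone sketch (the function name and structure are illustrative, not the actual patch, which works inside _singlefileMailbox):

```python
def update_user_toc(user_toc, next_user_key, file_seqs):
    # user_toc maps user-visible keys to sequence numbers in the file.
    # After the real table of contents is rebuilt, drop keys whose
    # messages vanished and issue fresh keys for messages that appeared.
    known = set(file_seqs)
    user_toc = dict((k, s) for k, s in user_toc.items() if s in known)
    issued = set(user_toc.values())
    for seq in file_seqs:
        if seq not in issued:
            user_toc[next_user_key] = seq
            next_user_key += 1
    return user_toc, next_user_key
```

Note that this inherits the assumption criticised in the comment: a given sequence number is taken to denote the same message across regenerations.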
Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. 
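The length check proposed here amounts to only a few lines. In this standalone sketch, ExternalClashError is the real mailbox exception class, but the wrapper class itself is illustrative rather than the committed patch:

```python
import os
import mailbox  # for ExternalClashError, the exception the patch raises

class CheckedAppender:
    # Remember the mailbox size after each write we make, and refuse
    # to touch the file again if another program changed its length.
    def __init__(self, path):
        self.path = path
        self.known_size = os.path.getsize(path)

    def append(self, data):
        actual = os.path.getsize(self.path)
        if actual != self.known_size:
            raise mailbox.ExternalClashError(
                "mailbox size changed: %d != %d" % (actual, self.known_size))
        with open(self.path, "ab") as f:
            f.write(data)
        self.known_size = os.path.getsize(self.path)
```

The check cannot catch same-length modifications, but it turns the common case of silent message loss into a loud failure.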
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. 
It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. 
I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Wed Jan 17 21:53:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 12:53:55 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M.
Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:05:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:05:10 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. 
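The copy-back strategy proposed in the initial comment can be sketched as below; copy_back_flush is a hypothetical helper for illustration, not the actual attached patch. The point is that the original inode stays in place, so other processes blocked on an fcntl() lock still see the real mailbox when the lock is released:

```python
import shutil

def copy_back_flush(orig_path, new_file):
    """Sketch of the copy-back fix: rather than renaming a temporary
    file over the mailbox (which orphans the inode that other fcntl()
    lockers hold open), copy its contents back into the original file
    and truncate to the new size."""
    new_file.seek(0)
    with open(orig_path, 'rb+') as f:
        shutil.copyfileobj(new_file, f)
        f.truncate()  # shrink the file if the new contents are shorter
```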
File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me.
It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? 
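The warn-if-truncate-is-missing suggestion above might look like the following; check_truncate is a hypothetical helper, not code from any of the attached patches:

```python
import warnings

def check_truncate(f):
    # Sketch of the suggestion: warn (or raise) when the platform's
    # file objects lack truncate(), since flush() then cannot keep
    # message keys valid across the rewrite.
    if not hasattr(f, 'truncate'):
        warnings.warn('file.truncate() unavailable; mailbox keys may '
                      'be invalidated by flush()')
        return False
    return True
```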
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages).
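The fcntl() behaviour David describes is documented POSIX semantics and can be demonstrated directly; after the second descriptor is closed, another process can take the lock immediately:

```python
import fcntl
import os
import tempfile

# POSIX gotcha: closing *any* file descriptor that refers to a file
# drops all of the process's fcntl() locks on that file, even when
# the lock was taken through a different descriptor.
path = os.path.join(tempfile.mkdtemp(), 'lockdemo')
open(path, 'wb').close()

f1 = open(path, 'rb+')
fcntl.lockf(f1, fcntl.LOCK_EX)   # exclusive lock taken through f1

f2 = open(path, 'rb')            # a second, unrelated descriptor
f2.close()                       # closing it releases f1's lock too
```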
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:06:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:06:18 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff
Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. 
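[Editorial note] The locking discipline discussed in this thread (and spelled out in akuchling's doc patch: bracket modifications with lock(), flush while the lock is held) can be sketched as follows. The path and message text are invented for the demo; this is an illustration of the recommended usage, not the patch itself:

```python
import mailbox
import os
import tempfile

# Sketch of the discipline the doc patch recommends: bracket every
# modification of a single-file mailbox with lock()/unlock(), and call
# flush() while the lock is still held, so another process cannot
# invalidate the table of contents between the write and the unlock.
# (The path and message text below are made up for this demo.)
path = os.path.join(tempfile.mkdtemp(), "demo.mbox")
mb = mailbox.mbox(path)
mb.lock()
try:
    mb.add("From: sender@example.com\n\nhello\n")
    mb.flush()   # write back while still holding the lock
finally:
    mb.unlock()
assert len(mb) == 1
mb.close()
```

Flushing inside the lock is exactly what avoids the unsafe window this thread identifies in the no-truncate() code path, where the lock was momentarily released after flushing.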
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:10:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:10:07 -0800 Subject: [ python-Bugs-1637967 ] langref: missing item in numeric op list Message-ID: Bugs item #1637967, was opened at 2007-01-17 18:26 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637967&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: paul rubin (phr) Assigned to: Nobody/Anonymous (nobody) Summary: langref: missing item in numeric op list Initial Comment: Language ref manual sec 3.4.7 "Emulating numeric types", the section documenting __iadd__, __imul__, etc. says "These methods are called to implement the augmented arithmetic operations (+=, -=, *=, /=, %=, **=, <<=, ...)". /= is for truediv and %= is for mod. //= (floordiv) should also be in this list since it is one of the operations described. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:10 Message: Logged In: YES user_id=849994 Originator: NO Thanks, fixed in rev.
53475, 53476 (2.5). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637967&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:12:01 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:12:01 -0800 Subject: [ python-Bugs-1637952 ] typo http://www.python.org/doc/current/tut/node10.html Message-ID: Bugs item #1637952, was opened at 2007-01-17 18:15 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637952&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Out of Date Priority: 5 Private: No Submitted By: jim pruett (cellurl) Assigned to: Nobody/Anonymous (nobody) Summary: typo http://www.python.org/doc/current/tut/node10.html Initial Comment: typo http://www.python.org/doc/current/tut/node10.html One [my] also instantiate an exception first before raising it and add any attributes to it as desired. "may" ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:11 Message: Logged In: YES user_id=849994 Originator: NO Thanks for reporting, this seems to be already fixed in the development docs. 
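[Editorial note] Returning to the //= item in bug #1637967 above: //= dispatches to __ifloordiv__ exactly as += dispatches to __iadd__, which is why the operator belongs in the documented list. The class below is a made-up minimal illustration:

```python
class Accumulator:
    """Made-up class showing that //= calls __ifloordiv__,
    just as += calls __iadd__ (see bug #1637967)."""
    def __init__(self, value):
        self.value = value

    def __iadd__(self, other):
        self.value += other
        return self

    def __ifloordiv__(self, other):
        self.value //= other
        return self

a = Accumulator(7)
a += 3       # dispatches to __iadd__
a //= 2      # dispatches to __ifloordiv__, the operator missing from the list
assert a.value == 5
```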
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637952&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:13:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:13:09 -0800 Subject: [ python-Bugs-1637850 ] make_table in difflib does not work with unicode Message-ID: Bugs item #1637850, was opened at 2007-01-17 16:22 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637850&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: y-unno (y-unno) Assigned to: Nobody/Anonymous (nobody) Summary: make_table in difflib does not work with unicode Initial Comment: make_table function in difflib.HtmlDiff does not work correctly when input strings are unicode. This is because the library uses cStringIO.StringIO classes, and cStringIO.StringIO returns strings encoded by the default encoding. When the default encoding is 'ascii', for example, this behaviour becomes a problem because some unicode characters cannot be encoded in 'ascii'. So, please change cStringIO to StringIO in difflib.py. When I use StringIO in difflib.py, this function returns unicode strings and no problems occurred. This problem occurred in Python 2.5/2.4 on Windows XP. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:13 Message: Logged In: YES user_id=849994 Originator: NO I don't know. Perhaps we should rather fix cStringIO to accept Unicode strings.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637850&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:13:39 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:13:39 -0800 Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall Message-ID: Feature Requests item #1635335, was opened at 2007-01-14 20:00 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Distutils >Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Add registry functions to windows postinstall Initial Comment: It would be useful to add regkey_created() or regkey_modified() to windows postinstall scripts along with directory_created() and file_created(). Useful for adding installed package to App Paths. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:20:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:20:53 -0800 Subject: [ python-Bugs-1629125 ] Incorrect type in PyDict_Next() example code Message-ID: Bugs item #1629125, was opened at 2007-01-05 23:15 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Jason Evans (jasonevans) Assigned to: Neal Norwitz (nnorwitz) Summary: Incorrect type in PyDict_Next() example code Initial Comment: In the PyDict_Next() documentation, there are two example snippets of code. In both snippets, the line: int pos = 0; should instead be: ssize_t pos = 0; or perhaps: Py_ssize_t pos = 0; On an LP64 system, the unfixed snippets will cause a compiler warning due to size mismatch between int and ssize_t. Using Python 2.5 on RHEL WS 4, x86_64. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:20 Message: Logged In: YES user_id=849994 Originator: NO Yep, it has to be Py_ssize_t. Fixed in rev. 53477, 53478 (2.5). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629125&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:24:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:24:04 -0800 Subject: [ python-Bugs-1627036 ] website issue reporter down Message-ID: Bugs item #1627036, was opened at 2007-01-03 15:01 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627036&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Jim Jewett (jimjjewett) Assigned to: Nobody/Anonymous (nobody) Summary: website issue reporter down Initial Comment: To request an update for python.org, the procedure seems to be to create a ticket via: http://wiki.python.org/moin/PythonWebsiteCreatingNewTickets which says that self registration is disabled, but sends you to: http://pydotorg.python.org/pydotorg/newticket which says that admin privs are required to create a new ticket. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:24 Message: Logged In: YES user_id=849994 Originator: NO It only says "TICKET_CREATE privileges are required to perform this operation". In any case, this is discussed on the pydotorg,at,python.org mailing list. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627036&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:25:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:25:29 -0800 Subject: [ python-Bugs-1624674 ] webbrowser.open_new() suggestion Message-ID: Bugs item #1624674, was opened at 2006-12-30 00:03 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1624674&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Imre Péntek (imi1984) >Assigned to: Georg Brandl (gbrandl) Summary: webbrowser.open_new() suggestion Initial Comment: Hello, under Linux if I use webbrowser.open_new('...') a konqueror gets invoked. At the time when invoking konqueror (maybe you probe first, but anyways) you assume that the user has a properly installed KDE. But if you assume the user has a properly installed KDE you have a better opportunity to open a webpage, even in the browser preferred by the user -- no matter really what it is. Try this one: kfmclient exec http://sourceforge.net/ using this one the client associated with .html in kcontrol gets invoked. I suppose that (because of the ability to customize the browser) this way would be better if available than guessing which browser the user would prefer. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 23:08 Message: Logged In: YES user_id=1591633 Originator: NO A quick look at the code makes me think that it does try to run kfmclient first.
Specifically, line 351 of webbrowser.py tries kfmclient, while line 363 of webbrowser.py opens konqueror. I don't really run KDE, Gnome, or Windows, so I'm not a lot of help for testing this for you. I can, however, tell you that it does the "right thing" for me, in that it opens Firefox. When I did Python development on Windows, it also "did the right thing" there. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1624674&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:26:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:26:43 -0800 Subject: [ python-Bugs-1619659 ] htonl, ntohl don't handle negative longs Message-ID: Bugs item #1619659, was opened at 2006-12-20 18:42 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Adam Olsen (rhamphoryncus) >Assigned to: Guido van Rossum (gvanrossum) Summary: htonl, ntohl don't handle negative longs Initial Comment: >>> htonl(-5) -67108865 >>> htonl(-5L) Traceback (most recent call last): File "<stdin>", line 1, in ? OverflowError: can't convert negative value to unsigned long It works fine in 2.1 and 2.2, but fails in 2.3, 2.4, 2.5. htons, ntohs do not appear to have the bug, but I'm not 100% sure. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:26 Message: Logged In: YES user_id=849994 Originator: NO Guido, you applied the patch, can this bug be closed?
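[Editorial note] Independent of the fix discussed in this thread, a user-level workaround for the reported behaviour is to normalize to the unsigned 32-bit value before calling htonl. The round-trip check below deliberately avoids asserting a swapped value, since the result of htonl depends on host byte order:

```python
import socket

# The report: on Python 2, htonl(-5) returned a negative int while
# htonl(-5L) raised OverflowError. Masking to 32 bits first yields the
# unsigned value that the underlying C htonl()/ntohl() are defined over.
value = -5 & 0xFFFFFFFF            # two's-complement bits of -5
assert value == 4294967291
# Round-trip through the pair; holds on both little- and big-endian hosts.
assert socket.ntohl(socket.htonl(value)) == value
```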
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 07:36 Message: Logged In: YES user_id=1591633 Originator: NO It is here: https://sourceforge.net/tracker/index.php?func=detail&aid=1635058&group_id=5470&atid=305470 I apologize for not getting to this sooner, but I've been working like a frenzied devil at work. Things have been really hectic with our customers wanting year end reports. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-14 00:08 Message: Logged In: YES user_id=6380 Originator: NO mark-roberts, where's your patch? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-30 02:15 Message: Logged In: YES user_id=1591633 Originator: NO Hmmm, yes, I see a problem. At the very least, I think we may be wanting some consistency between the acceptance of ints and longs. Also, I think we should return an unsigned long instead of just a long (which can be negative). I've got a patch right now to make htonl, ntohl, htons, and ntohs never return a negative number. I'm rather waffling on the idea of whether we should accept negative numbers at all in any of the functions. The behavior is undefined, and it is, after all, better not to guess what a user intended. However, consistency should be a desirable goal, and we should make the interface consistent for both ints and longs. Mark ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2006-12-28 21:37 Message: Logged In: YES user_id=12364 Originator: YES I forgot to mention it, but the only reason htonl should get passed a negative number is that it (and possibly struct?) produce a negative number. Changing them to always produce positive numbers may be an alternative solution.
Or we may want to do both, always producing positive while also accepting negative. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-26 09:24 Message: Logged In: YES user_id=1591633 Originator: NO >From man page for htonl and friends: #include <arpa/inet.h> uint32_t htonl(uint32_t hostlong); uint16_t htons(uint16_t hostshort); uint32_t ntohl(uint32_t netlong); uint16_t ntohs(uint16_t netshort); Python does call these underlying functions in Modules/socketmodule.c. The problem comes from the fact that PyLong_AsUnsignedLong() called in socket_htonl() specifically checks to see that the value cannot be less than 0. The error checking was rather exquisite, I might add. - Mark ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:29:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:29:13 -0800 Subject: [ python-Bugs-1566611 ] Idle 1.2 - Calltips Hotkey does not work Message-ID: Bugs item #1566611, was opened at 2006-09-27 20:24 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1566611&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: IDLE Group: None >Status: Pending Resolution: None Priority: 5 Private: No Submitted By: fladd (fladd710) Assigned to: Kurt B. Kaiser (kbk) Summary: Idle 1.2 - Calltips Hotkey does not work Initial Comment: Hitting CTRL+Backslash does not show the calltip (which is not shown by default) on Windows XP with Python 1.5 Final.
---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 21:29 Message: Logged In: YES user_id=849994 Originator: NO fladd: Please supply the additional information asked for so that we are able to process this bug. Setting status to Pending. ---------------------------------------------------------------------- Comment By: Tal Einat (taleinat) Date: 2006-12-09 17:45 Message: Logged In: YES user_id=1330769 Originator: NO You mean 2.5 final is suppose... Works for me, Python 2.5 final, WinXP Pro. Does this never work or only sometimes? Have you checked your key definitions? Does it work in the Shell window? Please be more specific... ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1566611&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:45:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:45:25 -0800 Subject: [ python-Bugs-1619659 ] htonl, ntohl don't handle negative longs Message-ID: Bugs item #1619659, was opened at 2006-12-20 13:42 Message generated for change (Settings changed) made by gvanrossum You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed Resolution: None Priority: 5 Private: No Submitted By: Adam Olsen (rhamphoryncus) Assigned to: Guido van Rossum (gvanrossum) Summary: htonl, ntohl don't handle negative longs Initial Comment: >>> htonl(-5) -67108865 >>> htonl(-5L) Traceback (most recent call last): File "<stdin>", line 1, in ? OverflowError: can't convert negative value to unsigned long It works fine in 2.1 and 2.2, but fails in 2.3, 2.4, 2.5.
htons, ntohs do not appear to have the bug, but I'm not 100% sure. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 16:26 Message: Logged In: YES user_id=849994 Originator: NO Guido, you applied the patch, can this bug be closed? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-14 02:36 Message: Logged In: YES user_id=1591633 Originator: NO It is here: https://sourceforge.net/tracker/index.php?func=detail&aid=1635058&group_id=5470&atid=305470 I apologize for not getting to this sooner, but I've been working like a frenzied devil at work. Things have been really hectic with our customers wanting year end reports. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2007-01-13 19:08 Message: Logged In: YES user_id=6380 Originator: NO mark-roberts, where's your patch? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-29 21:15 Message: Logged In: YES user_id=1591633 Originator: NO Hmmm, yes, I see a problem. At the very least, I think we may be wanting some consistency between the acceptance of ints and longs. Also, I think we should return an unsigned long instead of just a long (which can be negative). I've got a patch right now to make htonl, ntohl, htons, and ntohs never return a negative number. I'm rather waffling on the idea of whether we should accept negative numbers at all in any of the functions. The behavior is undefined, and it is, after all, better not to guess what a user intended. However, consistency should be a desirable goal, and we should make the interface consistent for both ints and longs.
Mark ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2006-12-28 16:37 Message: Logged In: YES user_id=12364 Originator: YES I forgot to mention it, but the only reason htonl should get passed a negative number is that it (and possibly struct?) produce a negative number. Changing them to always produce positive numbers may be an alternative solution. Or we may want to do both, always producing positive while also accepting negative. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2006-12-26 04:24 Message: Logged In: YES user_id=1591633 Originator: NO >From man page for htonl and friends: #include <arpa/inet.h> uint32_t htonl(uint32_t hostlong); uint16_t htons(uint16_t hostshort); uint32_t ntohl(uint32_t netlong); uint16_t ntohs(uint16_t netshort); Python does call these underlying functions in Modules/socketmodule.c. The problem comes from the fact that PyLong_AsUnsignedLong() called in socket_htonl() specifically checks to see that the value cannot be less than 0. The error checking was rather exquisite, I might add. - Mark ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1619659&group_id=5470 From noreply at sourceforge.net Wed Jan 17 22:58:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 13:58:33 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Comment added) made by amonthei You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Andy Monthei (amonthei) Date: 2007-01-17 15:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 16:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Thu Jan 18 06:24:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 21:24:47 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 23:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. 
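[Editorial note] While waiting for a reproducible file, the diagnostic the reporter describes can be sketched as follows; the record width and sample data are invented, and a missed newline shows up as a line of double the expected width:

```python
import io

# Sketch of the diagnostic described in the thread: iterate a fixed-width
# file with "for line in file" and flag any line whose stripped length
# differs from the expected record width (a skipped newline produces a
# double-width line). Width and sample data are invented for this demo.
def find_bad_lines(fileobj, width):
    bad = []
    for lineno, line in enumerate(fileobj, 1):
        if len(line.rstrip("\r\n")) != width:
            bad.append(lineno)
    return bad

# Record 3 simulates two 4-byte records whose separating newline was lost.
sample = io.StringIO("aaaa\nbbbb\nccccdddd\n")
assert find_bad_lines(sample, 4) == [3]
```

On the real data the reporter would run this with width=6990 against the file opened in binary mode, so that Windows newline translation cannot mask the problem.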
---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 15:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 16:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Thu Jan 18 07:17:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 22:17:50 -0800 Subject: [ python-Bugs-776202 ] MacOS9: test_uu fails Message-ID: Bugs item #776202, was opened at 2003-07-23 05:02 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=776202&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Macintosh Group: Python 2.6 Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Jack Jansen (jackjansen) Assigned to: A.M. 
Kuchling (akuchling) Summary: MacOS9: test_uu fails Initial Comment: test_uu fails on MacPython-OS9: AssertionError: 'The smooth-scaled python crept over the sleeping dog\r' != 'The smooth-scaled python crept over the sleeping dog\n' Presumably it mixes binary and text I/O. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 22:17 Message: Logged In: YES user_id=33168 Originator: NO r53481 partially reverted the changes that were failing on Windows. test_encode (test.test_uu.UUFileTest) ... FAIL ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-22 08:43 Message: Logged In: YES user_id=11375 Originator: NO Applied in rev. 53145. ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2006-12-22 07:46 Message: Logged In: YES user_id=45365 Originator: YES MacOS9 is long dead, uuencoded files are probably even longer dead... If the patch looks good: apply it. But I wouldn't spend more than a few milliseconds on the whole issue:-) ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-22 05:33 Message: Logged In: YES user_id=11375 Originator: NO Should the suggested patch be applied, simply for the sake of consistency in test_uu? It's probably difficult to replicate this bug now; does Jack even have a MacOS 9 installation any more? ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2003-08-04 06:33 Message: Logged In: YES user_id=89016 Can you try the following patch (diff.txt)? The patch changes all open() statements to use text mode. I've tested the patch on Windows and Linux. I don't know why the old test mixed text and binary mode. The test should have failed even before the port to PyUnit. 
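[Editor's note: the '\r' vs '\n' mismatch in this failure is the classic symptom of mixing text and binary I/O. A small sketch in modern Python (not the original 2.x test) showing the difference between the two modes; names are illustrative:]

```python
import os
import tempfile

def read_both_ways(data):
    # Write raw bytes, then read them back in text mode (universal
    # newlines translate '\r' and '\r\n' into '\n') and in binary
    # mode (bytes come back untouched).
    fd, path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, 'wb') as f:
            f.write(data)
        with open(path, 'r') as f:
            text = f.read()
        with open(path, 'rb') as f:
            raw = f.read()
    finally:
        os.remove(path)
    return text, raw
```

For example, a line ending in a bare '\r' reads back as '\n' in text mode but stays '\r' in binary mode, which is exactly the pair of strings the assertion above compares.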
---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-08-04 04:41 Message: Logged In: YES user_id=45365 It's in test_decode. The log is attached. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2003-08-04 04:21 Message: Logged In: YES user_id=89016 It would help to see a complete traceback (Is the error in test_encode or test_decode?) ---------------------------------------------------------------------- Comment By: Jack Jansen (jackjansen) Date: 2003-07-31 12:50 Message: Logged In: YES user_id=45365 I changed the open call to use 'rU' instead of 'r' (test_uu rev. 1.6.6.1). I get the distinct impression that this isn't the right fix, though, but that the real problem is elsewhere (mixing up text and binary I/O), so I'd like a second opinion. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=776202&group_id=5470 From noreply at sourceforge.net Thu Jan 18 08:12:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 17 Jan 2007 23:12:35 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-18 01:12 Message: Logged In: YES user_id=1591633 Originator: NO I don't know if this helps: I spent the last little while creating / reading random files that all (seemingly) matched the description you gave us. None of these files failed to read properly. (e.g., have the right amount of rows with a line length that seemingly was the right line. Definitely no doubling lines). Perusing the file source code found a detailed discussion of fgets vs fgetc for finding the next line in the file. Have you tried reading the file with fp.read(8192) or similar? Hopefully you're able to reproduce the bug with scrubbed data (because I couldn't construct random data to do so). Good luck. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 23:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. 
---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 15:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 16:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Thu Jan 18 10:23:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 01:23:02 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 17:56 Message generated for change (Comment added) made by doerwalter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Walter Dörwald (doerwalter) Date: 2007-01-18 10:23 Message: Logged In: YES user_id=89016 Originator: NO Are you using any of the unicode reading features (i.e. codecs.EncodedFile etc.) or are you using plain open() for reading the file? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-18 08:12 Message: Logged In: YES user_id=1591633 Originator: NO I don't know if this helps: I spent the last little while creating / reading random files that all (seemingly) matched the description you gave us. None of these files failed to read properly. (e.g., have the right amount of rows with a line length that seemingly was the right line. Definitely no doubling lines). Perusing the file source code found a detailed discussion of fgets vs fgetc for finding the next line in the file. Have you tried reading the file with fp.read(8192) or similar? Hopefully you're able to reproduce the bug with scrubbed data (because I couldn't construct random data to do so). Good luck. 
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-18 06:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 22:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 23:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Thu Jan 18 15:18:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 06:18:10 -0800 Subject: [ python-Bugs-1638627 ] Incorrect documentation for random.betavariate() Message-ID: Bugs item #1638627, was opened at 2007-01-18 15:18 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Troels Walsted Hansen (troels) Assigned to: Nobody/Anonymous (nobody) Summary: Incorrect documentation for random.betavariate() Initial Comment: Both the documentation at http://docs.python.org/lib/module-random.html and the docstring have the same erroneous input conditions. They claim input must be >-1 when it must in fact be >0. Note also the freak "}" that has snuck into the docstring (copied and pasted from the documentation perhaps?). >>> import random >>> print random.betavariate.__doc__ Beta distribution. Conditions on the parameters are alpha > -1 and beta} > -1. Returned values range between 0 and 1. >>> random.betavariate(0, 0) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 594, in betavariate y = self.gammavariate(alpha, 1.) 
File "/usr/lib/python2.3/random.py", line 457, in gammavariate raise ValueError, 'gammavariate: alpha and beta must be > 0.0' ValueError: gammavariate: alpha and beta must be > 0.0 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 From noreply at sourceforge.net Thu Jan 18 15:25:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 06:25:08 -0800 Subject: [ python-Bugs-1638627 ] Incorrect documentation for random.betavariate() Message-ID: Bugs item #1638627, was opened at 2007-01-18 14:18 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Troels Walsted Hansen (troels) >Assigned to: Raymond Hettinger (rhettinger) Summary: Incorrect documentation for random.betavariate() Initial Comment: Both the documentation at http://docs.python.org/lib/module-random.html and the docstring have the same erroneous input conditions. They claim input must be >-1 when it must in fact be >0. Note also the freak "}" that has snuck into the docstring (copied and pasted from the documentation perhaps?). >>> import random >>> print random.betavariate.__doc__ Beta distribution. Conditions on the parameters are alpha > -1 and beta} > -1. Returned values range between 0 and 1. >>> random.betavariate(0, 0) Traceback (most recent call last): File "", line 1, in ? File "/usr/lib/python2.3/random.py", line 594, in betavariate y = self.gammavariate(alpha, 1.) 
File "/usr/lib/python2.3/random.py", line 457, in gammavariate raise ValueError, 'gammavariate: alpha and beta must be > 0.0' ValueError: gammavariate: alpha and beta must be > 0.0 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 From noreply at sourceforge.net Thu Jan 18 16:34:19 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 07:34:19 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Comment added) made by amonthei You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. 
Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Andy Monthei (amonthei) Date: 2007-01-18 09:34 Message: Logged In: YES user_id=1693612 Originator: YES I am using open() for reading the file, no other features. I have also had fileinput.input(fileList) compound the problem. Each file that this has happened to is a fixed block file of either 6990 or 7700 bytes wide but this I think is insignificant. When looking at the file in a hex editor everything looks fine and a small Java program using a buffered reader will give me the correct line count when Python does not. Using something like fp.read(8192) I'm sure might temporarily solve my problem but I will keep working on getting a file I can upload. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2007-01-18 03:23 Message: Logged In: YES user_id=89016 Originator: NO Are you using any of the unicode reading features (i.e. codecs.EncodedFile etc.) or are you using plain open() for reading the file? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-18 01:12 Message: Logged In: YES user_id=1591633 Originator: NO I don't know if this helps: I spent the last little while creating / reading random files that all (seemingly) matched the description you gave us. None of these files failed to read properly. (e.g., have the right amount of rows with a line length that seemingly was the right line. Definitely no doubling lines). Perusing the file source code found a detailed discussion of fgets vs fgetc for finding the next line in the file. Have you tried reading the file with fp.read(8192) or similar? Hopefully you're able to reproduce the bug with scrubbed data (because I couldn't construct random data to do so). Good luck. 
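[Editor's note: since the files are fixed-block (6990 or 7700 bytes per record), one workaround in the spirit of the fp.read() suggestion above is to read by record width instead of by line. A sketch under the assumption that the width includes the trailing CRLF; names are illustrative:]

```python
def iter_records(path, width):
    # Yield fixed-width records directly, sidestepping the line
    # iterator entirely; width counts the trailing CRLF bytes.
    with open(path, 'rb') as fp:
        while True:
            rec = fp.read(width)
            if not rec:
                break
            yield rec.rstrip(b'\r\n')
```

Reading in binary mode also rules out any newline-translation layer as a factor.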
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 23:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 15:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 16:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Thu Jan 18 19:14:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 10:14:36 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-18 18:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. 
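[Editor's note: the copy-back approach described in the initial comment can be sketched as follows. This is a simplification, not the attached patch; the point is that overwriting in place preserves the original file's inode, so a delivery agent blocked on an fcntl() lock sees the new contents rather than writing into a deleted file:]

```python
import os

def copy_back(path, new_path, chunk_size=65536):
    # Overwrite the original mailbox in place instead of renaming the
    # temporary file over it, so the inode (and any fcntl() locks held
    # against it by other processes) stays valid.
    with open(new_path, 'rb') as new, open(path, 'rb+') as orig:
        while True:
            chunk = new.read(chunk_size)
            if not chunk:
                break
            orig.write(chunk)
        orig.truncate()  # shrink the file if the new contents are shorter
    os.remove(new_path)
```

The truncate() call is what the fallback path in the patch has to emulate on platforms without file.truncate().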
The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 20:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 19:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 
53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 06:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 19:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? 
File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 19:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 18:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 17:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. 
I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. 
File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. 
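[Editor's illustration] The _user_toc scheme described above can be sketched as a toy model (this is not the attached patch; it also simplifies by assuming each message keeps a stable sequence number between scans, which, as noted, is exactly the assumption that can go wrong):

```python
def refresh_user_toc(user_toc, file_seq_numbers, next_user_key):
    # Keep previously issued keys whose message is still present, drop
    # keys whose message has vanished, and hand out never-before-issued
    # keys for messages that have newly appeared in the file.
    previously_mapped = set(user_toc.values())
    refreshed = dict((k, v) for k, v in user_toc.items()
                     if v in file_seq_numbers)
    for seq in file_seq_numbers:
        if seq not in previously_mapped:
            refreshed[next_user_key] = seq
            next_user_key += 1
    return refreshed, next_user_key

# Messages 0-2 were issued keys 0-2; message 2 has since vanished and a
# new message (sequence number 3) has appeared.
keys, next_key = refresh_user_toc({0: 0, 1: 1, 2: 2}, [0, 1, 3], 3)
```

Here the program's key 2 is retired rather than reused, so stale keys fail loudly instead of silently addressing the wrong message.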
This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. 
Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. 
mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. 
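[Editor's illustration] The length check discussed in these comments amounts to something like the following sketch. ExternalClashError here is a local stand-in for the real mailbox.ExternalClashError, and check_size_before_flush is our own name, not anything from length-checking.diff.

```python
import os
import tempfile

class ExternalClashError(Exception):
    """Stand-in for mailbox.ExternalClashError (illustrative only)."""

def check_size_before_flush(path, expected_size):
    # Seek-to-end length check: if another program appended messages since
    # we last read the mailbox, rewriting the file would destroy them.
    actual = os.path.getsize(path)
    if actual != expected_size:
        raise ExternalClashError('mailbox size changed from %d to %d'
                                 % (expected_size, actual))

fd, path = tempfile.mkstemp()
os.write(fd, b'From a\n\nfirst message\n')
os.close(fd)

expected = os.path.getsize(path)
check_size_before_flush(path, expected)    # unchanged: passes quietly

with open(path, 'ab') as f:                # simulate another program
    f.write(b'From b\n\nnew message\n')    # delivering mail meanwhile

try:
    check_size_before_flush(path, expected)
    clash_detected = False
except ExternalClashError:
    clash_detected = True

os.remove(path)
```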
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Thu Jan 18 19:15:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 10:15:52 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-18 18:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. 
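[Editor's illustration] The copy-back approach proposed in the initial comment can be sketched roughly as follows. This is an illustration of the idea, not the attached patch; copy_back is a hypothetical helper.

```python
import os
import shutil
import tempfile

def copy_back(tmp_path, mbox_path):
    # Copy the rewritten contents over the original file in place, then
    # truncate any leftover bytes, so other processes blocked on the
    # original file's fcntl() lock still hold a descriptor for live data
    # (unlike rename(), which leaves them writing into a deleted file).
    with open(tmp_path, 'rb') as src, open(mbox_path, 'rb+') as dst:
        shutil.copyfileobj(src, dst)
        dst.truncate()
    os.remove(tmp_path)

workdir = tempfile.mkdtemp()
mbox_path = os.path.join(workdir, 'mbox')
tmp_path = os.path.join(workdir, 'mbox.tmp')
with open(mbox_path, 'wb') as f:
    f.write(b'older, longer mailbox contents\n')
with open(tmp_path, 'wb') as f:
    f.write(b'rewritten mailbox\n')

copy_back(tmp_path, mbox_path)
with open(mbox_path, 'rb') as f:
    result = f.read()
```

The truncate() call is what the platforms discussed later in the thread lack, which is why the module carries a separate code path for them.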
File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. 
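[Editor's illustration] The lock()-clears-ToC behaviour described above might look roughly like this minimal stand-in class (hypothetical, not the committed _singlefileMailbox code):

```python
import warnings

class SingleFileBoxSketch:
    """Hypothetical stand-in for _singlefileMailbox, not the real class."""

    def __init__(self):
        self._toc = {0: (0, 100)}   # cached key -> (start, stop) offsets
        self._pending = False       # True when unflushed changes exist

    def lock(self):
        # Discard the cached table of contents so the file is re-read
        # under the lock; warn instead if discarding would lose pending
        # modifications made before lock() was called.
        if self._pending:
            warnings.warn('mailbox modified before lock(); '
                          'cached keys may no longer be valid')
        else:
            self._toc = None

box = SingleFileBoxSketch()
box.lock()
toc_cleared = box._toc is None            # clean box: ToC discarded

dirty = SingleFileBoxSketch()
dirty._pending = True
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    dirty.lock()
warning_issued = len(caught) == 1 and dirty._toc is not None
```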
Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 20:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 19:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 06:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-15 19:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Thu Jan 18 21:02:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 12:02:05 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 22:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) >Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message.
The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. 
building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-18 21:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug has there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Thu Jan 18 21:03:19 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 12:03:19 -0800 Subject: [ python-Bugs-1638627 ] Incorrect documentation for random.betavariate() Message-ID: Bugs item #1638627, was opened at 2007-01-18 15:18 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.3 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Troels Walsted Hansen (troels) Assigned to: Raymond Hettinger (rhettinger) Summary: Incorrect documentation for random.betavariate() Initial Comment: Both the documentation at http://docs.python.org/lib/module-random.html and the docstring have the same erroneous input conditions. They claim input must be >-1 when it must in fact be >0. Note also the freak "}" that has snuck into the docstring (copied and pasted from the documentation perhaps?). 
>>> import random >>> print random.betavariate.__doc__ Beta distribution. Conditions on the parameters are alpha > -1 and beta} > -1. Returned values range between 0 and 1. >>> random.betavariate(0, 0) Traceback (most recent call last): File "<stdin>", line 1, in ? File "/usr/lib/python2.3/random.py", line 594, in betavariate y = self.gammavariate(alpha, 1.) File "/usr/lib/python2.3/random.py", line 457, in gammavariate raise ValueError, 'gammavariate: alpha and beta must be > 0.0' ValueError: gammavariate: alpha and beta must be > 0.0 ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:03 Message: Logged In: YES user_id=21627 Originator: NO This seems to be a duplicate of 1635892 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1638627&group_id=5470 From noreply at sourceforge.net Thu Jan 18 21:08:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 12:08:13 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'?'.upper() == u'?' #OK (locale independent)
assert  '?'.upper() == '?' #fail
I suspect an incorrect implementation of uppercase, something like
if ('a' <= c && c <= '?')
  return c+'?'-'?'
symbol '?' (184 in cp1251) is not in range 'a'-'?' ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:08 Message: Logged In: YES user_id=21627 Originator: NO You can see the implementation of .upper in http://svn.python.org/projects/python/tags/r25/Objects/stringobject.c (function string_upper) Off-hand, I cannot see anything wrong in that code. It definitely does *not* use c+'?'-'?'. ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 22:08 Message: Logged In: YES user_id=1538986 Originator: YES forgot to mention the Python version used - http://www.python.org/ftp/python/2.5/python-2.5.msi ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:51 Message: Logged In: YES user_id=1538986 Originator: YES sorry, I mean toupper((int)(unsigned char)'?') not just toupper('?') ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES C-CRT library function toupper('?') works properly if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Thu Jan 18 22:18:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 13:18:10 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by dobrokot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the 
comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'?'.upper() == u'?' #OK (locale independent)
assert  '?'.upper() == '?' #fail
I suspect an incorrect implementation of uppercase, something like
if ('a' <= c && c <= '?')
  return c+'?'-'?'
symbol '?' (184 in cp1251) is not in range 'a'-'?' ---------------------------------------------------------------------- >Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-18 22:18 Message: Logged In: YES user_id=1538986 Originator: YES well, C: ---------------------------- #include <stdio.h> #include <locale.h> #include <ctype.h> #include <assert.h> int main() { int i = 184; char *old = setlocale(LC_CTYPE, ".1251"); assert(old); printf("%d -> %d\n", i, _toupper(i)); printf("%d -> %d\n", i, toupper(i)); } ---------------------------- C output: 184 -> 152 184 -> 168 so, _toupper and toupper are different functions. MSDN does not mention anything about the difference, except that 'toupper' is "ANSI compatible" :( File Added: toupper.zip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:08 Message: Logged In: YES user_id=21627 Originator: NO You can see the implementation of .upper in http://svn.python.org/projects/python/tags/r25/Objects/stringobject.c (function string_upper) Off-hand, I cannot see anything wrong in that code. It definitely does *not* use c+'?'-'?'. 
---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 22:08 Message: Logged In: YES user_id=1538986 Originator: YES forgot to mention the Python version used - http://www.python.org/ftp/python/2.5/python-2.5.msi ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:51 Message: Logged In: YES user_id=1538986 Originator: YES sorry, I mean toupper((int)(unsigned char)'?') not just toupper('?') ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES C-CRT library function toupper('?') works properly if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Thu Jan 18 22:53:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 13:53:59 -0800 Subject: [ python-Feature Requests-1639002 ] add type definition support Message-ID: Feature Requests item #1639002, was opened at 2007-01-18 22:53 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: djnet (djnet) Assigned to: Nobody/Anonymous (nobody) Summary: add type definition support Initial Comment: Hi, I'm used to the Java language. 
When I use a good Java IDE, the autocompletion is very effective (Python cannot be as effective). For example, if I enter the following text: Date lDate=new Date(); lDate.[TAB_KEY] then the editor can display all the methods available for my 'lDate' object; it can because it knows the object's type. This is very convenient and allows one to use a new API without browsing the API documentation! I think such autocompletion could be achieved in Python simply: it only needs a "type definition" syntax. Of course, the type definition should NOT be MANDATORY! So, is this a good idea? David ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 From noreply at sourceforge.net Thu Jan 18 22:59:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 13:59:41 -0800 Subject: [ python-Bugs-1634774 ] locale 1251 does not convert to upper case properly Message-ID: Bugs item #1634774, was opened at 2007-01-13 18:30 Message generated for change (Comment added) made by dobrokot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ivan Dobrokotov (dobrokot) Assigned to: Nobody/Anonymous (nobody) Summary: locale 1251 does not convert to upper case properly Initial Comment:
 # -*- coding: 1251 -*-

import locale

locale.setlocale(locale.LC_ALL, ".1251") #locale name may be Windows specific?

#-----------------------------------------------
print chr(184), chr(168)
assert  chr(255).upper() == chr(223) #OK
assert  chr(184).upper() == chr(168) #fail
#-----------------------------------------------
assert  'q'.upper() == 'Q' #OK 
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  '?'.upper() == '?' #OK
assert  u'?'.upper() == u'?' #OK (locale independent)
assert  '?'.upper() == '?' #fail
I suspect an incorrect implementation of uppercase, something like
if ('a' <= c && c <= '?')
  return c+'?'-'?'
symbol '?' (184 in cp1251) is not in range 'a'-'?' ---------------------------------------------------------------------- >Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-18 22:59 Message: Logged In: YES user_id=1538986 Originator: YES ---------------------------------------------- standard header ctype.h: #define _toupper(_c) ( (_c)-'a'+'A' ) ---------------------------------------------- CRT file toupper.c: /* define function-like macro equivalent to _toupper() */ #define mkupper(c) ( (c)-'a'+'A' ) int __cdecl _toupper ( int c ) { return(mkupper(c)); } ( http://www.everfall.com/paste/id.php?j13ernl40i9e ) suggestion: replace _toupper with toupper. Performance may degrade (a lot of thread locks/MultiByteToWideChar/other code for every non-ASCII lowercase symbol). Suggestion for optimization: set up "int toupper_table[256]" (and other tables) on every call to setlocale. ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-18 22:18 Message: Logged In: YES user_id=1538986 Originator: YES well, C: ---------------------------- #include <stdio.h> #include <locale.h> #include <ctype.h> #include <assert.h> int main() { int i = 184; char *old = setlocale(LC_CTYPE, ".1251"); assert(old); printf("%d -> %d\n", i, _toupper(i)); printf("%d -> %d\n", i, toupper(i)); } ---------------------------- C output: 184 -> 152 184 -> 168 so, _toupper and toupper are different functions. MSDN does not mention anything about the difference, except that 'toupper' is "ANSI compatible" :( File Added: toupper.zip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:08 Message: Logged In: YES user_id=21627 Originator: NO You can see the implementation of .upper in http://svn.python.org/projects/python/tags/r25/Objects/stringobject.c (function string_upper) Off-hand, I cannot see anything wrong in that code. It definitely does *not* use c+'?'-'?'. 
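[Editor's note: the locale-dependent, byte-at-a-time CRT toupper is exactly what varies here. A minimal sketch, in modern Python rather than the 2.5-era C code under discussion, of sidestepping the C locale entirely is to round-trip through Unicode, whose case mappings are locale-independent:]

```python
def upper_cp1251(data: bytes) -> bytes:
    # Decode cp1251 bytes to str, apply Unicode's locale-independent
    # uppercase mapping, then re-encode; no C locale state is consulted.
    return data.decode("cp1251").upper().encode("cp1251")

# The byte values from the report: 184 (Cyrillic small letter io) maps
# to 168, and 255 (small ya) maps to 223, matching the failing asserts.
assert upper_cp1251(bytes([184])) == bytes([168])
assert upper_cp1251(bytes([255])) == bytes([223])
```

[Doing the casing on Unicode text rather than on locale-dependent bytes gives the behaviour the reporter expects without any setlocale() call at all.]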
---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 22:08 Message: Logged In: YES user_id=1538986 Originator: YES forgot to mention the Python version used - http://www.python.org/ftp/python/2.5/python-2.5.msi ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:51 Message: Logged In: YES user_id=1538986 Originator: YES sorry, I mean toupper((int)(unsigned char)'?') not just toupper('?') ---------------------------------------------------------------------- Comment By: Ivan Dobrokotov (dobrokot) Date: 2007-01-13 18:49 Message: Logged In: YES user_id=1538986 Originator: YES C-CRT library function toupper('?') works properly if I set setlocale(LC_ALL, ".1251") ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634774&group_id=5470 From noreply at sourceforge.net Fri Jan 19 01:41:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 16:41:32 -0800 Subject: [ python-Bugs-1630894 ] Garbage output to file of specific size Message-ID: Bugs item #1630894, was opened at 2007-01-08 15:40 Message generated for change (Comment added) made by mculbert You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Windows Group: Python 2.4 >Status: Deleted Resolution: None Priority: 5 Private: No Submitted By: Michael Culbertson (mculbert) Assigned to: Nobody/Anonymous (nobody) Summary: Garbage output to file of specific size Initial Comment: The attached script inexplicably fills the output file with garbage using the input file available at: http://cs.wheaton.edu/~mculbert/StdDetVol_Scaled_SMDS.dat (4.6Mb) If the string output in line 26 is changed to f.write("bla "), the output file is legible. If the expression is changed from f.write("%g " % k) to f.write("%f " % k) or f.write("%e " % k), the file is legible. If, however, the expression is changed to f.write('x'*len(str(k))+" "), the file remains illegible. Adding a print statement: print "%g " % k before line 26 indicates that k is assuming the correct values and that the string interpolation is functioning properly. This suggests that the problem causing the garbage may be related to the specific file size created with this particular set of data. The problem occurs with Python 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] under Windows XP. The problem doesn't occur with the same script and input file using Python 2.3.5 on Mac OS 10.4.8. ---------------------------------------------------------------------- >Comment By: Michael Culbertson (mculbert) Date: 2007-01-18 19:41 Message: Logged In: YES user_id=1686784 Originator: YES After some more observation, I've decided this is probably a Windows XP issue, not a Python one. I transferred the illegible file to a Unix machine and was able to read it appropriately, so the Python output itself seems to be fine. Sorry for the trouble. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-09 13:31 Message: Logged In: YES user_id=21627 Originator: NO Can you please report what the expected output is? Mine (created on Linux) starts with 40 40 32 64 followed by many "0.0 " values. 
Also, can you please report what the actual output is that you get? In what way is it "illegible"? What version of Numeric are you using? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1630894&group_id=5470 From noreply at sourceforge.net Fri Jan 19 06:47:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 21:47:15 -0800 Subject: [ python-Bugs-1633863 ] AIX: configure ignores $CC; problems with C++ style comments Message-ID: Bugs item #1633863, was opened at 2007-01-12 00:46 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: AIX: configure ignores $CC; problems with C++ style comments Initial Comment: CC=xlc_r ./configure does not work on AIX-5.1, because configure unconditionally sets $CC to "cc_r": case $ac_sys_system in AIX*) CC=cc_r without_gcc=;; It would be better to leave $CC and just add "-qthreaded" to $CFLAGS. Furthermore, much of the C source code of Python uses C++ /C99 comments. This is an error with the standard AIX compiler. Please add the compiler flag "-qcpluscmt". An alternative would be to use a default of "xlc_r" for CC on AIX. This calls the compiler in a mode that both accepts C++ comments and generates reentrant code. 
Regards, Johannes ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-18 21:47 Message: Logged In: YES user_id=33168 Originator: NO There shouldn't be any C++ comments in the Python code. If there are, it is a mistake. I did see some get removed recently. Could you let me know where you see the C++ comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 From noreply at sourceforge.net Fri Jan 19 06:54:39 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 21:54:39 -0800 Subject: [ python-Bugs-1635217 ] Little mistake in docs Message-ID: Bugs item #1635217, was opened at 2007-01-14 07:09 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635217&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Little mistake in docs Initial Comment: It would be nice to see example of setup() call on the page with "requires" keywords argument description http://docs.python.org/dist/node10.html Like: setup(..., requires=["somepackage (>1.0, !=1.5)"], provides=["mypkg (1.1)"] ) There seems to be mistake in table with examples for "provides" keyword on the same page - it looks like: mypkg (1.1 shouldn't this be mypkg (1.1)? ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-18 21:54 Message: Logged In: YES user_id=33168 Originator: NO Thanks for the report. 
I fixed the unbalanced paren. I'll leave this open in case someone is ambitious to add more doc. Committed revision 53487. (2.5) Committed revision 53488. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635217&group_id=5470 From noreply at sourceforge.net Fri Jan 19 07:00:58 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 22:00:58 -0800 Subject: [ python-Bugs-1635353 ] expanduser tests in test_posixpath fail if $HOME ends in a / Message-ID: Bugs item #1635353, was opened at 2007-01-14 12:28 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Marien Zwart (marienz) Assigned to: Nobody/Anonymous (nobody) Summary: expanduser tests in test_posixpath fail if $HOME ends in a / Initial Comment: test_expanduser in test_posixpath checks if expanduser('~/') equals expanduser('~') + '/'. expanduser checks if the home dir location ends in a / and skips the first character of the appended path if it does (so expanduser('~/foo') with HOME=/spork/ becomes /spork/foo, not /spork//foo). This means that if you run test_posixpath with HOME=/spork/ expanduser('~') and expanduser('~/') both return '/spork/' and the test fails because '/spork//' != '/spork/'. 
Possible fixes I can think of: either have expanduser strip the trailing slash from the home directory instead of skipping the first slash from the appended path (so still with HOME=/spork/ expanduser('~') would be '/spork'), or have the test check if expanduser('~') ends in a backslash and check if expanduser('~') is equal to expanduser('~/') in that case. ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-18 22:00 Message: Logged In: YES user_id=33168 Originator: NO What version of Python and what platform (Windows? Unix? etc)? I tried this on Linux with Python 2.5 and test_posixpath passed. neal at janus ~/build/python/svn/r25 $ HOME=~/ ./python -tt ./Lib/test/regrtest.py test_posixpath test_posixpath 1 test OK. neal at janus ~/build/python/svn/r25 $ HOME=/home/neal//// ./python -tt ./Lib/test/regrtest.py test_posixpath test_posixpath 1 test OK. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 From noreply at sourceforge.net Fri Jan 19 07:43:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 18 Jan 2007 22:43:54 -0800 Subject: [ python-Bugs-1637022 ] Python-2.5 segfault with tktreectrl Message-ID: Bugs item #1637022, was opened at 2007-01-16 19:46 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637022&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: AST >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: klappnase (klappnase) Assigned to: Nobody/Anonymous (nobody) Summary: Python-2.5 segfault with tktreectrl Initial Comment: Python-2.5 segfaults when using the tktreectrl widget. As Anton Hartl pointed out (see http://groups.google.com/group/comp.lang.python/browse_thread/thread/37536988c8499708/aed1d725d8e84ed8?lnk=raot#aed1d725d8e84ed8) this is because both Python-2.5 and tktreectrl use a global symbol "Ellipsis". Changing "Ellipsis" in ast.c and Python-ast.c into something like "PyAst_Ellipsis" fixes this. ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-19 07:43 Message: Logged In: YES user_id=21627 Originator: NO Thanks for the report. Fixed in 53489 and 53490. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637022&group_id=5470 From noreply at sourceforge.net Fri Jan 19 13:51:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 04:51:30 -0800 Subject: [ python-Bugs-1635353 ] expanduser tests in test_posixpath fail if $HOME ends in a / Message-ID: Bugs item #1635353, was opened at 2007-01-14 21:28 Message generated for change (Comment added) made by marienz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: None >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Marien Zwart (marienz) Assigned to: Nobody/Anonymous (nobody) Summary: expanduser tests in test_posixpath fail if $HOME ends in a / Initial Comment: test_expanduser in test_posixpath checks if expanduser('~/') equals expanduser('~') + '/'. expanduser checks if the home dir location ends in a / and skips the first character of the appended path if it does (so expanduser('~/foo') with HOME=/spork/ becomes /spork/foo, not /spork//foo). This means that if you run test_posixpath with HOME=/spork/ expanduser('~') and expanduser('~/') both return '/spork/' and the test fails because '/spork//' != '/spork/'. Possible fixes I can think of: either have expanduser strip the trailing slash from the home directory instead of skipping the first slash from the appended path (so still with HOME=/spork/ expanduser('~') would be '/spork'), or have the test check if expanduser('~') ends in a backslash and check if expanduser('~') is equal to expanduser('~/') in that case. ---------------------------------------------------------------------- >Comment By: Marien Zwart (marienz) Date: 2007-01-19 13:51 Message: Logged In: YES user_id=857292 Originator: YES I was testing 2.5, looks like it's already fixed in svn (rev 52067). This is a duplicate of 1566602. Sorry for wasting your time. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-19 07:00 Message: Logged In: YES user_id=33168 Originator: NO What version of Python and what platform (Windows? Unix? etc)? I tried this on Linux with Python 2.5 and test_posixpath passed. neal at janus ~/build/python/svn/r25 $ HOME=~/ ./python -tt ./Lib/test/regrtest.py test_posixpath test_posixpath 1 test OK. neal at janus ~/build/python/svn/r25 $ HOME=/home/neal//// ./python -tt ./Lib/test/regrtest.py test_posixpath test_posixpath 1 test OK. 
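[Editor's note: the fix referenced above (rev 52067) takes the first approach suggested in the report, stripping the trailing slash from the home directory itself rather than from the appended path. A small sketch of the resulting semantics on a modern Python, reusing the report's /spork example:]

```python
import os
import posixpath

os.environ["HOME"] = "/spork/"  # trailing slash, as in the bug report

# With the trailing slash stripped from the home directory, the identity
# expanduser('~') + '/' == expanduser('~/') holds again.
assert posixpath.expanduser("~") == "/spork"
assert posixpath.expanduser("~/") == "/spork/"
assert posixpath.expanduser("~") + "/" == posixpath.expanduser("~/")
```

[posixpath is used directly here so the sketch behaves the same regardless of the host platform's os.path.]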
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635353&group_id=5470 From noreply at sourceforge.net Fri Jan 19 16:24:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 07:24:41 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 10:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s re-reading, that means someone else has already modified the mailbox. Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. 
File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. 
Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. 
So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. 
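The discipline the doc patch above recommends — bracketing every mailbox modification with lock()/unlock() — looks like this with the mailbox API. A throwaway mbox in a temporary directory stands in for a real spool file, so the example is self-contained:

```python
import mailbox
import os
import tempfile

# Hypothetical mailbox path; a temp directory stands in for /var/mail.
path = os.path.join(tempfile.mkdtemp(), 'inbox')
mb = mailbox.mbox(path)

mb.lock()              # take the dot-lock and fcntl()/flock() locks
try:
    key = mb.add('From: alice@example.com\n\nhello\n')
    mb.flush()         # write out while still holding the lock
finally:
    mb.unlock()        # release only after the flush has completed
mb.close()
```

Holding the lock across the flush() is the point: it is exactly the window in which another delivery agent could otherwise write into a file that is about to be rewritten.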
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). 
Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. 
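The size check suggested here can be sketched independently of mailbox.py. The class and exception names below are hypothetical stand-ins, not the module's actual internals; only the detection step is shown, whereas a real implementation would also hold the fcntl() lock across the check and the rewrite:

```python
import os

class ExternalClashDemo(Exception):
    """Hypothetical stand-in for mailbox.ExternalClashError."""

class SizeCheckedMailboxFile:
    """Sketch of the suggested guard: remember the file size after
    our own writes, and refuse to flush if the size has changed
    behind our back (i.e. another process appended first)."""

    def __init__(self, path):
        self._path = path
        with open(path, 'ab'):
            pass                      # make sure the file exists
        self._known_size = os.path.getsize(path)

    def append(self, data):
        with open(self._path, 'ab') as f:
            f.write(data)
        self._known_size = os.path.getsize(self._path)

    def check_before_flush(self):
        if os.path.getsize(self._path) != self._known_size:
            raise ExternalClashDemo('mailbox file changed externally')
```

Raising before the rewrite turns silent data loss into a loud, recoverable error, which is the whole intent of the length-checking patch.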
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. 
It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. 
I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Fri Jan 19 16:43:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 07:43:55 -0800 Subject: [ python-Bugs-1482402 ] Forwarding events and Tk.mainloop problem Message-ID: Bugs item #1482402, was opened at 2006-05-05 11:15 Message generated for change (Comment added) made by mkiever You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1482402&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Tkinter Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Kievernagel (mkiever) Assigned to: Martin v. Löwis (loewis) Summary: Forwarding events and Tk.mainloop problem Initial Comment: (Python 2.4.1, tcl/tk 8.4 on Linux) I try to create a widget class (Frame2 in the example) containing a Listbox. 
This should report an event '<>' when the Listbox produces '<>' or when the selection changes using the Up/Down keys. (see example script) Binding '<>' to the Frame2 widget produces the following traceback when the event is generated: ------------------ listbox select event generated Traceback (most recent call last): File "testevent.py", line 98, in ? tk.mainloop () File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 965, in mainloop self.tk.mainloop(n) File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 1349, in __call__ self.widget._report_exception() File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 1112, in _report_exception root = self._root() AttributeError: Tk instance has no __call__ method ----------------- So Tkinter tries to report an exception caused by the event, but fails to do so by a second exception in _report_exception. (not quite sure I did understand this) The first exception may be a problem with my code or tcl/tk but at least the second is a problem of Tkinter. If you bind '<>' to Tk itself the example works fine. ---------------------------------------------------------------------- >Comment By: Matthias Kievernagel (mkiever) Date: 2007-01-19 15:43 Message: Logged In: YES user_id=1477880 Originator: YES I just found the time to re-investigate my reported bug and found out that it is due to a subclassing error of my own (redefine of Misc._root()). Sorry, for the false report. Can someone delete/reject or shoot it down? 
Greetings, Matthias Kievernagel ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1482402&group_id=5470 From noreply at sourceforge.net Fri Jan 19 19:07:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 10:07:51 -0800 Subject: [ python-Bugs-1635892 ] description of the beta distribution is incorrect Message-ID: Bugs item #1635892, was opened at 2007-01-15 08:59 Message generated for change (Comment added) made by rhettinger You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635892&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.6 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: elgordo (azgordo) Assigned to: Nobody/Anonymous (nobody) Summary: description of the beta distribution is incorrect Initial Comment: In the random module, the documentation is incorrect. Specifically, the limits on the parameters for the beta-distribution should be changed from ">-1" to ">0". This parallels to (correct) limits on the parameters for the gamma-distribution. ---------------------------------------------------------------------- >Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-19 13:07 Message: Logged In: YES user_id=80475 Originator: NO Fixed in revs 53498 and 53499. 
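The corrected bound is observable directly in the implementation: CPython's random.betavariate() draws two gammavariate() samples internally, and gammavariate() raises ValueError for non-positive parameters, so a value that satisfies the old "> -1" wording but not "> 0" is rejected:

```python
import random

rng = random.Random(42)

# Parameters in the documented range (> 0) yield a value in [0, 1].
x = rng.betavariate(2.0, 5.0)
assert 0.0 <= x <= 1.0

# A parameter that satisfies the old "> -1" bound but not "> 0" is
# rejected by the underlying gammavariate() call.
try:
    rng.betavariate(-0.5, 5.0)
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError for alpha <= 0')
```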
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635892&group_id=5470 From noreply at sourceforge.net Fri Jan 19 19:10:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 10:10:55 -0800 Subject: [ python-Feature Requests-1639002 ] add type defintion support Message-ID: Feature Requests item #1639002, was opened at 2007-01-18 21:53 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.6 >Status: Closed Resolution: None Priority: 5 Private: No Submitted By: djnet (djnet) Assigned to: Nobody/Anonymous (nobody) Summary: add type defintion support Initial Comment: Hi, I'm used to java language. When i use a good java ide, the autocompletion is very effective (python cannot be such effective). ex, if i enter following text: Date lDate=new Date(); lDate.[TAB_KEY] then the editor can display all the methods available for my 'lDate' object; it can because it knows the object's type. This is very convenient and allows to use a new API without browsing the API documentation ! I think such autocompletion could be achieved in python simply: it only need a "type definition" syntax. Of course, the type definition should NOT be MANDATORY ! So, is this a good idea ? David ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-19 18:10 Message: Logged In: YES user_id=849994 Originator: NO If what you're suggesting is static typing, please go to the python-ideas mailing list and discuss it there. 
Changes of a scope that large shouldn't be discussed in an issue tracker. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 From noreply at sourceforge.net Fri Jan 19 19:35:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 10:35:30 -0800 Subject: [ python-Bugs-1602742 ] itemconfigure returns incorrect text property of text items Message-ID: Bugs item #1602742, was opened at 2006-11-25 16:27 Message generated for change (Comment added) made by mkiever You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1602742&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Tkinter Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Wojciech Mula (wmula) Assigned to: Martin v. Löwis (loewis) Summary: itemconfigure returns incorrect text property of text items Initial Comment: Tkinter: canvas itemconfigure bug Consider following code: -- tkbug.py --- from Tkinter import * root = Tk() canvas = Canvas(root) text = "sample text with spaces" id = canvas.create_text(0, 0, text=text) text2 = canvas.itemconfigure(id)['text'][-1] print text print text2 --- eof --- This toy prints: sample text with spaces ('sample', 'text', 'with', 'spaces') The returned value is not a string -- Tk returns the same string as passed on creating item, but Tkinter splits it. To fix this problem, the internal method '_configure' has to be changed a bit: *** Tkinter.py.old 2006-11-20 16:48:27.000000000 +0100 --- Tkinter.py 2006-11-20 17:00:13.000000000 +0100 *************** *** 1122,1129 **** cnf = _cnfmerge(cnf) if cnf is None: cnf = {} ! 
for x in self.tk.split( self.tk.call(_flatten((self._w, cmd)))): cnf[x[0][1:]] = (x[0][1:],) + x[1:] return cnf if type(cnf) is StringType: --- 1122,1134 ---- cnf = _cnfmerge(cnf) if cnf is None: cnf = {} ! for x in self.tk.splitlist( self.tk.call(_flatten((self._w, cmd)))): + if type(x) is StringType: + if x.startswith('-text '): + x = self.tk.splitlist(x) + else: + x = self.tk.split(x) cnf[x[0][1:]] = (x[0][1:],) + x[1:] return cnf if type(cnf) is StringType: Maybe better/faster way is to provide Canvas method, that return a 'text' property for text items: --- def get_text(self, text_id): try: r = self.tk.call(self._w, 'itemconfigure', text_id, '-text') return self.tk.splitlist(r)[-1] except TclError: return '' --- ---------------------------------------------------------------------- Comment By: Matthias Kievernagel (mkiever) Date: 2007-01-19 18:35 Message: Logged In: YES user_id=1477880 Originator: NO There is a simple workaround: use itemcget. The error applies to other options as well: dash, activedash, disableddash, tags, arrowshape, font These options also may contain a space in their value. I collected this information from 'man n Canvas' from Tk 8.4.6 I hope I didn't miss any. BTW the itemconfigure document string is broken. Greetings, Matthias Kievernagel ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1602742&group_id=5470 From noreply at sourceforge.net Fri Jan 19 19:47:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 10:47:27 -0800 Subject: [ python-Bugs-1626545 ] Would you mind renaming object.h to pyobject.h? 
Message-ID: Bugs item #1626545, was opened at 2007-01-02 16:03 Message generated for change (Comment added) made by atropashko You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Feature Request Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anton Tropashko (atropashko) Assigned to: Nobody/Anonymous (nobody) Summary: Would you mind renaming object.h to pyobject.h? Initial Comment: Would be nice if you could change object.h to pyobject.h or something like that. object.h is a common name found in kjs and Qt :-( Thank you! The patch is against 2.4 --- Makefile.pre.in 2 Jan 2007 20:03:09 -0000 1.3 +++ Makefile.pre.in 2 Jan 2007 23:52:47 -0000 @@ -522,7 +522,7 @@ Include/methodobject.h \ Include/modsupport.h \ Include/moduleobject.h \ - Include/object.h \ + Include/pyobject.h \ Include/objimpl.h \ Include/patchlevel.h \ Include/pydebug.h \ Index: configure =================================================================== RCS file: /cvsroot/faultline/python/configure,v retrieving revision 1.2 diff -d -u -r1.2 configure --- configure 30 Dec 2006 02:55:53 -0000 1.2 +++ configure 2 Jan 2007 23:52:49 -0000 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 1.1.1.1 . +# From configure.in Revision: 1.2 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.59 for python 2.4. # @@ -274,7 +274,7 @@ PACKAGE_STRING='python 2.4' PACKAGE_BUGREPORT='http://www.python.org/python-bugs' -ac_unique_file="Include/object.h" +ac_unique_file="Include/pyobject.h" # Factoring default headers for most tests. 
ac_includes_default="\ #include Index: configure.in =================================================================== RCS file: /cvsroot/faultline/python/configure.in,v retrieving revision 1.2 diff -d -u -r1.2 configure.in --- configure.in 30 Dec 2006 02:55:53 -0000 1.2 +++ configure.in 2 Jan 2007 23:52:49 -0000 @@ -6,7 +6,7 @@ AC_REVISION($Revision: 1.2 $) AC_PREREQ(2.53) AC_INIT(python, PYTHON_VERSION, http://www.python.org/python-bugs) -AC_CONFIG_SRCDIR([Include/object.h]) +AC_CONFIG_SRCDIR([Include/pyobject.h]) AC_CONFIG_HEADER(pyconfig.h) dnl This is for stuff that absolutely must end up in pyconfig.h. Index: Include/Python.h =================================================================== RCS file: /cvsroot/faultline/python/Include/Python.h,v retrieving revision 1.1.1.1 diff -d -u -r1.1.1.1 Python.h --- Include/Python.h 28 Dec 2006 18:35:20 -0000 1.1.1.1 +++ Include/Python.h 2 Jan 2007 23:52:51 -0000 @@ -73,7 +73,7 @@ #endif #include "pymem.h" -#include "object.h" +#include "pyobject.h" #include "objimpl.h" #include "pydebug.h" Index: Parser/tokenizer.h =================================================================== RCS file: /cvsroot/faultline/python/Parser/tokenizer.h,v retrieving revision 1.1.1.1 diff -d -u -r1.1.1.1 tokenizer.h --- Parser/tokenizer.h 28 Dec 2006 18:35:31 -0000 1.1.1.1 +++ Parser/tokenizer.h 2 Jan 2007 23:52:54 -0000 @@ -4,7 +4,7 @@ extern "C" { #endif -#include "object.h" +#include "pyobject.h" /* Tokenizer interface */ ---------------------------------------------------------------------- >Comment By: Anton Tropashko (atropashko) Date: 2007-01-19 10:47 Message: Logged In: YES user_id=1681954 Originator: YES slots member conflicts with Qt. I renamed it also. Patch follows: --- Include/pyobject.h 3 Jan 2007 00:06:11 -0000 1.1 +++ Include/pyobject.h 19 Jan 2007 18:43:13 -0000 @@ -340,7 +340,7 @@ a given operator (e.g. __getitem__). see add_operators() in typeobject.c . 
*/ PyBufferProcs as_buffer; - PyObject *name, *slots; + PyObject *name, *slots_; /* here are optional user slots, followed by the members. */ } PyHeapTypeObject; Index: Objects/typeobject.c =================================================================== RCS file: /cvsroot/faultline/python/Objects/typeobject.c,v retrieving revision 1.1.1.1 diff -d -u -r1.1.1.1 typeobject.c --- Objects/typeobject.c 28 Dec 2006 18:35:24 -0000 1.1.1.1 +++ Objects/typeobject.c 19 Jan 2007 18:43:13 -0000 @@ -1811,7 +1811,7 @@ et = (PyHeapTypeObject *)type; Py_INCREF(name); et->name = name; - et->slots = slots; + et->slots_ = slots; /* Initialize tp_flags */ type->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HEAPTYPE | @@ -2116,7 +2116,7 @@ Py_XDECREF(type->tp_subclasses); PyObject_Free(type->tp_doc); Py_XDECREF(et->name); - Py_XDECREF(et->slots); + Py_XDECREF(et->slots_); type->ob_type->tp_free((PyObject *)type); } ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470 From noreply at sourceforge.net Fri Jan 19 19:49:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 10:49:14 -0800 Subject: [ python-Bugs-1626545 ] Would you mind renaming object.h to pyobject.h? Message-ID: Bugs item #1626545, was opened at 2007-01-02 16:03 Message generated for change (Settings changed) made by atropashko You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Python Library Group: Feature Request Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anton Tropashko (atropashko) Assigned to: Nobody/Anonymous (nobody) Summary: Would you mind renaming object.h to pyobject.h? 
Initial Comment: Would be nice if you could change object.h to pyobject.h or something like that. object.h is a common name found in kjs and Qt :-( Thank you! The patch is against 2.4 --- Makefile.pre.in 2 Jan 2007 20:03:09 -0000 1.3 +++ Makefile.pre.in 2 Jan 2007 23:52:47 -0000 @@ -522,7 +522,7 @@ Include/methodobject.h \ Include/modsupport.h \ Include/moduleobject.h \ - Include/object.h \ + Include/pyobject.h \ Include/objimpl.h \ Include/patchlevel.h \ Include/pydebug.h \ Index: configure =================================================================== RCS file: /cvsroot/faultline/python/configure,v retrieving revision 1.2 diff -d -u -r1.2 configure --- configure 30 Dec 2006 02:55:53 -0000 1.2 +++ configure 2 Jan 2007 23:52:49 -0000 @@ -1,5 +1,5 @@ #! /bin/sh -# From configure.in Revision: 1.1.1.1 . +# From configure.in Revision: 1.2 . # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.59 for python 2.4. # @@ -274,7 +274,7 @@ PACKAGE_STRING='python 2.4' PACKAGE_BUGREPORT='http://www.python.org/python-bugs' -ac_unique_file="Include/object.h" +ac_unique_file="Include/pyobject.h" # Factoring default headers for most tests. ac_includes_default="\ #include Index: configure.in =================================================================== RCS file: /cvsroot/faultline/python/configure.in,v retrieving revision 1.2 diff -d -u -r1.2 configure.in --- configure.in 30 Dec 2006 02:55:53 -0000 1.2 +++ configure.in 2 Jan 2007 23:52:49 -0000 @@ -6,7 +6,7 @@ AC_REVISION($Revision: 1.2 $) AC_PREREQ(2.53) AC_INIT(python, PYTHON_VERSION, http://www.python.org/python-bugs) -AC_CONFIG_SRCDIR([Include/object.h]) +AC_CONFIG_SRCDIR([Include/pyobject.h]) AC_CONFIG_HEADER(pyconfig.h) dnl This is for stuff that absolutely must end up in pyconfig.h. 
Index: Include/Python.h =================================================================== RCS file: /cvsroot/faultline/python/Include/Python.h,v retrieving revision 1.1.1.1 diff -d -u -r1.1.1.1 Python.h --- Include/Python.h 28 Dec 2006 18:35:20 -0000 1.1.1.1 +++ Include/Python.h 2 Jan 2007 23:52:51 -0000 @@ -73,7 +73,7 @@ #endif #include "pymem.h" -#include "object.h" +#include "pyobject.h" #include "objimpl.h" #include "pydebug.h" Index: Parser/tokenizer.h =================================================================== RCS file: /cvsroot/faultline/python/Parser/tokenizer.h,v retrieving revision 1.1.1.1 diff -d -u -r1.1.1.1 tokenizer.h --- Parser/tokenizer.h 28 Dec 2006 18:35:31 -0000 1.1.1.1 +++ Parser/tokenizer.h 2 Jan 2007 23:52:54 -0000 @@ -4,7 +4,7 @@ extern "C" { #endif -#include "object.h" +#include "pyobject.h" /* Tokenizer interface */ ---------------------------------------------------------------------- Comment By: Anton Tropashko (atropashko) Date: 2007-01-19 10:47 Message: Logged In: YES user_id=1681954 Originator: YES slots member conflicts with Qt. I renamed it also. Patch follows: --- Include/pyobject.h 3 Jan 2007 00:06:11 -0000 1.1 +++ Include/pyobject.h 19 Jan 2007 18:43:13 -0000 @@ -340,7 +340,7 @@ a given operator (e.g. __getitem__). see add_operators() in typeobject.c . */ PyBufferProcs as_buffer; - PyObject *name, *slots; + PyObject *name, *slots_; /* here are optional user slots, followed by the members. 
*/
} PyHeapTypeObject;

Index: Objects/typeobject.c
===================================================================
RCS file: /cvsroot/faultline/python/Objects/typeobject.c,v
retrieving revision 1.1.1.1
diff -d -u -r1.1.1.1 typeobject.c
--- Objects/typeobject.c	28 Dec 2006 18:35:24 -0000	1.1.1.1
+++ Objects/typeobject.c	19 Jan 2007 18:43:13 -0000
@@ -1811,7 +1811,7 @@
 	et = (PyHeapTypeObject *)type;
 	Py_INCREF(name);
 	et->name = name;
-	et->slots = slots;
+	et->slots_ = slots;
 	/* Initialize tp_flags */
 	type->tp_flags = Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HEAPTYPE |
@@ -2116,7 +2116,7 @@
 	Py_XDECREF(type->tp_subclasses);
 	PyObject_Free(type->tp_doc);
 	Py_XDECREF(et->name);
-	Py_XDECREF(et->slots);
+	Py_XDECREF(et->slots_);
 	type->ob_type->tp_free((PyObject *)type);
 }

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1626545&group_id=5470

From noreply at sourceforge.net  Fri Jan 19 20:48:02 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Fri, 19 Jan 2007 11:48:02 -0800
Subject: [ python-Bugs-1600182 ] Tix ComboBox entry is blank when not editable
Message-ID: 
Bugs item #1600182, was opened at 2006-11-21 05:27
Message generated for change (Comment added) made by mkiever
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1600182&group_id=5470

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not
just the latest update.

Category: Tkinter
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Tim Wegener (twegener)
Assigned to: Martin v. Löwis (loewis)
Summary: Tix ComboBox entry is blank when not editable

Initial Comment:
When setting editable=False for Tix.ComboBox, when selecting an item from
the combo box, the selected item should appear in the entry field.
In Windows this does not happen, and the entry field is dark grey and blank. When editable=True the label is visible. Problem occurs in: Python 2.3.5 (Windows) Python 2.4.4 (Windows) (the above appear to use tk 8.4) Works fine in: Python 2.2.2 (Red Hat 9) Python 2.3.5 (Red Hat 9) Python 2.4.1 (Red Hat 9) Python 2.5 (Red Hat 9) (all of the above with tk 8.3.5, tix 8.1.4) ---------------------------------------------------------------------- Comment By: Matthias Kievernagel (mkiever) Date: 2007-01-19 19:48 Message: Logged In: YES user_id=1477880 Originator: NO Or it might just be a problem with default colours. On my Linux box the selected item is hard to read. The shades of grey are very similar. Try changing the colours (disabledforeground/disabledbackground/readonlybackground). This is most probably no Python bug, as options are sent to the Tcl-Interpreter mostly without any change or magic. Greetings, Matthias Kievernagel ---------------------------------------------------------------------- Comment By: Tim Wegener (twegener) Date: 2006-11-21 05:43 Message: Logged In: YES user_id=434490 Originator: YES The following workaround does the job: entry = combobox.subwidget_list['entry'] entry.config(state='readonly') It appears that when doing ComboBox(editable=False) the Entry widget is being set to DISABLED rather than 'readonly'. 
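[For reference, the workaround above can be packaged as a small helper. This is an untested sketch: the subwidget name 'entry' and the 'readonly' state (Tk 8.4+) come from the comment above, and actually running it needs a display plus the Tix extension.]

```python
# Sketch of the workaround quoted above: after creating a non-editable
# Tix ComboBox, switch its entry subwidget from DISABLED to 'readonly'
# so the selected item stays visible on Windows.
import os

try:
    import tkinter.tix as Tix      # Python 3 (module removed in 3.13)
except ImportError:
    try:
        import Tix                 # Python 2
    except ImportError:
        Tix = None                 # Tix extension not available

def make_readonly_combobox(parent, items):
    combo = Tix.ComboBox(parent, label="Choice:", editable=False)
    for item in items:
        combo.insert(Tix.END, item)
    # editable=False leaves the entry DISABLED (grey and blank on
    # Windows); 'readonly' keeps the selected text readable instead.
    combo.subwidget_list['entry'].config(state='readonly')
    return combo

if __name__ == '__main__' and Tix is not None and os.environ.get('DISPLAY'):
    root = Tix.Tk()
    make_readonly_combobox(root, ['red', 'green', 'blue']).pack()
    root.mainloop()
```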
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1600182&group_id=5470

From noreply at sourceforge.net  Fri Jan 19 20:55:24 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Fri, 19 Jan 2007 11:55:24 -0800
Subject: [ python-Bugs-1600182 ] Tix ComboBox entry is blank when not editable
Message-ID: 
Bugs item #1600182, was opened at 2006-11-21 05:27
Message generated for change (Comment added) made by mkiever
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1600182&group_id=5470

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not
just the latest update.

Category: Tkinter
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Tim Wegener (twegener)
Assigned to: Martin v. Löwis (loewis)
Summary: Tix ComboBox entry is blank when not editable

Initial Comment:
When setting editable=False for Tix.ComboBox, when selecting an item from
the combo box, the selected item should appear in the entry field.

In Windows this does not happen, and the entry field is dark grey and
blank. When editable=True the label is visible.

Problem occurs in:
Python 2.3.5 (Windows)
Python 2.4.4 (Windows)
(the above appear to use tk 8.4)

Works fine in:
Python 2.2.2 (Red Hat 9)
Python 2.3.5 (Red Hat 9)
Python 2.4.1 (Red Hat 9)
Python 2.5 (Red Hat 9)
(all of the above with tk 8.3.5, tix 8.1.4)

----------------------------------------------------------------------

Comment By: Matthias Kievernagel (mkiever)
Date: 2007-01-19 19:55

Message:
Logged In: YES 
user_id=1477880
Originator: NO

Or it might just be a problem with default colours. On my Linux box the
selected item is hard to read. The shades of grey are very similar. Try
changing the colours
(disabledforeground/disabledbackground/readonlybackground).
This is most probably no Python bug, as options are sent to the Tcl-Interpreter mostly without any change or magic. Greetings, Matthias Kievernagel ---------------------------------------------------------------------- Comment By: Matthias Kievernagel (mkiever) Date: 2007-01-19 19:48 Message: Logged In: YES user_id=1477880 Originator: NO Or it might just be a problem with default colours. On my Linux box the selected item is hard to read. The shades of grey are very similar. Try changing the colours (disabledforeground/disabledbackground/readonlybackground). This is most probably no Python bug, as options are sent to the Tcl-Interpreter mostly without any change or magic. Greetings, Matthias Kievernagel ---------------------------------------------------------------------- Comment By: Tim Wegener (twegener) Date: 2006-11-21 05:43 Message: Logged In: YES user_id=434490 Originator: YES The following workaround does the job: entry = combobox.subwidget_list['entry'] entry.config(state='readonly') It appears that when doing ComboBox(editable=False) the Entry widget is being set to DISABLED rather than 'readonly'. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1600182&group_id=5470 From noreply at sourceforge.net Fri Jan 19 21:50:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 12:50:46 -0800 Subject: [ python-Bugs-1581476 ] Text search gives bad count if called from variable trace Message-ID: Bugs item #1581476, was opened at 2006-10-20 19:26 Message generated for change (Comment added) made by mkiever You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1581476&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Tkinter
Group: Python 2.4
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Russell Owen (reowen)
Assigned to: Martin v. Löwis (loewis)
Summary: Text search gives bad count if called from variable trace

Initial Comment:
If Text search is called from a variable trace then the count variable is
not updated. I see this with Python 2.4.3 and Aqua Tcl/Tk 8.4.11 on MacOS
X 10.4.7. I have not tested it elsewhere.

Note that this works fine in tcl/tk so this appears to be a Tkinter
issue.

To see the problem run the attached python script. (The script also
includes the equivalent tcl/tk code in its comments, so you can easily
test the issue directly in tcl/tk if desired.)

----------------------------------------------------------------------

Comment By: Matthias Kievernagel (mkiever)
Date: 2007-01-19 20:50

Message:
Logged In: YES 
user_id=1477880
Originator: NO

Same behaviour on Linux and current Python trunk.
In addition I get an IndexError, if I delete the last character of the
search string. Does Tk allow calling search with an empty pattern?
Tkinter could handle this (with a correct result) with the following change in Tkinter.py / Text.search(): if pattern[0] == '-': args.append('--') -> if pattern and pattern[0] == '-': args.append('--') Greetings, Matthias Kievernagel ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1581476&group_id=5470 From noreply at sourceforge.net Sat Jan 20 00:44:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 15:44:13 -0800 Subject: [ python-Bugs-1482402 ] Forwarding events and Tk.mainloop problem Message-ID: Bugs item #1482402, was opened at 2006-05-05 13:15 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1482402&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Tkinter Group: None >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Matthias Kievernagel (mkiever) Assigned to: Martin v. L?wis (loewis) Summary: Forwarding events and Tk.mainloop problem Initial Comment: (Python 2.4.1, tcl/tk 8.4 on Linux) I try to create a widget class (Frame2 in the example) containing a Listbox. This should report an event '<>' when the Listbox produces '<>' or when the selection changes using the Up/Down keys. (see example script) Binding '<>' to the Frame2 widget produces the following traceback when the event is generated: ------------------ listbox select event generated Traceback (most recent call last): File "testevent.py", line 98, in ? 
    tk.mainloop ()
  File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 965, in mainloop
    self.tk.mainloop(n)
  File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 1349, in __call__
    self.widget._report_exception()
  File "/usr/local/lib/python2.4/lib-tk/Tkinter.py", line 1112, in _report_exception
    root = self._root()
AttributeError: Tk instance has no __call__ method
-----------------

So Tkinter tries to report an exception caused by the event, but fails to
do so because of a second exception in _report_exception. (I am not quite
sure I understood this.) The first exception may be a problem with my
code or tcl/tk, but at least the second is a problem of Tkinter.

If you bind '<>' to Tk itself the example works fine.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-20 00:44

Message:
Logged In: YES 
user_id=21627
Originator: NO

Closing as invalid, as requested.

----------------------------------------------------------------------

Comment By: Matthias Kievernagel (mkiever)
Date: 2007-01-19 16:43

Message:
Logged In: YES 
user_id=1477880
Originator: YES

I just found the time to re-investigate my reported bug and found out
that it is due to a subclassing error of my own (redefine of
Misc._root()). Sorry for the false report. Can someone delete/reject or
shoot it down?
Greetings, Matthias Kievernagel ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1482402&group_id=5470 From noreply at sourceforge.net Sat Jan 20 02:16:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 17:16:10 -0800 Subject: [ python-Bugs-1629566 ] documentation of email.utils.parsedate is confusing Message-ID: Bugs item #1629566, was opened at 2007-01-06 15:37 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629566&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Nicholas Riley (nriley) Assigned to: Nobody/Anonymous (nobody) Summary: documentation of email.utils.parsedate is confusing Initial Comment: This sentence in the documentation for email.utils.parsedate confused me: "Note that fields 6, 7, and 8 of the result tuple are not usable." These indices are zero-based, so it's actually fields 7, 8 and 9 that they are talking about (in normal English). Either this should be changed to 7-9 or be re-expressed in a way that makes it clear it's zero-based, for example by using Python indexing notation. Thanks. 
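[The zero-vs-one-based confusion above is easy to demonstrate; the date string below is chosen arbitrarily for illustration. parsedate() returns a 9-tuple in time.struct_time order, so the entries the documentation calls unusable are t[6], t[7] and t[8] in Python indexing, i.e. the 7th, 8th and 9th fields when counting in ordinary English.]

```python
# Demonstration of the indexing discussed above.  parsedate() returns a
# 9-tuple in time.struct_time order; slots 6-8 (weekday, yearday, DST
# flag) are placeholders, not parsed from the string.
from email.utils import parsedate

t = parsedate('Sat, 20 Jan 2007 10:20:49 -0800')
print(t[:6])   # (2007, 1, 20, 10, 20, 49) - the usable fields
print(t[6:])   # placeholder values for weekday/yearday/isdst
```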
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-19 19:16 Message: Logged In: YES user_id=1591633 Originator: NO Link to document in question: http://www.python.org/doc/lib/module-email.utils.html www.python.org/sf/1639973 for doc patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629566&group_id=5470 From noreply at sourceforge.net Sat Jan 20 02:51:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 19 Jan 2007 17:51:28 -0800 Subject: [ python-Bugs-1620945 ] minor inconsistency in socket.close Message-ID: Bugs item #1620945, was opened at 2006-12-22 12:05 Message generated for change (Comment added) made by mark-roberts You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1620945&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jonathan Ellis (ellisj) Assigned to: Nobody/Anonymous (nobody) Summary: minor inconsistency in socket.close Initial Comment: In python 2.5 socket.close, all methods are delegated to _dummy, which raises an error. 
It would be more consistent to delegate each method to its counterpart in _closedsocket; in particular re-closing a closed socket is not intended to raise: def close(self): self._sock.close() self._sock = _closedsocket() for method in _delegate_methods: setattr(self, method, getattr(self._sock, method)) ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-19 19:51 Message: Logged In: YES user_id=1591633 Originator: NO On trunk: >>> import socket >>> s=socket.socket() >>> s.close() >>> s.close() >>> It also seems that the following line will make even that remapping not useful? Isn't it better just to avoid the layer of indirection and directly proceed with assigning to _dummy? line 145: send = recv = recv_into = sendto = recvfrom = recvfrom_into = _dummy ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1620945&group_id=5470 From noreply at sourceforge.net Sat Jan 20 11:55:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 02:55:44 -0800 Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall Message-ID: Feature Requests item #1635335, was opened at 2007-01-14 21:00 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Distutils
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: anatoly techtonik (techtonik)
Assigned to: Nobody/Anonymous (nobody)
Summary: Add registry functions to windows postinstall

Initial Comment:
It would be useful to add regkey_created() or regkey_modified() to
windows postinstall scripts along with directory_created() and
file_created(). Useful for adding installed package to App Paths.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-20 11:55

Message:
Logged In: YES 
user_id=21627
Originator: NO

Can you please elaborate? Where should these functions be defined, what
should they do, and when should they be invoked (by what code)?

Also, what is a "windows postinstall script"?

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470

From noreply at sourceforge.net  Sat Jan 20 14:16:36 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sat, 20 Jan 2007 05:16:36 -0800
Subject: [ python-Bugs-1568240 ] Tix is not included in 2.5 for Windows
Message-ID: 
Bugs item #1568240, was opened at 2006-09-30 11:19
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not
just the latest update.

Category: Build
Group: Python 2.5
Status: Open
Resolution: None
Priority: 7
Private: No
Submitted By: Christos Georgiou (tzot)
Assigned to: Martin v. Löwis (loewis)
Summary: Tix is not included in 2.5 for Windows

Initial Comment:
(I hope "Build" is more precise than "Extension Modules" and "Tkinter"
for this specific bug.)
At least the following files are missing from 2.5 for Windows:

DLLs\tix8184.dll
tcl\tix8184.lib
tcl\tix8.1\*

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-20 14:16

Message:
Logged In: YES 
user_id=21627
Originator: NO

It seems that I can provide Tix binaries only for x86, not for AMD64 or
Itanium. Is that sufficient?

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-03 15:59

Message:
Logged In: YES 
user_id=21627
Originator: NO

Ah, ok. No, assigning this report to Neal or bumping its priority should
not be done.

----------------------------------------------------------------------

Comment By: Christos Georgiou (tzot)
Date: 2007-01-02 11:22

Message:
Logged In: YES 
user_id=539787
Originator: YES

Neal's message is this:
http://mail.python.org/pipermail/python-dev/2006-December/070406.html
and it refers to the 2.5.1 release, not prior to it.

As you see, I refrained from both increasing the priority and assigning
it to Neal, and actually just added a comment to the case with a related
question, since I know you are the one responsible for the windows build
and you already had assigned the bug to you. My adding this comment to
the bug was nothing more or less than the action that felt appropriate,
and still does feel appropriate to me (ie I didn't overstep any limits).
The "we" was just all parties interested, and in this case, the ones I
know are at least you (responsible for the windows build) and I (a user
of Tix on windows).

Happy new year, Martin!

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2006-12-29 23:26

Message:
Logged In: YES 
user_id=21627
Originator: NO

I haven't read Neal's message yet, but I wonder what he could do about
it. I plan to fix this with 2.5.1, there is absolutely no way to fix
this earlier.
I'm not sure who "we" is who would like to bump the bug, and what precisely this bumping would do; tzot, please refrain from changing the priority to higher than 7. These priorities are reserved to the release manager. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2006-12-27 18:46 Message: Logged In: YES user_id=539787 Originator: YES Should we bump the bug up and/or assign it to Neal Norwitz as he requested on Python-Dev? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 From noreply at sourceforge.net Sat Jan 20 15:26:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 06:26:43 -0800 Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall Message-ID: Feature Requests item #1635335, was opened at 2007-01-14 20:00 Message generated for change (Comment added) made by techtonik You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: Add registry functions to windows postinstall Initial Comment: It would be useful to add regkey_created() or regkey_modified() to windows postinstall scripts along with directory_created() and file_created(). Useful for adding installed package to App Paths. 
----------------------------------------------------------------------

>Comment By: anatoly techtonik (techtonik)
Date: 2007-01-20 14:26

Message:
Logged In: YES 
user_id=669020
Originator: YES

Windows postinstall script is bundled with installation, launched after
installation and just before uninstall. It is described here:
http://docs.python.org/dist/postinstallation-script.html#SECTION005310000000000000000

Where these should be defined? I do not know - there are already some
functions that are said to be "available as additional built-in
functions in the installation script." on the page above.

The purpose is to be able to create/delete registry keys during
installation. This should also be reflected in the installation log file
with an appropriate status code so that users could be aware of what's
going on.

I think the functions needed are already defined in
http://docs.python.org/lib/module--winreg.html
but the module is very low-level. I'd rather use an AutoIt-like API:
http://www.autoitscript.com/autoit3/docs/functions/RegRead.htm
http://www.autoitscript.com/autoit3/docs/functions/RegWrite.htm
http://www.autoitscript.com/autoit3/docs/functions/RegDelete.htm

----------------------------------------------------------------------

Comment By: Martin v. Löwis (loewis)
Date: 2007-01-20 10:55

Message:
Logged In: YES 
user_id=21627
Originator: NO

Can you please elaborate? Where should these functions be defined, what
should they do, and when should they be invoked (by what code)?

Also, what is a "windows postinstall script"?
----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470

From noreply at sourceforge.net  Sat Jan 20 19:07:43 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sat, 20 Jan 2007 10:07:43 -0800
Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall
Message-ID: 
Feature Requests item #1635335, was opened at 2007-01-14 21:00
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not
just the latest update.

Category: Distutils
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: anatoly techtonik (techtonik)
>Assigned to: Thomas Heller (theller)
Summary: Add registry functions to windows postinstall

Initial Comment:
It would be useful to add regkey_created() or regkey_modified() to
windows postinstall scripts along with directory_created() and
file_created(). Useful for adding installed package to App Paths.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-20 19:07

Message:
Logged In: YES 
user_id=21627
Originator: NO

Thomas, what do you think?

----------------------------------------------------------------------

Comment By: anatoly techtonik (techtonik)
Date: 2007-01-20 15:26

Message:
Logged In: YES 
user_id=669020
Originator: YES

Windows postinstall script is bundled with installation, launched after
installation and just before uninstall. It is described here:
http://docs.python.org/dist/postinstallation-script.html#SECTION005310000000000000000

Where these should be defined?
I do not know - there are already some functions that are said to be "available as additional built-in functions in the installation script." on the page above. The purpose is to be able to create/delete registry keys during installation. This should also be reflected in installation log file with appropriate status code so that users could be aware of what's going on. I think the functions needed are already defined in http://docs.python.org/lib/module--winreg.html but the module is very low-level. I'd rather use Autoit like API - http://www.autoitscript.com/autoit3/docs/functions/RegRead.htm http://www.autoitscript.com/autoit3/docs/functions/RegWrite.htm http://www.autoitscript.com/autoit3/docs/functions/RegDelete.htm ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 11:55 Message: Logged In: YES user_id=21627 Originator: NO Can you please elaborate? Where should these functions be defined, what should they do, and when should they be invoked (by what code)? Also, what is a "windows postinstall script"? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 From noreply at sourceforge.net Sat Jan 20 19:20:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 10:20:49 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-20 18:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. 
If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 15:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s, re-reading, that means someone else has already modified the mailbox. Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). 
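[Editor's note: the copy-back fix proposed in the initial comment can be illustrated in isolation. The sketch below is a hand-rolled reconstruction, not the attached patch: it writes the new contents to a temporary file, copies them back over the original, and truncates, so the mailbox keeps its inode and a process blocked on an fcntl() lock of the old file still ends up writing to the real mailbox rather than a deleted one.]

```python
import os
import shutil
import tempfile

def flush_copy_back(path, new_contents):
    """Rewrite `path` in place without replacing its inode.

    A rename-over-the-original would leave other processes that already
    hold the old file open delivering into a deleted file; copying back
    and truncating avoids that failure mode (assuming file.truncate()
    is available, which is the case the thread calls the easy one).
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_contents)
        with open(tmp, "rb") as src, open(path, "rb+") as dst:
            shutil.copyfileobj(src, dst)
            dst.truncate()  # drop the tail if the file shrank
    finally:
        os.unlink(tmp)
```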
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. 
The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 20:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 19:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 
53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 06:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 19:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this?
File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 19:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 18:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 17:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. 
I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that.
File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. 
This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. 
Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. 
mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. 
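[Editor's note: the key-preserving regeneration discussed in this thread (the mailbox-update-toc approach, where a _user_toc maps issued keys onto sequence numbers in the file) can be modelled in a few lines. This is a toy illustration with invented names, not the actual patch; like the patch, it assumes equal sequence numbers mean the same message, and for simplicity it treats any vanished messages as removed from the end.]

```python
def rebuild_user_toc(user_toc, next_user_key, new_count):
    """Toy model of reconciling issued keys with a re-read mailbox.

    `user_toc` maps keys handed to the application onto sequence numbers
    in the file.  After re-reading the file (now `new_count` messages),
    messages that still exist keep their old keys, keys whose messages
    vanished are dropped, and newly appeared messages get fresh keys.
    """
    rebuilt = {k: seq for k, seq in user_toc.items() if seq < new_count}
    known = set(rebuilt.values())
    for seq in range(new_count):
        if seq not in known:  # a message another process appended
            rebuilt[next_user_key] = seq
            next_user_key += 1
    return rebuilt, next_user_key
```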
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sat Jan 20 19:27:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 10:27:08 -0800 Subject: [ python-Bugs-1637943 ] Problem packaging wx application with py2exe. Message-ID: Bugs item #1637943, was opened at 2007-01-17 10:10 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: None Group: Python 2.5 Status: Closed Resolution: Invalid Priority: 5 Private: No Submitted By: Indy (indy90) Assigned to: Nobody/Anonymous (nobody) Summary: Problem packaging wx application with py2exe. Initial Comment: I have created a minimal wx application, which runs fine. However, when I package it with py2exe and I try to run the .exe file, an error occurs, the program crashes (before even starting) and a pop-up box says to look at the log file for the error trace. It says that wx/_core_.pyd failed to be loaded (this file exists in my filesystem - I have checked). When I skip "zipfile = None" in the setup() function, another pop-up box also appears, and says that a DLL failed to be loaded. Python 2.5 wxPython 2.8.0.1 py2exe 0.6.6 ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 10:27 Message: Logged In: YES user_id=341410 Originator: NO For reference, this may or may not be related to having gdiplus.dll and/or the msvcrt71.dll (or something like that) in the same path as the executable that runs your program. Try to move those into the same path as the executable you are trying to run and see if that helps. If it doesn't, feel free to post on the wxPython-users at lists.wxwidgets.org mailing list. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-17 10:40 Message: Logged In: YES user_id=357491 Originator: NO This is the bug tracker for the Python programming language. Please report this issue to the py2exe development team. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637943&group_id=5470 From noreply at sourceforge.net Sat Jan 20 19:35:53 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 10:35:53 -0800 Subject: [ python-Feature Requests-1637926 ] Empty class 'Object' Message-ID: Feature Requests item #1637926, was opened at 2007-01-17 09:51 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637926&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: kxroberto (kxroberto) Assigned to: Nobody/Anonymous (nobody) Summary: Empty class 'Object' Initial Comment: An empty class 'Object' in builtins, which can be instantiated (with optional inline arguments as attributes (like dict)), and attributes added. Convenience - Easy OO variable container - known to pickle etc. http://groups.google.com/group/comp.lang.python/msg/3ff946e7da13dba9 http://groups.google.de/group/comp.lang.python/msg/a02f0eb4efb76b24 idea: class X(object): def __init__(self,_d={},**kwargs): kwargs.update(_d) self.__dict__=kwargs class Y(X): def __repr__(self): return ''%self.__dict__ ------ x=X(spam=1) x.a=3 Robert ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 10:35 Message: Logged In: YES user_id=341410 Originator: NO This has been requested in various forms over the years. See the "bunch" discussion on the python-dev mailing list from over a year ago. There may have even been a PEP. 
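[Editor's note: the idea code in the initial comment above was flattened in the archive, and the repr's format string was lost along with other angle-bracket text. A runnable reconstruction of the proposal might look like the following; `Bunch` is an invented name and the repr format is a guess, not the submitter's original.]

```python
class Bunch(object):
    """Empty attribute container in the spirit of the proposal: an
    optional positional dict and/or keyword arguments become instance
    attributes, and further attributes can be added freely.
    """
    def __init__(self, _d=None, **kwargs):
        if _d:
            kwargs.update(_d)
        self.__dict__ = kwargs

    def __repr__(self):  # format string guessed; original was garbled
        return "<Bunch %r>" % (self.__dict__,)

# Usage as in the original idea:
x = Bunch(spam=1)
x.a = 3
```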
I believe the general consensus was "it would be convenient sometimes, but it is *trivial* to implement it as necessary". Also, not every X-line function or class should be included with Python. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1637926&group_id=5470 From noreply at sourceforge.net Sat Jan 20 19:39:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 10:39:30 -0800 Subject: [ python-Feature Requests-1567331 ] logging.RotatingFileHandler has no "infinite" backupCount Message-ID: Feature Requests item #1567331, was opened at 2006-09-28 14:36 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Vinay Sajip (vsajip) Summary: logging.RotatingFileHandler has no "infinite" backupCount Initial Comment: It seems to me that logging.RotatingFileHandler should have a way to spell "never delete old log files". This is useful in situations where you want an external process (manual or automatic) make decisions about deleting log files. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 10:39 Message: Logged In: YES user_id=341410 Originator: NO What about an optional different semantic for log renaming? Rather than log -> log.1, log -> log.+1, so if you have log, log.1, log.2; log -> log.3 and log gets created anew. I've used a similar semantic in other logging packages, and it works pretty well. 
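[Editor's note: the alternative renaming semantic described in the comment above (leave existing backups alone; move the live log to the next free suffix) can be sketched as follows. This is an illustration of the suggestion, not the behaviour of logging.RotatingFileHandler; the function name is invented.]

```python
import os
import re

def rotate_forward(path):
    """Rename `path` to path.N+1, where N is the highest existing
    numeric suffix, leaving older files untouched.  Rotation cost is a
    single rename no matter how many backups have accumulated, which
    sidesteps the arbitrary-renaming-time objection to large counts.
    """
    directory = os.path.dirname(os.path.abspath(path))
    pattern = re.compile(re.escape(os.path.basename(path)) + r"\.(\d+)$")
    highest = 0
    for name in os.listdir(directory):
        m = pattern.match(name)
        if m:
            highest = max(highest, int(m.group(1)))
    os.rename(path, "%s.%d" % (path, highest + 1))
```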
It would also allow for users to have an "infinite" count of logfiles (if that is what they want). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 08:44 Message: Logged In: YES user_id=308438 Originator: NO The problem with this is that on rollover, RotatingFileHandler renames old logs: rollover.log.3 -> rollover.log.4, rollover.log.2 -> rollover.log.3, rollover.log.1 -> rollover.log.2, rollover.log -> rollover.log.1, and a new rollover.log is opened. With an arbitrary number of old log files, this leads to arbitrary renaming time - which could cause long pauses due to logging, not a good idea. If you are using e.g. logrotate or newsyslog, or a custom program to do logfile rotation, you can use the new logging.handlers.WatchedFileHandler handler (meant for use on Unix/Linux only - on Windows, logfiles can't be renamed or moved while in use and so the requirement doesn't arise) which watches the logged-to file to see when it changes. This has recently been checked into SVN trunk. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 From noreply at sourceforge.net Sat Jan 20 19:43:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 10:43:31 -0800 Subject: [ python-Feature Requests-1639002 ] add type definition support Message-ID: Feature Requests item #1639002, was opened at 2007-01-18 13:53 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Interpreter Core Group: Python 2.6 Status: Closed Resolution: None Priority: 5 Private: No Submitted By: djnet (djnet) Assigned to: Nobody/Anonymous (nobody) Summary: add type definition support Initial Comment: Hi, I'm used to the Java language. When I use a good Java IDE, the autocompletion is very effective (Python's cannot be as effective). For example, if I enter the following text: Date lDate=new Date(); lDate.[TAB_KEY] then the editor can display all the methods available for my 'lDate' object; it can do this because it knows the object's type. This is very convenient and allows you to use a new API without browsing the API documentation! I think such autocompletion could be achieved in Python simply: it only needs a "type definition" syntax. Of course, the type definition should NOT be MANDATORY! So, is this a good idea? David ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 10:43 Message: Logged In: YES user_id=341410 Originator: NO FYI, WingIDE and a few other Python IDEs/editors offer a pseudo-syntax for defining such things to help with such introspection. Sometimes it is code that is actually executed when the program is run, sometimes it is comments. You may consider looking into this further before posting to the python-ideas list. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-19 10:10 Message: Logged In: YES user_id=849994 Originator: NO If what you're suggesting is static typing, please go to the python-ideas mailing list and discuss it there. Changes of a scope that large shouldn't be discussed in an issue tracker.
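Josiah's point about executable pseudo-syntax can be illustrated with a minimal sketch (the function and values below are hypothetical, not from the tracker): some Python IDEs, Wing IDE among them, have recognized a runtime isinstance() assertion as a type hint and used it to drive autocompletion, with no change to the language itself.

```python
# Hypothetical sketch of the "code that is actually executed" style of
# type hinting mentioned above: the assertion documents (and at run time
# enforces) the parameter's type, and an IDE that understands the idiom
# can offer datetime.date attributes after "when.".
import datetime

def day_after(when):
    # Both a runtime check and a machine-readable hint for the editor.
    assert isinstance(when, datetime.date)
    return when + datetime.timedelta(days=1)

tomorrow = day_after(datetime.date(2007, 1, 20))
```

The hint is entirely optional, matching the submitter's request that any type declaration not be mandatory.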
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1639002&group_id=5470 From noreply at sourceforge.net Sat Jan 20 23:53:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 14:53:15 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 10:56 Message generated for change (Comment added) made by amonthei You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files about 7000 bytes wide and several hundred thousand lines long, some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module: reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Andy Monthei (amonthei) Date: 2007-01-20 16:53 Message: Logged In: YES user_id=1693612 Originator: YES I have had no luck creating random data to reproduce the problem, which leads me to the conclusion that it was the data itself.
Using a hex editor I find no problem with the line breaks. The data that triggers this bug is transferred several times before it gets to me. It originates on a Unix box, then goes to an IBM mainframe, then to my Windows machine, and through many updates along the way. It may be an EBCDIC/ASCII conversion or possibly something to do with the mainframe to PC transfer. Whatever it is, it's in the data itself. The only thing that bothers me is that Java somehow is not affected by this bad data. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-18 09:34 Message: Logged In: YES user_id=1693612 Originator: YES I am using open() for reading the file, no other features. I have also had fileinput.input(fileList) compound the problem. Each file that this has happened to is a fixed block file of either 6990 or 7700 bytes wide, but this I think is insignificant. When looking at the file in a hex editor everything looks fine, and a small Java program using a buffered reader will give me the correct line count when Python does not. Using something like fp.read(8192) I'm sure might temporarily solve my problem, but I will keep working on getting a file I can upload. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2007-01-18 03:23 Message: Logged In: YES user_id=89016 Originator: NO Are you using any of the unicode reading features (i.e. codecs.EncodedFile etc.) or are you using plain open() for reading the file? ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-18 01:12 Message: Logged In: YES user_id=1591633 Originator: NO I don't know if this helps: I spent the last little while creating / reading random files that all (seemingly) matched the description you gave us. None of these files failed to read properly.
(e.g., have the right amount of rows with a line length that seemingly was the right line. Definitely no doubling lines). Perusing the file source code found a detailed discussion of fgets vs fgetc for finding the next line in the file. Have you tried reading the file with fp.read(8192) or similar? Hopefully you're able to reproduce the bug with scrubbed data (because I couldn't construct random data to do so). Good luck. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 23:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 15:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 16:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? 
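Mark's fp.read(8192) suggestion amounts to bypassing the line-reading machinery and consuming the file in exact-size chunks. For a fixed-block file with a known record width, the idea can be sketched like this (the helper name and the 10-byte width are illustrative, not from the report, which involved 6990- or 7700-byte records):

```python
# Sketch of the workaround suggested above: read fixed-width records
# directly instead of iterating line by line, so a missed newline in the
# line-splitting code cannot merge two records.
import io

def read_records(fp, width):
    """Yield fixed-width records, stripping the trailing CRLF."""
    while True:
        chunk = fp.read(width + 2)  # payload plus the two-byte \r\n
        if not chunk:
            break
        yield chunk.rstrip(b"\r\n")

# Usage with an in-memory file standing in for the real data:
data = b"A" * 10 + b"\r\n" + b"B" * 10 + b"\r\n"
records = list(read_records(io.BytesIO(data), width=10))
```

Because every record has the same width, this approach also makes it easy to verify the expected record count against the file size.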
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Sun Jan 21 01:46:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 16:46:35 -0800 Subject: [ python-Bugs-1636950 ] Newline skipped in "for line in file" Message-ID: Bugs item #1636950, was opened at 2007-01-16 08:56 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Andy Monthei (amonthei) Assigned to: Nobody/Anonymous (nobody) Summary: Newline skipped in "for line in file" Initial Comment: When processing huge fixed block files of about 7000 bytes wide and several hundred thousand lines long some pairs of lines get read as one long line with no line break when using "for line in file:". The problem is even worse when using the fileinput module and reading in five or six huge files consisting of 4.8 million records causes several hundred pairs of lines to be read as single lines. When a newline is skipped it is usually followed by several more in the next few hundred lines. I have not noticed any other characters being skipped, only the line break. O.S. Windows (5, 1, 2600, 2, 'Service Pack 2') Python 2.5 ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-20 16:46 Message: Logged In: YES user_id=357491 Originator: NO Well, with Andy saying he can't reproduce the problem I am going to close as invalid. 
Andy, if you ever happen to be able to upload data that triggers it, then please re-open this bug. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-20 14:53 Message: Logged In: YES user_id=1693612 Originator: YES I have had no luck creating random data to reproduce the problem, which leads me to the conclusion that it was the data itself. Using a hex editor I find no problem with the line breaks. The data that triggers this bug is transferred several times before it gets to me. It originates on a Unix box, then goes to an IBM mainframe, then to my Windows machine, and through many updates along the way. It may be an EBCDIC/ASCII conversion or possibly something to do with the mainframe to PC transfer. Whatever it is, it's in the data itself. The only thing that bothers me is that Java somehow is not affected by this bad data. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-18 07:34 Message: Logged In: YES user_id=1693612 Originator: YES I am using open() for reading the file, no other features. I have also had fileinput.input(fileList) compound the problem. Each file that this has happened to is a fixed block file of either 6990 or 7700 bytes wide, but this I think is insignificant. When looking at the file in a hex editor everything looks fine, and a small Java program using a buffered reader will give me the correct line count when Python does not. Using something like fp.read(8192) I'm sure might temporarily solve my problem, but I will keep working on getting a file I can upload. ---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2007-01-18 01:23 Message: Logged In: YES user_id=89016 Originator: NO Are you using any of the unicode reading features (i.e. codecs.EncodedFile etc.) or are you using plain open() for reading the file?
---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 23:12 Message: Logged In: YES user_id=1591633 Originator: NO I don't know if this helps: I spent the last little while creating / reading random files that all (seemingly) matched the description you gave us. None of these files failed to read properly. (e.g., have the right amount of rows with a line length that seemingly was the right line. Definitely no doubling lines). Perusing the file source code found a detailed discussion of fgets vs fgetc for finding the next line in the file. Have you tried reading the file with fp.read(8192) or similar? Hopefully you're able to reproduce the bug with scrubbed data (because I couldn't construct random data to do so). Good luck. ---------------------------------------------------------------------- Comment By: Mark Roberts (mark-roberts) Date: 2007-01-17 21:24 Message: Logged In: YES user_id=1591633 Originator: NO How wide are the min and max widths of the lines? This problem is of particular interest to me. ---------------------------------------------------------------------- Comment By: Andy Monthei (amonthei) Date: 2007-01-17 13:58 Message: Logged In: YES user_id=1693612 Originator: YES I can not upload the files that trigger this because of the data that is in them but I am working on getting around that. In my data line 617391 in a fixed block file of 6990 bytes wide gets read in with the next line after it. The line break is 0d0a (same as the others) where the bug happens so I am wondering if it is a buffer issue where the linebreak falls at the edge, however no other characters are ever missed. The total file is 888420 lines and this happens in four spots. I will hopefully have a file to send soon. 
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-16 14:33 Message: Logged In: YES user_id=357491 Originator: NO Do you happen to have a sample you could upload that triggers the bug? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1636950&group_id=5470 From noreply at sourceforge.net Sun Jan 21 04:16:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 20 Jan 2007 19:16:09 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. 
I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-20 22:16 Message: Logged In: YES user_id=11375 Originator: NO I'm starting to lose track of all the variations on the bug. Maybe we should just add more warnings to the documentation about locking the mailbox when modifying it and not try to fix this at all. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-20 13:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. 
To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 10:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s, re-reading, that means someone else has already modified the mailbox. Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. 
Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. 
It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). 
I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()?
Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. 
So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110.
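The fix proposed in the initial comment - copying the temporary file's contents back over the original instead of renaming it - can be sketched roughly as follows (a simplified, hypothetical illustration, not the attached mailbox.py patch; copy_back_flush is an invented name). The key property is that the inode other processes hold an fcntl() lock on never changes, so a blocked writer cannot end up delivering into a deleted file.

```python
# Hedged sketch of the "copy back" flush strategy discussed above:
# stage the new contents in a temporary file, then copy them back into
# the still-open original file object and truncate, rather than
# renaming the temporary file over the mailbox.
import os
import tempfile

def copy_back_flush(orig, new_contents):
    """Rewrite the open file `orig` (mode r+b) in place with `new_contents`."""
    fd, tmp_path = tempfile.mkstemp()
    try:
        with os.fdopen(fd, "wb") as tmp:
            tmp.write(new_contents)
        with open(tmp_path, "rb") as tmp:
            orig.seek(0)
            while True:
                buf = tmp.read(65536)
                if not buf:
                    break
                orig.write(buf)
        orig.truncate()  # same inode stays in place for other lockers
        orig.flush()
    finally:
        os.remove(tmp_path)
```

As the thread notes, this depends on file.truncate() being available; platforms without it need the separate (and racier) reopen-and-rewrite path.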
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). 
Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. 
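The length check described in the comment above can be sketched as follows (a simplified, hypothetical illustration, not the actual length-checking.diff; the class and exception here are invented stand-ins): remember the file size after each flush, and refuse to flush again if the size has changed underneath us, since that means another process has appended to the mailbox.

```python
# Sketch of the length-checking idea: detect external modification by
# comparing the current file size against the size we last wrote.
class ExternalClashError(Exception):
    """Raised when the file changed size since our last write."""

class LengthCheckingWriter:
    def __init__(self, path):
        self._file = open(path, "r+b")
        self._file.seek(0, 2)           # seek to end of file
        self._size = self._file.tell()  # size as we last saw it

    def flush(self, contents):
        self._file.seek(0, 2)
        if self._file.tell() != self._size:
            raise ExternalClashError("file size changed unexpectedly")
        self._file.seek(0)
        self._file.write(contents)
        self._file.truncate()
        self._file.flush()
        self._size = self._file.tell()
```

Raising on a size mismatch loses no data, at the cost of making the caller decide how to merge the concurrent change.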
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. 
It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. 
I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Sun Jan 21 11:29:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 02:29:24 -0800 Subject: [ python-Bugs-1486663 ] Over-zealous keyword-arguments check for built-in set class Message-ID: Bugs item #1486663, was opened at 2006-05-11 16:17 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Interpreter Core Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 7 Private: No Submitted By: dib (dib_at_work) Assigned to: Georg Brandl (gbrandl) Summary: Over-zealous keyword-arguments check for built-in set class Initial Comment: The fix for bug #1119418 (xrange() builtin accepts keyword arg silently) included in Python 2.4.2c1+ breaks code that passes keyword argument(s) into classes derived from the built-in set class, even if those derived classes explicitly accept those keyword arguments and avoid passing them down to the built-in base class. Simplified version of code in attached BuiltinSetKeywordArgumentsCheckBroken.py fails at (G) due to bug #1119418 if version < 2.4.2c1; if version >= 2.4.2c1 (G) passes thanks to that bug fix, but instead (H) incorrectly-in-my-view fails. [Presume similar cases would fail for xrange and the other classes mentioned in #1119418.] -- David Bruce (Tested on 2.4, 2.4.2, 2.5a2 on linux2, win32.) ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-21 10:29 Message: Logged In: YES user_id=849994 Originator: NO Committed as rev. 53509, 53510 (2.5). ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-17 09:13 Message: Logged In: YES user_id=849994 Originator: NO I'll create the testcases and commit the patch (as well as NEWS entries :) when I find the time. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 07:22 Message: Logged In: YES user_id=33168 Originator: NO Were these changes applied by Raymond? I don't think there were NEWS entries though. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 20:43 Message: Logged In: YES user_id=80475 Originator: NO That looks about right.
Please add test cases that fail without the patch and succeed with the patch. Also, put a comment in Misc/NEWS. If the whole test suite passes, go ahead and check-in to Py2.5.1 and the head. Thanks, Raymond ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-11 19:56 Message: Logged In: YES user_id=849994 Originator: NO Attaching patch. File Added: nokeywordchecks.diff ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 18:30 Message: Logged In: YES user_id=80475 Originator: NO I fixed setobject.c in revisions 53380 and 53381. Please apply similar fixes to all the other places being bitten by the pervasive NoKeywords tests. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-11 00:49 Message: Logged In: YES user_id=80475 Originator: NO My proposed solution: - if(!PyArg_NoKeywords("set()", kwds) + if(type == &PySet_Type && !PyArg_NoKeywords("set()", kwds) ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-10 21:30 Message: Logged In: YES user_id=849994 Originator: NO I'll do that, only in set_init, you have if (!PyArg_UnpackTuple(args, self->ob_type->tp_name, 0, 1, &iterable)) Changing this to use PyArg_ParseTupleAndKeywords would require a format string of "|O:" + self->ob_type->tp_name Is it worth constructing that string each time set_init() is called or should it just be "|O:set" for sets and frozensets? ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2007-01-06 02:26 Message: Logged In: YES user_id=80475 Originator: NO I prefer the approach used by list().
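The failing pattern from the report can be illustrated with a small subclass (the class here is hypothetical, standing in for the attached BuiltinSetKeywordArgumentsCheckBroken.py): it consumes its own keyword argument and never forwards it to set, so with the type check restricted to the base type, construction succeeds.

```python
class TaggedSet(set):
    """A set subclass that accepts its own keyword argument and does
    not pass it down to the built-in base class."""
    def __init__(self, iterable=(), tag=None):
        set.__init__(self, iterable)   # no keywords reach set's init
        self.tag = tag

s = TaggedSet([1, 2, 2, 3], tag='demo')
```

Under the over-zealous check, the `TaggedSet(...)` call itself raised a TypeError before the subclass `__init__` ever saw the keyword.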
---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-20 01:19 Message: Logged In: YES user_id=1326842 See patch #1491939 ---------------------------------------------------------------------- Comment By: Žiga Seilnacht (zseil) Date: 2006-05-19 20:02 Message: Logged In: YES user_id=1326842 This bug was introduced as part of the fix for bug #1119418. It also affects collections.deque. Can't the _PyArg_NoKeywords check simply be moved to set_init and deque_init as it was done for zipimport.zipimporter? array.array doesn't need to be changed, since it already does all of its initialization in its __new__ method. The rest of the types changed in that fix should not be affected, since they are immutable. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-05-11 17:23 Message: Logged In: YES user_id=849994 Raymond, what to do in this case? Note that other built-in types, such as list(), do accept keyword arguments. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1486663&group_id=5470 From noreply at sourceforge.net Sun Jan 21 11:36:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 02:36:05 -0800 Subject: [ python-Bugs-1601399 ] urllib2 does not close sockets properly Message-ID: Bugs item #1601399, was opened at 2006-11-22 21:04 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brendan Jurd (direvus) Assigned to: Nobody/Anonymous (nobody) Summary: urllib2 does not close sockets properly Initial Comment: Python 2.5 (release25-maint, Oct 29 2006, 12:44:11) [GCC 4.1.2 20061026 (prerelease) (Debian 4.1.1-18)] on linux2 I first noticed this when a program of mine (which makes a brief HTTPS connection every 20 seconds) started having some weird crashes. It turned out that the process had a massive number of file descriptors open. I did some debugging, and it became clear that the program was opening two file descriptors for every HTTPS connection it made with urllib2, and it wasn't closing them, even though I was reading all data from the response objects and then explicitly calling close() on them. I found I could easily reproduce the behaviour using the interactive console. Try this while keeping an eye on the file descriptors held open by the python process: To begin with, the process will have the usual FDs 0, 1 and 2 open for std(in|out|err), plus one other. >>> import urllib2 >>> f = urllib2.urlopen("http://www.google.com") Now at this point the process has opened two more sockets. >>> f.read() [... HTML ensues ...] >>> f.close() The two extra sockets are still open. >>> del f The two extra sockets are STILL open. >>> f = urllib2.urlopen("http://www.python.org") >>> f.read() [...] >>> f.close() And now we have a total of four abandoned sockets open. It's not until you terminate the process entirely, or the OS (eventually) closes the socket on idle timeout, that they are closed. Note that if you do the same thing with httplib, the sockets are properly closed: >>> import httplib >>> c = httplib.HTTPConnection("www.google.com", 80) >>> c.connect() A socket has been opened. >>> c.putrequest("GET", "/") >>> c.endheaders() >>> r = c.getresponse() >>> r.read() [...] >>> r.close() And the socket has been closed.
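The asymmetry demonstrated above (httplib closes its socket, urllib2 leaks it) comes down to ownership. A hypothetical sketch of a file-like wrapper that owns its socket, which is exactly what socket._fileobject deliberately does not do; this is illustrative only, not the actual patch:

```python
import socket

class OwningFileObject:
    """File-like wrapper that owns its socket: unlike the 2.x
    socket._fileobject (which only borrows a reference), close()
    here also closes the underlying socket."""
    def __init__(self, sock):
        self._sock = sock
        self._file = sock.makefile('rb')

    def read(self, *args):
        return self._file.read(*args)

    def close(self):
        self._file.close()
        self._sock.close()   # the step the leaking code skipped
```

With such a wrapper, f.close() in the transcript above would release both file descriptors instead of abandoning them until process exit.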
---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-21 10:36 Message: Logged In: YES user_id=849994 Originator: NO Committed patch in rev. 53511, 53512 (2.5). ---------------------------------------------------------------------- Comment By: John J Lee (jjlee) Date: 2007-01-03 23:54 Message: Logged In: YES user_id=261020 Originator: NO Confirmed. The cause is the (ab)use of socket._fileobject by urllib2.AbstractHTTPHandler to provide .readline() and .readlines() methods. _fileobject simply does not close the socket on _fileobject.close() (since in the original intended use of _fileobject, _socketobject "owns" the socket, and _fileobject only has a reference to it). The bug was introduced with the upgrade to HTTP/1.1 in revision 36871. The patch here fixes it: http://python.org/sf/1627441 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1601399&group_id=5470 From noreply at sourceforge.net Sun Jan 21 16:31:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 07:31:47 -0800 Subject: [ python-Bugs-1603907 ] subprocess: error redirecting i/o from non-console process Message-ID: Bugs item #1603907, was opened at 2006-11-27 18:20 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
>Category: None >Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Oren Tirosh (orenti) Assigned to: Peter Åstrand (astrand) Summary: subprocess: error redirecting i/o from non-console process Initial Comment: In IDLE, PythonWin or other non-console interactive Python under Windows: >>> from subprocess import * >>> Popen('cmd', stdout=PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- Popen('', stdout=PIPE) File "C:\python24\lib\subprocess.py", line 533, in __init__ (p2cread, p2cwrite, File "C:\python24\lib\subprocess.py", line 593, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\python24\lib\subprocess.py", line 634, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The same command in a console window is successful. Why it happens: subprocess assumes that GetStdHandle always succeeds, but when there is no console it returns None. DuplicateHandle then complains about getting a non-integer. This problem does not happen when redirecting all three standard handles. Solution: Replace None with -1 (INVALID_HANDLE_VALUE) in _make_inheritable. Patch attached. ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-21 16:31 Message: Logged In: YES user_id=344921 Originator: NO Since the suggested patches are not ready for commit, I'm moving this issue to "bugs" instead. ---------------------------------------------------------------------- Comment By: Oren Tirosh (orenti) Date: 2007-01-07 19:13 Message: Logged In: YES user_id=562624 Originator: YES Oops. The new patch does not solve it in all cases in the win32api version, either...
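The proposed fix amounts to normalizing the no-console case before the handle is duplicated. A platform-independent sketch of just that normalization step (the function name here is hypothetical; the real change lives inside subprocess._make_inheritable on Windows):

```python
INVALID_HANDLE_VALUE = -1  # Win32's (HANDLE)-1

def normalize_std_handle(handle):
    """GetStdHandle() returns None when the process has no console
    (IDLE, PythonWin, ...); DuplicateHandle requires an integer, so
    map None to INVALID_HANDLE_VALUE, as the attached patch does."""
    if handle is None:
        return INVALID_HANDLE_VALUE
    return handle
```

Whether duplicating -1 is actually legal is exactly the question debated in the comments below; the sketch only shows where the TypeError came from.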
---------------------------------------------------------------------- Comment By: Oren Tirosh (orenti) Date: 2007-01-07 19:09 Message: Logged In: YES user_id=562624 Originator: YES If you duplicate INVALID_HANDLE_VALUE you get a new valid handle to nothing :-) I guess the code really should not rely on this undocumented behavior. The reason I didn't return INVALID_HANDLE_VALUE directly is because DuplicateHandle returns a _subprocess_handle object, not an int. It's expected to have a .Close() method elsewhere in the code. Because of subtle differences between the behavior of the _subprocess and win32api implementations of GetStdHandle, solving this issue in this case gets quite messy! File Added: subprocess-noconsole2.patch ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-07 11:58 Message: Logged In: YES user_id=344921 Originator: NO This patch looks very interesting. However, it feels a little bit strange to call DuplicateHandle with a handle of -1. Is this really allowed? What will DuplicateHandle return in this case? INVALID_HANDLE_VALUE? In that case, isn't it better to return INVALID_HANDLE_VALUE directly? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470 From noreply at sourceforge.net Sun Jan 21 16:37:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 07:37:10 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 16:46 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Florent Rougon (frougon) >Assigned to: Peter ?strand (astrand) Summary: Problem running a subprocess Initial Comment: Hello, I have a problem running a subprocess from Python (see below). I first ran into it with the subprocess module, but it's also triggered by a simple os.fork() followed by os.execvp(). So, what is the problem, exactly? I have written the exact same minimal program in C and in Python, which uses fork() and execvp() in the most straightforward way to run the following command: transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png (whose effect is to extract the 100th frame of /tmp/file.mpg and store it into snapshot.png) The C program runs fast with no error, while the one in Python takes from 60 to 145 times longer (!), and triggers error messages from transcode. This shouldn't happen, since both programs are merely calling transcode in the same way to perform the exact same thing. Experiments ------------ 1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 2 PS) [the first time fills the block IO cache], and store the output in extract_frame.output: % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.82s user 0.33s system 53% cpu 2.175 total % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.79s user 0.29s system 96% cpu 1.118 total Basically, this takes 1 or 2 seconds. extract_frame.output is attached. Second, I run the Python program (extract_frame.py) on the same .mpg file, and store the output in extract_frame.py.output: % time ./extract_frame.py >extract_frame.py.output 2>&1 ./extract_frame.py > extract_frame.py.output 2>& 1 81.59s user 25.98s system 66% cpu 2:42.51 total This takes more than 2 *minutes*, not seconds! 
(of course, the system is idle for all tests) In extract_frame.py.output, the following error message appears quickly after the process is started: failed to write Y plane of frame(demuxer.c) write program stream packet: Broken pipe which is in fact composed of two error messages, the second one starting at "(demuxer.c)". Once these messages are printed, the transcode subprocesses[1] seem to hang (with relatively high CPU usage), but eventually complete, after 2 minutes or so. There are no such error messages in extract_frame.output. 2. Same test with another .mpg file. As far as time is concerned, we have the same problem: [C program] % time ./extract_frame >extract_frame.output2 2>&1 ./extract_frame > extract_frame.output2 2>& 1 0.73s user 0.28s system 43% cpu 2.311 total [Python program] % time ./extract_frame.py >extract_frame.py.output2 2>&1 ./extract_frame.py > extract_frame.py.output2 2>& 1 92.84s user 12.20s system 76% cpu 2:18.14 total We also get the first error message in extract_frame.py.output2: failed to write Y plane of frame when running extract_frame.py, but this time, we do *not* have the second error message: (demuxer.c) write program stream packet: Broken pipe All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge for 2.3 and 2.4, vanilla Python 2.5). % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion' 2.5 (r25:51908, Jan 5 2007, 17:35:09) [GCC 3.3.5 (Debian 1:3.3.5-13)] 20500f0 % transcode --version transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg I'd hazard that Python is tweaking some process or threading parameter that is inherited by subprocesses and disturbs transcode, which doesn't happen when calling fork() and execvp() from a C program, but am unfortunately unable to precisely diagnose the problem. Many thanks for considering. Regards, Florent [1] Plural because the transcode process spawns several childs: tcextract, tcdemux, etc. 
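The fork()+execvp() pattern the report compares against its C counterpart looks roughly like this. This is a generic sketch, not the attached extract_frame.py (which runs the transcode command line given above):

```python
import os

def run(argv):
    """Run argv[0] (searched on PATH) with fork()+execvp() and
    return its exit status, mirroring the minimal C test program."""
    pid = os.fork()
    if pid == 0:                      # child
        try:
            os.execvp(argv[0], argv)  # only returns on failure
        finally:
            os._exit(127)             # exec failed: exit without cleanup
    _, status = os.waitpid(pid, 0)    # parent: wait for the child
    return os.WEXITSTATUS(status)
```

The report's claim is that this sequence behaves differently for transcode depending on whether the calling process is the C program or the Python interpreter.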
---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-21 16:37 Message: Logged In: YES user_id=344921 Originator: NO >That's the only thing I managed to get with the C version. But with the >Python version, if I don't list the contents of /proc//fd immediately >after the transcode process started, I find it very hard to believe that just listing the contents of a kernel-virtual directory can change the behaviour of an application. I think it's much more likely that you have a timing issue. Since nothing indicates that there's actually a problem with the subprocess module, I'm closing this bug for now. After all, it's transcode that runs slowly and gives errors. This suggests that the problem is actually in transcode rather than in the Python subprocess module. Please re-open this bug if you find any indication that it's actually subprocess that does something wrong.
lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output lr-x------ 1 flo users 64 2007-01-14 00:05 3 -> pipe:[40641] lr-x------ 1 flo users 64 2007-01-14 00:05 4 -> pipe:[40642] That's the only thing I managed to get with the C version. But with the Python version, if I don't list the contents of /proc//fd immediately after the transcode process started, I get this instead: total 3 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output No pipes anymore. Only the 3 standard fds. Note: I performed these tests with the .mpg file that does *not* cause the "Broken pipe" message to appear; therefore, the broken pipe in question is probably unrelated to those we saw disappear in this experiment (transcode launches several processes such as tcdecode, tcextract, etc. all communicating via pipes; I suppose the "Broken pipe" message shows up when one of these programs fails, for reasons we have yet to discover). Regarding your mentioning of close_fds, if I am not mistaken, it's only an optional argument of subrocess.Popen(). I did try to set it to True when first running into the problem, and it didn't help. But now, I am using basic fork() and execvp() (see the attachments), so there is no such close_fds option, right? Thanks. 
Florent ---------------------------------------------------------------------- Comment By: Peter ?strand (astrand) Date: 2007-01-13 23:14 Message: Logged In: YES user_id=344921 Originator: NO The first thing to check is if the subprocesses have different sets up file descriptors when you launch them from Python and C, respectively. On Linux, do /proc/$thepid/fd in both cases and compare the output. Does it matter if you use close_fds=1? ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sun Jan 21 16:45:48 2007 From: 
noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 07:45:48 -0800 Subject: [ python-Bugs-1598181 ] subprocess.py: O(N**2) bottleneck Message-ID: Bugs item #1598181, was opened at 2006-11-17 07:40 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Ralf W. Grosse-Kunstleve (rwgk) Assigned to: Peter ?strand (astrand) Summary: subprocess.py: O(N**2) bottleneck Initial Comment: subprocess.py (Python 2.5, current SVN, probably all versions) contains this O(N**2) code: bytes_written = os.write(self.stdin.fileno(), input[:512]) input = input[bytes_written:] For large but reasonable "input" the second line is rate limiting. Luckily, it is very easy to remove this bottleneck. I'll upload a simple patch. Below is a small script that demonstrates the huge speed difference. The output on my machine is: creating input 0.888417959213 slow slicing input 61.1553330421 creating input 0.863168954849 fast slicing input 0.0163860321045 done The numbers are times in seconds. 
This is the source:

import time
import sys

size = 1000000

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "slow slicing input"
n_out_slow = 0
while True:
    out = input[:512]
    n_out_slow += 1
    input = input[512:]
    if not input:
        break
print time.time()-t0

t0 = time.time()
print "creating input"
input = "\n".join([str(i) for i in xrange(size)])
print time.time()-t0

t0 = time.time()
print "fast slicing input"
n_out_fast = 0
input_done = 0
while True:
    out = input[input_done:input_done+512]
    n_out_fast += 1
    input_done += 512
    if input_done >= len(input):
        break
print time.time()-t0

assert n_out_fast == n_out_slow
print "done"

---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-21 16:45 Message: Logged In: YES user_id=344921 Originator: NO Backported to 2.5, in rev. 53513. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 08:00 Message: Logged In: YES user_id=33168 Originator: NO Peter this is fine for 2.5.1. Please apply and update Misc/NEWS. Thanks! ---------------------------------------------------------------------- Comment By: Ralf W. Grosse-Kunstleve (rwgk) Date: 2007-01-07 16:15 Message: Logged In: YES user_id=71407 Originator: YES Thanks for the fixes! ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-07 15:36 Message: Logged In: YES user_id=344921 Originator: NO Fixed in trunk revision 53295. Is this a good candidate for backporting to 25-maint? ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2007-01-04 19:20 Message: Logged In: YES user_id=1611720 Originator: NO I reviewed the patch--the proposed fix looks good.
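The fix keeps a rolling offset instead of re-slicing the remaining input. Packaged as a standalone helper (the function name is hypothetical, and using memoryview, the modern equivalent of the old buffer() object, avoids copying each chunk as well), a sketch might look like:

```python
def write_chunks(write, data, chunk_size=512):
    """Feed `data` (bytes) to `write` in fixed-size chunks, tracking a
    rolling offset rather than rebuilding the remaining input on each
    iteration, which is what made the old loop O(N**2). `write` is
    assumed to return the number of bytes it consumed, like os.write."""
    view = memoryview(data)          # zero-copy window over data
    written = 0
    while written < len(data):
        written += write(view[written:written + chunk_size])
    return written
```

Because the loop advances by whatever `write` reports, it also handles partial writes, matching the os.write contract in subprocess.py.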
Minor comments: - "input_done" sounds like a flag, not a count of written bytes - buffer() could be used to avoid the 512-byte copy created by the slice ---------------------------------------------------------------------- Comment By: Ralf W. Grosse-Kunstleve (rwgk) Date: 2006-11-17 07:43 Message: Logged In: YES user_id=71407 Originator: YES Sorry, I didn't know the tracker would destroy the indentation. I'm uploading the demo source as a separate file. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1598181&group_id=5470 From noreply at sourceforge.net Sun Jan 21 17:24:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 08:24:37 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Comment added) made by frougon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Closed >Resolution: Works For Me Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Peter ?strand (astrand) Summary: Problem running a subprocess Initial Comment: Hello, I have a problem running a subprocess from Python (see below). I first ran into it with the subprocess module, but it's also triggered by a simple os.fork() followed by os.execvp(). So, what is the problem, exactly? 
I have written the exact same minimal program in C and in Python, which uses fork() and execvp() in the most straightforward way to run the following command: transcode -i /tmp/file.mpg -c 100-101 -o snapshot -y im,null -F png (whose effect is to extract the 100th frame of /tmp/file.mpg and store it into snapshot.png) The C program runs fast with no error, while the one in Python takes from 60 to 145 times longer (!), and triggers error messages from transcode. This shouldn't happen, since both programs are merely calling transcode in the same way to perform the exact same thing. Experiments ------------ 1. First, I run the C program (extract_frame) twice on a first .mpg file (MPEG 2 PS) [the first time fills the block IO cache], and store the output in extract_frame.output: % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.82s user 0.33s system 53% cpu 2.175 total % time ./extract_frame >extract_frame.output 2>&1 ./extract_frame > extract_frame.output 2>& 1 0.79s user 0.29s system 96% cpu 1.118 total Basically, this takes 1 or 2 seconds. extract_frame.output is attached. Second, I run the Python program (extract_frame.py) on the same .mpg file, and store the output in extract_frame.py.output: % time ./extract_frame.py >extract_frame.py.output 2>&1 ./extract_frame.py > extract_frame.py.output 2>& 1 81.59s user 25.98s system 66% cpu 2:42.51 total This takes more than 2 *minutes*, not seconds! (of course, the system is idle for all tests) In extract_frame.py.output, the following error message appears quickly after the process is started: failed to write Y plane of frame(demuxer.c) write program stream packet: Broken pipe which is in fact composed of two error messages, the second one starting at "(demuxer.c)". Once these messages are printed, the transcode subprocesses[1] seem to hang (with relatively high CPU usage), but eventually complete, after 2 minutes or so. 
There are no such error messages in extract_frame.output. 2. Same test with another .mpg file. As far as time is concerned, we have the same problem: [C program] % time ./extract_frame >extract_frame.output2 2>&1 ./extract_frame > extract_frame.output2 2>& 1 0.73s user 0.28s system 43% cpu 2.311 total [Python program] % time ./extract_frame.py >extract_frame.py.output2 2>&1 ./extract_frame.py > extract_frame.py.output2 2>& 1 92.84s user 12.20s system 76% cpu 2:18.14 total We also get the first error message in extract_frame.py.output2: failed to write Y plane of frame when running extract_frame.py, but this time, we do *not* have the second error message: (demuxer.c) write program stream packet: Broken pipe All this is reproducible with Python 2.3, 2.4 and 2.5 (Debian packages in sarge for 2.3 and 2.4, vanilla Python 2.5). % python2.5 -c 'import sys; print sys.version; print "%x" % sys.hexversion' 2.5 (r25:51908, Jan 5 2007, 17:35:09) [GCC 3.3.5 (Debian 1:3.3.5-13)] 20500f0 % transcode --version transcode v1.0.2 (C) 2001-2003 Thomas Oestreich, 2003-2004 T. Bitterberg I'd hazard that Python is tweaking some process or threading parameter that is inherited by subprocesses and disturbs transcode, which doesn't happen when calling fork() and execvp() from a C program, but am unfortunately unable to precisely diagnose the problem. Many thanks for considering. Regards, Florent [1] Plural because the transcode process spawns several children: tcextract, tcdemux, etc. ---------------------------------------------------------------------- >Comment By: Florent Rougon (frougon) Date: 2007-01-21 16:24 Message: Logged In: YES user_id=310088 Originator: YES I never wrote that it was the listing of /proc/<pid>/fd that was changing the behavior of transcode. Please don't put words in my mouth. I wrote that some fds are open soon after the transcode process is started, and quickly closed afterwards, when run from the Python test script.
The rest of your answer again shows that you didn't read the bug report. I'll repeat one last time. The title of this bug report is "Problem running a subprocess". It is *not* "Problem with subprocess.py", although it of course happens with subprocess.py, since this module relies (on POSIX operating systems) on os.fork() and os.exec*(). Yes, I could have reported the problem against subprocess.py, since the problem does exist there. But I tried to be a good citizen and do my part of the job---to the point where I wasn't able to go further. I figured out the problem existed in the basic building blocks of subprocess.py, i.e. os.fork() and os.exec*(), and thus spared you the time needed to find this out on your own. I wrote and attached minimal example programs that reproduce the bug. These programs show that, with a particular program (transcode), a simple fork() + execvp() works fine in C but does not work in Python. *That* is a problem for the Python Library. Finally, to justify the closing of this bug, you wrote: "After all, it's transcode that runs slowly and gives errors. This suggests that the problem is actually in transcode rather than in the Python subprocess module." No, no, no. Transcode works perfectly fine when launched by my shell or the test program in C that I attached to this bug report. It is when run from Python that the aforementioned problems happen. You also wrote in the end: "Please re-open this bug if you find any indication that it's actually subprocess that does something wrong." Of course it's a problem in subprocess, since it's not able to run a program the same way as a simple fork() + execvp() does in C... because of the underlying machinery (fork() and execvp() wrappers, particular settings of the Python process---I don't know), not because of subprocess.py, AFAICT. Hence the title I used for the report. But I won't reopen the bug myself: there is no point in doing so if you don't read the initial report or the comments added later.
If you're willing to read the reports you're claiming to take care of, reopen it yourself. Otherwise, please stop wasting my time and rendering the Python BTS useless. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-21 15:37 Message: Logged In: YES user_id=344921 Originator: NO >That's the only thing I managed to get with the C version. But with the >Python version, if I don't list the contents of /proc/<pid>/fd immediately >after the transcode process started, I find it very hard to believe that just listing the contents of a kernel-virtual directory can change the behaviour of an application. I think it's much more likely that you have a timing issue. Since nothing indicates that there's actually a problem with the subprocess module, I'm closing this bug for now. After all, it's transcode that runs slowly and gives errors. This suggests that the problem is actually in transcode rather than in the Python subprocess module. Please re-open this bug if you find any indication that it's actually subprocess that does something wrong. ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 23:37 Message: Logged In: YES user_id=310088 Originator: YES Hi Peter, At the very beginning, it seems the fds are the same in the child processes running transcode in each implementation (C, Python). With the C version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:12 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:12 ..
lrwx------ 1 flo users 64 2007-01-14 00:12 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:12 1 -> /home/flo/tmp/transcode-test/extract_frame.output l-wx------ 1 flo users 64 2007-01-14 00:12 2 -> /home/flo/tmp/transcode-test/extract_frame.output lr-x------ 1 flo users 64 2007-01-14 00:12 3 -> pipe:[41339] lr-x------ 1 flo users 64 2007-01-14 00:12 4 -> pipe:[41340] With the Python version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output lr-x------ 1 flo users 64 2007-01-14 00:05 3 -> pipe:[40641] lr-x------ 1 flo users 64 2007-01-14 00:05 4 -> pipe:[40642] That's the only thing I managed to get with the C version. But with the Python version, if I don't list the contents of /proc/<pid>/fd immediately after the transcode process started, I get this instead: total 3 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output No pipes anymore. Only the 3 standard fds. Note: I performed these tests with the .mpg file that does *not* cause the "Broken pipe" message to appear; therefore, the broken pipe in question is probably unrelated to those we saw disappear in this experiment (transcode launches several processes such as tcdecode, tcextract, etc. all communicating via pipes; I suppose the "Broken pipe" message shows up when one of these programs fails, for reasons we have yet to discover).
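The listings above were produced with ls; the same inspection can be done from Python by reading Linux's /proc/<pid>/fd directly. A sketch (Linux-specific; `list_fds` is a name introduced here, not part of any attachment in the thread):

```python
import os

def list_fds(pid):
    """Return {fd: target} for a process, read from Linux's /proc."""
    fd_dir = "/proc/%d/fd" % pid
    fds = {}
    for name in os.listdir(fd_dir):
        try:
            fds[int(name)] = os.readlink(os.path.join(fd_dir, name))
        except OSError:
            # An fd (e.g. the one listdir used for the directory) may
            # already be closed by the time we readlink it.
            pass
    return fds

# Inspecting the current process: fds 0, 1 and 2 are the standard streams.
own_fds = list_fds(os.getpid())
```

Run in a loop against the child's pid, this would show the moment the pipe fds (3 and 4 above) disappear.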
Regarding your mentioning of close_fds, if I am not mistaken, it's only an optional argument of subprocess.Popen(). I did try to set it to True when first running into the problem, and it didn't help. But now, I am using basic fork() and execvp() (see the attachments), so there is no such close_fds option, right? Thanks. Florent ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-13 22:14 Message: Logged In: YES user_id=344921 Originator: NO The first thing to check is if the subprocesses have different sets of file descriptors when you launch them from Python and C, respectively. On Linux, list /proc/$thepid/fd in both cases and compare the output. Does it matter if you use close_fds=1? ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 15:49
Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sun Jan 21 17:59:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 08:59:07 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 15:46 Message generated for change (Settings changed) made by frougon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Category: Python Library Group: Python 2.5 Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Peter Åstrand (astrand) Summary: Problem running a subprocess ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sun Jan 21 20:43:31 2007 From:
noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 11:43:31 -0800 Subject: [ python-Bugs-1546442 ] subprocess.Popen can't read file object as stdin after seek Message-ID: Bugs item #1546442, was opened at 2006-08-25 07:52 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: GaryD (gazzadee) Assigned to: Peter Åstrand (astrand) Summary: subprocess.Popen can't read file object as stdin after seek Initial Comment: When I use an existing file object as stdin for a call to subprocess.Popen, then Popen cannot read the file if I have called seek on it more than once. eg. in the following python code: >>> import subprocess >>> rawfile = file('hello.txt', 'rb') >>> rawfile.readline() 'line 1\n' >>> rawfile.seek(0) >>> rawfile.readline() 'line 1\n' >>> rawfile.seek(0) >>> process_object = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE, stderr=subprocess.PIPE) process_object.stdout now contains nothing, implying that nothing was on process_object.stdin. Note that this only applies for a non-trivial seek (ie. where the file-pointer actually changes). Calling seek(0) multiple times in a row does not change anything (obviously). I have not investigated whether this reveals a problem with seek not changing the underlying file descriptor, or a problem with Popen not handling the file descriptor properly. I have attached some complete python scripts that demonstrate this problem. One shows cat working after calling seek once, the other shows cat failing after calling seek twice.
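The underlying mismatch can be reproduced without subprocess at all: a buffered readline() leaves the OS descriptor's offset ahead of the file object's logical position, and the descriptor is what a child process inherits. A minimal sketch (shown with a modern open() and a throwaway temp file; the Python 2.4 file object behaves analogously via its libc stream):

```python
import os
import tempfile

# Create a small scratch file to read from.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as tmp:
    tmp.write("line 1\nline 2\n")
    path = tmp.name

f = open(path, "rb")
f.readline()                                   # returns b"line 1\n"
stream_pos = f.tell()                          # the stream's logical view: 7
fd_pos = os.lseek(f.fileno(), 0, os.SEEK_CUR)  # the descriptor's view: 14
# The buffered readline() pulled the whole (small) file into the stream's
# buffer, so the descriptor sits at end-of-file -- a child given this
# descriptor as stdin would read nothing.
f.close()
os.unlink(path)
```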
Python version being used: Python 2.4.2 (#1, Nov 3 2005, 12:41:57) [GCC 3.4.3-20050110 (Gentoo Linux 3.4.3.20050110, ssp-3.4.3.20050110-0, pie-8.7)] on linux2 ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-21 20:43 Message: Logged In: YES user_id=344921 Originator: NO It's not obvious that the subprocess module is doing anything wrong here. Mixing streams and file descriptors is always problematic and should best be avoided (http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_232.html). However, the subprocess module *does* accept a file object (based on a libc stream), for convenience. For things to work correctly, the application and the subprocess module need to cooperate. I admit that the documentation needs improvement on this topic, though. It's quite easy to demonstrate the problem; you don't need to use seek at all. Here's a simple test case: import subprocess rawfile = file('hello.txt', 'rb') rawfile.readline() p = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE, stderr=subprocess.PIPE) print "File contents from Popen() call to cat:" print p.stdout.read() p.wait() The descriptor offset is at the end, since the stream buffers. http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_233.html describes the need for "cleaning up" a stream when you switch from stream functions to descriptor functions. This is described at http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_235.html#SEC244. The documentation recommends the fclean() function, but it's only available on GNU systems and not in Python. As I understand it, fflush() works well for cleaning an output stream. For input streams, however, things are difficult. fflush() might work sometimes, but to be sure, you must set the file pointer as well. And this does not work for files that are not random access, since there's no way to move the buffered data back to the operating system.
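The "set the file pointer as well" step can be done on the application side before handing the descriptor over, essentially the same os.lseek-based cleanup that ldeller's patch in this thread performs inside Popen. A sketch (`sync_descriptor` is a name introduced here; it only helps for seekable files, exactly as the comment above cautions):

```python
import os
import tempfile

def sync_descriptor(fileobj):
    """Move the OS descriptor's offset to the stream's logical position
    before code reads the raw fd (e.g. a child process inheriting it).
    After this, the stream's read-ahead buffer is stale: use only the
    descriptor from here on."""
    try:
        os.lseek(fileobj.fileno(), fileobj.tell(), os.SEEK_SET)
    except OSError:
        pass  # pipes/ttys are not seekable; nothing can be done there

# Usage: a raw-descriptor reader now sees the unread remainder.
with tempfile.NamedTemporaryFile(mode="w", delete=False) as tmp:
    tmp.write("line 1\nline 2\n")
    path = tmp.name

f = open(path, "rb")
f.readline()                       # stream consumed "line 1\n" (buffered the rest)
sync_descriptor(f)
rest = os.read(f.fileno(), 1024)   # the descriptor now yields b"line 2\n"
f.close()
os.unlink(path)
```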
So, since subprocess cannot reliably deal with this situation, I believe it shouldn't try. I think it makes more sense that the application prepares the file object for low-level operations. There are many other Python modules that use the .fileno() method, for example the select module, and as far as I understand, this module doesn't try to clean streams or anything like that. To summarize: I'm leaning towards a documentation solution. ---------------------------------------------------------------------- Comment By: lplatypus (ldeller) Date: 2006-08-25 09:13 Message: Logged In: YES user_id=1534394 I found the cause of this bug: A libc FILE* (used by python file objects) may hold a different file offset than the underlying OS file descriptor. The posix version of Popen._get_handles does not take this into account, resulting in this bug. The following patch against svn trunk fixes the problem. I don't have permission to attach files to this item, so I'll have to paste the patch here: Index: subprocess.py =================================================================== --- subprocess.py (revision 51581) +++ subprocess.py (working copy) @@ -907,6 +907,12 @@ else: # Assuming file-like object p2cread = stdin.fileno() + # OS file descriptor's file offset does not necessarily match + # the file offset in the file-like object, so do an lseek: + try: + os.lseek(p2cread, stdin.tell(), 0) + except OSError: + pass # file descriptor does not support seek/tell if stdout is None: pass @@ -917,6 +923,12 @@ else: # Assuming file-like object c2pwrite = stdout.fileno() + # OS file descriptor's file offset does not necessarily match + # the file offset in the file-like object, so do an lseek: + try: + os.lseek(c2pwrite, stdout.tell(), 0) + except OSError: + pass # file descriptor does not support seek/tell if stderr is None: pass @@ -929,6 +941,12 @@ else: # Assuming file-like object errwrite = stderr.fileno() + # OS file descriptor's file offset does not necessarily match + #
the file offset in the file-like object, so do an lseek: + try: + os.lseek(errwrite, stderr.tell(), 0) + except OSError: + pass # file descriptor does not support seek/tell return (p2cread, p2cwrite, c2pread, c2pwrite, ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470 From noreply at sourceforge.net Sun Jan 21 21:22:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 12:22:06 -0800 Subject: [ python-Bugs-1634739 ] Problem running a subprocess Message-ID: Bugs item #1634739, was opened at 2007-01-13 16:46 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 Category: Python Library Group: Python 2.5 Status: Closed Resolution: Invalid Priority: 5 Private: No Submitted By: Florent Rougon (frougon) Assigned to: Peter Åstrand (astrand) Summary: Problem running a subprocess ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-21 21:22 Message: Logged In: YES user_id=344921 Originator: NO >I never wrote that it was the listing of /proc/<pid>/fd that was changing >the behavior of transcode. Please don't put words in my mouth. I wrote that >some fds are open soon after the transcode process is started, and quickly >closed afterwards, when run from the Python test script. Sorry about that, I did misunderstand you. >I wrote and attached minimal example programs that reproduce the bug.
The problem is that although extract_frame.py is short and "minimal", it relies on the transcode program, which is a very complex piece of software. I cannot reproduce your problem on my machine (but I haven't tried very hard). >I wrote and attached minimal example programs that reproduce the bug. >These programs show that, with a particular program (transcode), a simple >fork() + execvp() works fine in C but does not work in Python. *That* is a >problem for the Python Library. Again, there's no clear evidence that there's actually some problem with the Python library or even Python itself. Python is very mature software. fork() and execvp() are very heavily used. It's unlikely that there's something fundamentally wrong with them - if so, problems would probably have turned up for other subprocesses as well, not just transcode. It might be some minor difference between your C program and Python. But: It's your job to point out what you think is wrong. >Transcode works perfectly fine when launched by my shell or the test program in C that I attached to this bug report. This doesn't prove that Python is guilty. This only proves that you have failed to reproduce the problem from your C test program and the shell. It might still be a bug in transcode that's only triggered by some corner case or timing issue. Say, a race condition. >Of course it's a problem in subprocess, since it's not able to run a >program the same way as a simple fork() + execvp() does in C. From what you have described, the program is executed exactly in the same way from Python as from your C program (wrt open file descriptors after launch, program arguments etc). >Otherwise, please stop wasting my time and rendering the Python BTS useless. The problem with me wasting your time can easily be solved, I'll just stop trying to help you... 
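For reference, the Python side of the comparison under discussion boils down to very little code. A minimal sketch of the fork() + execvp() pattern (with /bin/true standing in for the real transcode command line, since transcode itself is not assumed to be installed):

```python
import os

def run(argv):
    # Minimal fork() + execvp(), mirroring the attached C test program
    # and what subprocess.Popen does internally on POSIX systems.
    pid = os.fork()
    if pid == 0:                       # child: replace the process image
        try:
            os.execvp(argv[0], argv)
        except OSError:
            os._exit(127)              # exec failed
    _, status = os.waitpid(pid, 0)     # parent: wait for the child
    return status

# /bin/true is a stand-in for the transcode invocation in the bug report.
status = run(["/bin/true"])
```

On a normal exit, `os.WIFEXITED(status)` is true and `os.WEXITSTATUS(status)` gives the child's exit code.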
---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-21 17:24 Message: Logged In: YES user_id=310088 Originator: YES I never wrote that it was the listing of /proc/<pid>/fd that was changing the behavior of transcode. Please don't put words in my mouth. I wrote that some fds are open soon after the transcode process is started, and quickly closed afterwards, when run from the Python test script. The rest of your answer again shows that you didn't read the bug report. I'll repeat one last time. The title of this bug report is "Problem running a subprocess". It is *not* "Problem with subprocess.py", although it of course happens with subprocess.py, since this module relies (on POSIX operating systems) on os.fork() and os.exec*(). Yes, I could have reported the problem against subprocess.py, since the problem does exist there. But I tried to be a good citizen and do my part of the job---to the point where I wasn't able to go further. I figured out the problem existed in the basic building blocks of subprocess.py, i.e. os.fork() and os.exec*(), and thus spared you the time needed to find this out on your own. I wrote and attached minimal example programs that reproduce the bug. These programs show that, with a particular program (transcode), a simple fork() + execvp() works fine in C but does not work in Python. *That* is a problem for the Python Library. Finally, to justify the closing of this bug, you wrote: "After all, it's transcode that runs slowly and gives errors. This suggests that the problem is actually in transcode rather than in the Python subprocess module." No, no, no. Transcode works perfectly fine when launched by my shell or the test program in C that I attached to this bug report. It is when run from Python that the aforementioned problems happen. You also wrote in the end: "Please re-open this bug if you find any indication that it's actually subprocess that does something wrong." 
Of course it's a problem in subprocess, since it's not able to run a program the same way as a simple fork() + execvp() does in C... because of the underlying machinery (fork() and execvp() wrappers, particular settings of the Python process---I don't know), not because of subprocess.py, AFAICT. Hence the title I used for the report. But I won't reopen the bug myself: there is no point in doing so if you don't read the initial report nor the comments added later. If you're willing to read the reports you're claiming to take care of, reopen it yourself. Otherwise, please stop wasting my time and rendering the Python BTS useless. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-21 16:37 Message: Logged In: YES user_id=344921 Originator: NO >That's the only thing I managed to get with the C version. But with the >Python version, if I don't list the contents of /proc/<pid>/fd immediately >after the transcode process started, I find it very hard to believe that just listing the contents of a kernel-virtual directory can change the behaviour of an application. I think it's much more likely that you have a timing issue. Since nothing indicates that there's actually a problem with the subprocess module, I'm closing this bug for now. After all, it's transcode that runs slowly and gives errors. This suggests that the problem is actually in transcode rather than in the Python subprocess module. Please re-open this bug if you find any indication that it's actually subprocess that does something wrong. ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-14 00:37 Message: Logged In: YES user_id=310088 Originator: YES Hi Peter, At the very beginning, it seems the fds are the same in the child processes running transcode in each implementation (C, Python). With the C version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:12 . 
dr-xr-xr-x 4 flo users 0 2007-01-14 00:12 .. lrwx------ 1 flo users 64 2007-01-14 00:12 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:12 1 -> /home/flo/tmp/transcode-test/extract_frame.output l-wx------ 1 flo users 64 2007-01-14 00:12 2 -> /home/flo/tmp/transcode-test/extract_frame.output lr-x------ 1 flo users 64 2007-01-14 00:12 3 -> pipe:[41339] lr-x------ 1 flo users 64 2007-01-14 00:12 4 -> pipe:[41340] With the Python version, I got: total 5 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output lr-x------ 1 flo users 64 2007-01-14 00:05 3 -> pipe:[40641] lr-x------ 1 flo users 64 2007-01-14 00:05 4 -> pipe:[40642] That's the only thing I managed to get with the C version. But with the Python version, if I don't list the contents of /proc/<pid>/fd immediately after the transcode process started, I get this instead: total 3 dr-x------ 2 flo users 0 2007-01-14 00:05 . dr-xr-xr-x 4 flo users 0 2007-01-14 00:05 .. lrwx------ 1 flo users 64 2007-01-14 00:05 0 -> /dev/pts/0 l-wx------ 1 flo users 64 2007-01-14 00:05 1 -> /home/flo/tmp/transcode-test/extract_frame.py.output l-wx------ 1 flo users 64 2007-01-14 00:05 2 -> /home/flo/tmp/transcode-test/extract_frame.py.output No pipes anymore. Only the 3 standard fds. Note: I performed these tests with the .mpg file that does *not* cause the "Broken pipe" message to appear; therefore, the broken pipe in question is probably unrelated to those we saw disappear in this experiment (transcode launches several processes such as tcdecode, tcextract, etc. all communicating via pipes; I suppose the "Broken pipe" message shows up when one of these programs fails, for reasons we have yet to discover). 
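For what it's worth, the /proc snapshots above can also be captured programmatically, which avoids racing `ls` against a short-lived child. A Linux-only sketch (the `fd_map` helper is mine, not part of the attached scripts):

```python
import os

def fd_map(pid):
    # Snapshot /proc/<pid>/fd as {fd: target}, analogous to the
    # ls -l listings quoted in this thread (Linux-specific).
    d = "/proc/%d/fd" % pid
    result = {}
    for name in os.listdir(d):
        try:
            result[int(name)] = os.readlink(os.path.join(d, name))
        except OSError:
            pass  # fd closed between listdir() and readlink()
    return result

# Inspect the current process; for the transcode experiments one would
# instead pass the child's pid immediately after fork().
fds = fd_map(os.getpid())
```

Comparing the dictionaries returned for the C-launched and Python-launched children would show exactly which descriptors differ and when the pipes disappear.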
Regarding your mentioning of close_fds, if I am not mistaken, it's only an optional argument of subprocess.Popen(). I did try to set it to True when first running into the problem, and it didn't help. But now, I am using basic fork() and execvp() (see the attachments), so there is no such close_fds option, right? Thanks. Florent ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-13 23:14 Message: Logged In: YES user_id=344921 Originator: NO The first thing to check is if the subprocesses have different sets of file descriptors when you launch them from Python and C, respectively. On Linux, list /proc/$thepid/fd in both cases and compare the output. Does it matter if you use close_fds=1? ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:52 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:51 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output2 ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:50 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.output ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 Message: Logged In: YES user_id=310088 Originator: YES File Added: extract_frame.py ---------------------------------------------------------------------- Comment By: Florent Rougon (frougon) Date: 2007-01-13 16:49 
Message: Logged In: YES user_id=310088 Originator: YES File Added: Makefile ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1634739&group_id=5470 From noreply at sourceforge.net Sun Jan 21 23:10:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 14:10:25 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. 
If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-21 22:10 Message: Logged In: YES user_id=1504904 Originator: YES Hold on, I have a plan. If _toc is only regenerated on locking, or at the end of a flush(), then the only way self._pending can be set at that time is if the application has made modifications before calling lock(). If we make that an exception-raising offence, then we can assume that self._toc is a faithful representation of the last known contents of the file. That means we can preserve the existing message keys on a reread without any of that _user_toc nonsense. Diff attached, to apply on top of mailbox-unified2. It's probably had even less review and testing than the previous version, but it appears to pass all the regression tests and doesn't change any existing semantics. File Added: mailbox-update-toc-new.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-21 03:16 Message: Logged In: YES user_id=11375 Originator: NO I'm starting to lose track of all the variations on the bug. Maybe we should just add more warnings to the documentation about locking the mailbox when modifying it and not try to fix this at all. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-20 18:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. 
If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 15:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s re-reading, that means someone else has already modified the mailbox. Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). 
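The renumbering hazard described in this exchange is easy to model without mailbox.py itself. This toy simulation (an illustration only, not the module's actual code) shows how regenerating a zero-based table of contents between removals deletes the wrong message:

```python
# Toy model of a _singlefileMailbox table of contents: keys -> messages.
messages = ["msg0", "msg1", "msg2"]        # current file contents
toc = dict(enumerate(messages))            # {0: 'msg0', 1: 'msg1', 2: 'msg2'}

del toc[0]                                 # application removes message 0
messages = [toc[k] for k in sorted(toc)]   # flush() rewrites the file

# _toc is regenerated from the file, renumbering from zero again:
toc = dict(enumerate(messages))            # {0: 'msg1', 1: 'msg2'}

del toc[1]                                 # application *means* old message 1...
survivors = [toc[k] for k in sorted(toc)]  # ...but old message 2 is what died
```

Here `survivors` ends up as `["msg1"]`: the program intended to delete msg0 and msg1, yet msg2 is gone and msg1 remains, exactly the failure mode of the "broken locking" test case.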
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. 
The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 20:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 19:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 
53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 06:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 19:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? 
File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 19:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 18:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 17:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. 
I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. 
File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. 
This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. 
Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. 
mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. 
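The fcntl() behaviour David mentions earlier in the thread — POSIX record locks belong to the process, and all of a process's locks on a file are dropped as soon as it closes any descriptor for that file, even one the lock was never placed on — can be verified with a short POSIX-only script. The probing helper forks a child because a process never conflicts with its own locks:

```python
import fcntl
import os
import tempfile

def other_process_sees_lock(path):
    # Probe from a child process: try a non-blocking exclusive lock.
    pid = os.fork()
    if pid == 0:  # child
        fd = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(0)  # acquired: the parent held no lock
        except IOError:
            os._exit(1)  # would block: the parent's lock is live
    return os.waitpid(pid, 0)[1] >> 8 == 1

tmp = tempfile.NamedTemporaryFile(delete=False)
path = tmp.name
tmp.close()

fd1 = os.open(path, os.O_RDWR)  # descriptor the lock is placed on
fd2 = os.open(path, os.O_RDWR)  # unrelated second descriptor
fcntl.lockf(fd1, fcntl.LOCK_EX)

locked_before = other_process_sees_lock(path)  # lock is visible
os.close(fd2)                                  # close the *other* fd...
locked_after = other_process_sees_lock(path)   # ...and the lock is gone

os.close(fd1)
os.unlink(path)
```

`locked_before` comes back True and `locked_after` False, which is exactly the trap for mailbox.py: closing any duplicate descriptor for the mbox silently releases the lock taken on fd1.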
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Mon Jan 22 00:34:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 15:34:30 -0800 Subject: [ python-Bugs-1641109 ] 2.3.6.4 Error in append and extend descriptions Message-ID: Bugs item #1641109, was opened at 2007-01-21 23:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: ilalopoulos (arafin) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.6.4 Error in append and extend descriptions Initial Comment: 2.3.6.4 Mutable Sequence Types (2.4.4 Python Doc) Error in the table describing append and extend operations for the list type. Specifically:

s.append(x)   same as s[len(s):len(s)] = [x]   (2)
s.extend(x)   same as s[len(s):len(s)] = x     (3)

should be:

s.append(x)   same as s[len(s):len(s)] = x     (2)
s.extend(x)   same as s[len(s):len(s)] = [x]   (3)

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470 From noreply at sourceforge.net Mon Jan 22 02:23:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 17:23:34 -0800 Subject: [ python-Bugs-1546442 ] subprocess.Popen can't read file object as stdin after seek Message-ID: Bugs item #1546442, was opened at 2006-08-25 15:52 Message generated for change (Comment added) made by ldeller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: GaryD (gazzadee) Assigned to: Peter Åstrand (astrand) Summary: subprocess.Popen can't read file object as stdin after seek Initial Comment: When I use an existing file object as stdin for a call to subprocess.Popen, then Popen cannot read the file if I have called seek on it more than once. e.g.
in the following python code:

>>> import subprocess
>>> rawfile = file('hello.txt', 'rb')
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> rawfile.readline()
'line 1\n'
>>> rawfile.seek(0)
>>> process_object = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

process_object.stdout now contains nothing, implying that nothing was on process_object.stdin. Note that this only applies for a non-trivial seek (i.e. where the file pointer actually changes). Calling seek(0) multiple times in a row does not change anything (obviously). I have not investigated whether this reveals a problem with seek not changing the underlying file descriptor, or a problem with Popen not handling the file descriptor properly. I have attached some complete python scripts that demonstrate this problem. One shows cat working after calling seek once, the other shows cat failing after calling seek twice. Python version being used: Python 2.4.2 (#1, Nov 3 2005, 12:41:57) [GCC 3.4.3-20050110 (Gentoo Linux 3.4.3.20050110, ssp-3.4.3.20050110-0, pie-8.7 on linux2 ---------------------------------------------------------------------- Comment By: lplatypus (ldeller) Date: 2007-01-22 12:23 Message: Logged In: YES user_id=1534394 Originator: NO Fair enough, that's probably cleaner and more efficient than playing games with fflush and lseek anyway. If file objects are not supported properly then maybe they shouldn't be accepted at all, forcing the application to call fileno() if that's what is wanted. That might break a lot of existing code though. Then again it may be beneficial to get everyone to review code which passes file objects to Popen in light of this behaviour. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-22 06:43 Message: Logged In: YES user_id=344921 Originator: NO It's not obvious that the subprocess module is doing anything wrong here.
Mixing streams and file descriptors is always problematic and should best be avoided (http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_232.html). However, the subprocess module *does* accept a file object (based on a libc stream), for convenience. For things to work correctly, the application and the subprocess module need to cooperate. I admit that the documentation needs improvement on this topic, though. It's quite easy to demonstrate the problem; you don't need to use seek at all. Here's a simple test case:

import subprocess
rawfile = file('hello.txt', 'rb')
rawfile.readline()
p = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print "File contents from Popen() call to cat:"
print p.stdout.read()
p.wait()

The descriptor offset is at the end, since the stream buffers. http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_233.html describes the need for "cleaning up" a stream, when you switch from stream functions to descriptor functions. This is described at http://ftp.gnu.org/gnu/Manuals/glibc-2.2.3/html_node/libc_235.html#SEC244. The documentation recommends the fclean() function, but it's only available on GNU systems and not in Python. As I understand it, fflush() works well for cleaning an output stream. For input streams, however, things are difficult. fflush() might work sometimes, but to be sure, you must set the file pointer as well. And, this does not work for files that are not random access, since there's no way to move the buffered data back to the operating system. So, since subprocess cannot reliably deal with this situation, I believe it shouldn't try. I think it makes more sense that the application prepares the file object for low-level operations. There are many other Python modules that use the .fileno() method, for example the select() module, and as far as I understand, this module doesn't try to clean streams or anything like that.
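The application-side preparation described in this comment can be sketched as follows (a hypothetical demonstration using a temporary file and the external `cat` command; it resynchronizes the OS-level descriptor offset with the stream's logical position before handing the file to Popen):

```python
import os
import subprocess
import tempfile

# Create a small file to read through a buffered file object.
with tempfile.NamedTemporaryFile("wb", delete=False) as f:
    f.write(b"line 1\nline 2\n")
    name = f.name

rawfile = open(name, "rb")
rawfile.readline()  # logical (stream) offset: 7, after "line 1\n"

# Buffering may have advanced the underlying descriptor well past the
# stream's logical position; resynchronize before passing it on:
os.lseek(rawfile.fileno(), rawfile.tell(), os.SEEK_SET)

p = subprocess.Popen(["cat"], stdin=rawfile, stdout=subprocess.PIPE)
out, _ = p.communicate()  # cat now sees only the unread remainder
rawfile.close()
os.unlink(name)
```

Without the `os.lseek` call, the buffered `readline()` typically leaves the descriptor at end-of-file, so `cat` would see nothing at all.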
To summarize: I'm leaning towards a documentation solution. ---------------------------------------------------------------------- Comment By: lplatypus (ldeller) Date: 2006-08-25 17:13 Message: Logged In: YES user_id=1534394 I found the cause of this bug: A libc FILE* (used by python file objects) may hold a different file offset than the underlying OS file descriptor. The posix version of Popen._get_handles does not take this into account, resulting in this bug. The following patch against svn trunk fixes the problem. I don't have permission to attach files to this item, so I'll have to paste the patch here:

Index: subprocess.py
===================================================================
--- subprocess.py	(revision 51581)
+++ subprocess.py	(working copy)
@@ -907,6 +907,12 @@
         else:
             # Assuming file-like object
             p2cread = stdin.fileno()
+            # OS file descriptor's file offset does not necessarily match
+            # the file offset in the file-like object, so do an lseek:
+            try:
+                os.lseek(p2cread, stdin.tell(), 0)
+            except OSError:
+                pass # file descriptor does not support seek/tell

         if stdout is None:
             pass
@@ -917,6 +923,12 @@
         else:
             # Assuming file-like object
             c2pwrite = stdout.fileno()
+            # OS file descriptor's file offset does not necessarily match
+            # the file offset in the file-like object, so do an lseek:
+            try:
+                os.lseek(c2pwrite, stdout.tell(), 0)
+            except OSError:
+                pass # file descriptor does not support seek/tell

         if stderr is None:
             pass
@@ -929,6 +941,12 @@
         else:
             # Assuming file-like object
             errwrite = stderr.fileno()
+            # OS file descriptor's file offset does not necessarily match
+            # the file offset in the file-like object, so do an lseek:
+            try:
+                os.lseek(errwrite, stderr.tell(), 0)
+            except OSError:
+                pass # file descriptor does not support seek/tell

         return (p2cread, p2cwrite, c2pread, c2pwrite,

---------------------------------------------------------------------- You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1546442&group_id=5470 From noreply at sourceforge.net Mon Jan 22 04:20:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 19:20:05 -0800 Subject: [ python-Bugs-654766 ] asyncore.py and "handle_expt" Message-ID: <200701220320.l0M3K5xd031803@sc8-sf-db2-new-b.sourceforge.net> Bugs item #654766, was opened at 2002-12-16 10:42 Message generated for change (Comment added) made by sf-robot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.2 >Status: Closed Resolution: Out of Date Priority: 5 Private: No Submitted By: Jesús Cea Avión (jcea) Assigned to: Josiah Carlson (josiahcarlson) Summary: asyncore.py and "handle_expt" Initial Comment: Python 2.2.2 here. Asyncore.py doesn't ever invoke "handle_expt" ("handle_expt" is documented in docs). Managing OOB data is essential to handle "connection refused" errors in Windows, for example. ---------------------------------------------------------------------- >Comment By: SourceForge Robot (sf-robot) Date: 2007-01-21 19:20 Message: Logged In: YES user_id=1312539 Originator: NO This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker). ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-06 22:18 Message: Logged In: YES user_id=341410 Originator: NO According to the most recent Python trunk, handle_expt() is called when an error is found within a .select() or .poll() call. Is this still an issue for you in Python 2.4 or Python 2.5?
Setting status as Pending, Out of Date as I believe this bug is fixed. ---------------------------------------------------------------------- Comment By: Alexey Klimkin (klimkin) Date: 2004-03-04 00:24 Message: Logged In: YES user_id=410460 Patch #909005 fixes the problem. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=654766&group_id=5470 From noreply at sourceforge.net Mon Jan 22 08:51:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 21 Jan 2007 23:51:20 -0800 Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions Message-ID: Bugs item #1579370, was opened at 2006-10-18 04:23 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. 
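The shape of the code described in the report above can be sketched in pure Python (much simplified, and harmless on current interpreters; it only illustrates how a caught-and-re-raised exception keeps a suspended generator's frame alive until the exception is dropped):

```python
import gc

finalized = []

def gen():
    try:
        while True:
            yield None
    finally:
        finalized.append(True)  # runs when the generator is finalized

def worker():
    g = gen()
    next(g)  # g now has a live, suspended frame
    try:
        raise RuntimeError("boom")
    except RuntimeError:
        # The exception's traceback references worker()'s frame, which
        # references g; re-raising hands that whole chain to the caller.
        raise

try:
    worker()
except RuntimeError:
    pass  # dropping the exception releases the frame, and with it g

gc.collect()  # force finalization, just in case
assert finalized == [True]
```

In the buggy scenario, the analogous cleanup ran after the creating thread's state had already been freed, which is what the stale f_tstate discussion below is about.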
---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-22 08:51 Message: Logged In: YES user_id=21627 Originator: NO I don't like mklaas' patch, since I think it is conceptually wrong to have PyTraceBack_Here() use the frame's thread state (mklaas describes it as dirty, and I agree). I'm proposing an alternative patch (tr.diff); please test this as well. File Added: tr.diff ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 08:01 Message: Logged In: YES user_id=33168 Originator: NO Bumping priority to see if this should go into 2.5.1. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 11:42 Message: Logged In: YES user_id=21627 Originator: NO Why do frame objects have a thread state in the first place? In particular, why does PyTraceBack_Here get the thread state from the frame, instead of using the current thread? Introduction of f_tstate goes back to r7882, but it is not clear why it was done that way. ---------------------------------------------------------------------- Comment By: Andrew Waters (awaters) Date: 2007-01-04 10:35 Message: Logged In: YES user_id=1418249 Originator: NO This fixes the segfault problem that I was able to reliably reproduce on Linux. We need to get this applied (assuming it is the correct fix) to the source to make Python 2.5 usable for me in production code.
---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-11-27 19:41 Message: Logged In: YES user_id=1611720 Originator: YES The following patch resets the thread state of the generator when it is resumed, which prevents the segfault for me:

Index: Objects/genobject.c
===================================================================
--- Objects/genobject.c	(revision 52849)
+++ Objects/genobject.c	(working copy)
@@ -77,6 +77,7 @@
 	Py_XINCREF(tstate->frame);
 	assert(f->f_back == NULL);
 	f->f_back = tstate->frame;
+	f->f_tstate = tstate;
 
 	gen->gi_running = 1;
 	result = PyEval_EvalFrameEx(f, exc);

---------------------------------------------------------------------- Comment By: Eric Noyau (eric_noyau) Date: 2006-11-27 19:07 Message: Logged In: YES user_id=1388768 Originator: NO We are experiencing the same segfault in our application, reliably. Running our unit test suite just segfaults every time on both Linux and Mac OS X. Applying Martin's patch fixes the segfault, and makes everything fine and dandy, at the cost of some memory leaks if I understand properly. This particular bug prevents us from upgrading to python 2.5 in production. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-28 07:18 Message: Logged In: YES user_id=31435 > I tried Tim's hope.py on Linux x86_64 and > Mac OS X 10.4 with debug builds and neither > one crashed. Tim's guess looks pretty damn > good too. Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory.
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-28 06:56 Message: Logged In: YES user_id=33168 Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too. Here's the result of valgrind: Invalid read of size 8 at 0x4CEBFE: PyTraceBack_Here (traceback.c:117) by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515) by 0x4F615D: gen_send_ex (genobject.c:82) by 0x4F6326: gen_close (genobject.c:128) by 0x4F645E: gen_del (genobject.c:163) by 0x4F5F00: gen_dealloc (genobject.c:31) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x44534E: dict_dealloc (dictobject.c:801) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x4664FF: subtype_dealloc (typeobject.c:686) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x42325D: instancemethod_dealloc (classobject.c:2287) Address 0x56550C0 is 88 bytes inside a block of size 152 free'd at 0x4A1A828: free (vg_replace_malloc.c:233) by 0x4C3899: tstate_delete_common (pystate.c:256) by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282) by 0x4D4043: t_bootstrap (threadmodule.c:448) by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so) The only way I can think to fix this is to keep a set of active generators in the PyThreadState and calling gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-10-19 09:58 Message: Logged In: YES user_id=6656 > and for some reason Python uses the system malloc directly > to obtain memory for thread states. This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-19 02:38 Message: Logged In: YES user_id=31435 I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints: exiting generator joined thread at most twice before crapping out. At the time, the `next` argument to newtracebackobject() is 0xdddddddd, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes. Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed. Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f_tstate is trash. Note that a PyThreadState (a frame's f_tstate) is /not/ a Python object -- it's just a raw C struct, and its lifetime isn't controlled by refcounts. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-19 02:12 Message: Logged In: YES user_id=1611720 Despite Tim's reassurance, I'm afraid that Martin's patch does in fact prevent the segfault. Sounds like it also introduces a memleak. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-18 23:57 Message: Logged In: YES user_id=31435 > Can anybody tell why gi_frame *isn't* incref'ed when > the generator is created? As documented (in concrete.tex), PyGen_New(f) steals a reference to the frame passed to it.
Its only call site (well, in the core) is in ceval.c, which returns immediately after PyGen_New takes over ownership of the frame the caller created:

"""
/* Create a new generator that owns the ready to run frame
 * and return that as the value. */
return PyGen_New(f);
"""

In short, that PyGen_New() doesn't incref the frame passed to it is intentional. It's possible that the intent is flawed ;-), but offhand I don't see how. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-10-18 23:05 Message: Logged In: YES user_id=21627 Can you please review/try the attached patch? Can anybody tell why gi_frame *isn't* incref'ed when the generator is created? ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 21:47 Message: Logged In: YES user_id=1611720 I cannot yet produce a Python-only script which reproduces the problem, but I can give an overview. There is a generator running in one thread, an exception being raised in another thread, and as a consequence, the generator in the first thread is garbage-collected (triggering an exception due to the new generator cleanup). The problem is extremely sensitive to timing--often the insertion/removal of print statements, or reordering the code, causes the problem to vanish, which is confounding my ability to create a simple test script.
def getdocs():
    def f():
        while True:
            f()
    yield None

# -----------------------------------------------------------------------------

class B(object):
    def __init__(self,):
        pass

    def doit(self):
        # must be an instance var to trigger segfault
        self.docIter = getdocs()
        print self.docIter # this is the generator referred-to in the traceback
        for i, item in enumerate(self.docIter):
            if i > 9:
                break
        print 'exiting generator'

class A(object):
    """ Process entry point / main thread """
    def __init__(self):
        while True:
            try:
                self.func()
            except Exception, e:
                print 'right after raise'

    def func(self):
        b = B()
        thread = threading.Thread(target=b.doit)
        thread.start()
        start_t = time.time()
        while True:
            try:
                if time.time() - start_t > 1:
                    raise Exception
            except Exception:
                print 'right before raise'
                # SIGSEGV here. If this is changed to
                # 'break', no segfault occurs
                raise

if __name__ == '__main__':
    A()

---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 21:37 Message: Logged In: YES user_id=1611720 I've produced a simplified traceback with a single generator. Note the frame being used in the traceback (#0) is the same frame being dealloc'd (#11).
The relevant call in traceback.c is: PyTraceBack_Here(PyFrameObject *frame) { PyThreadState *tstate = frame->f_tstate; PyTracebackObject *oldtb = (PyTracebackObject *) tstate->curexc_traceback; PyTracebackObject *tb = newtracebackobject(oldtb, frame); and I can verify that oldtb contains garbage: (gdb) print frame $1 = (PyFrameObject *) 0x8964d94 (gdb) print frame->f_tstate $2 = (PyThreadState *) 0x895b178 (gdb) print $2->curexc_traceback $3 = (PyObject *) 0x66 #0 0x080e4296 in PyTraceBack_Here (frame=0x8964d94) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x8964d94, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb7cca4ac, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb7cca4ac, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb7cca4ac) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb7cca4ac) at Objects/genobject.c:31 #6 0x080815b9 in dict_dealloc (mp=0xb7cc913c) at Objects/dictobject.c:801 #7 0x080927b2 in subtype_dealloc (self=0xb7cca76c) at Objects/typeobject.c:686 #8 0x0806028d in instancemethod_dealloc (im=0xb7d07f04) at Objects/classobject.c:2285 #9 0x080815b9 in dict_dealloc (mp=0xb7cc90b4) at Objects/dictobject.c:801 #10 0x080927b2 in subtype_dealloc (self=0xb7cca86c) at Objects/typeobject.c:686 #11 0x081028c5 in frame_dealloc (f=0x8964a94) at Objects/frameobject.c:416 #12 0x080e41b1 in tb_dealloc (tb=0xb7cc1fcc) at Python/traceback.c:34 #13 0x080e41c2 in tb_dealloc (tb=0xb7cc1f7c) at Python/traceback.c:33 #14 0x08080dca in insertdict (mp=0xb7f99824, key=0xb7ccd020, hash=1492466088, value=0xb7ccd054) at Objects/dictobject.c:394 #15 0x080811a4 in PyDict_SetItem (op=0xb7f99824, key=0xb7ccd020, value=0xb7ccd054) at Objects/dictobject.c:619 #16 0x08082dc6 in PyDict_SetItemString (v=0xb7f99824, key=0x8129284 "exc_traceback", item=0xb7ccd054) at Objects/dictobject.c:2103 #17 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", 
v=0xb7ccd054) at Python/sysmodule.c:82 #18 0x080bc9e5 in PyEval_EvalFrameEx (f=0x895f934, throwflag=0) at Python/ceval.c:2954 ---Type to continue, or q to quit--- #19 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f6ade8, globals=0xb7fafa44, locals=0x0, args=0xb7cc5ff8, argcount=1, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #20 0x08104083 in function_call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/funcobject.c:517 #21 0x0805a660 in PyObject_Call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/abstract.c:1860 ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 04:23 Message: Logged In: YES user_id=1611720 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1208400192 (LWP 26235)] 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 94 if ((next != NULL && !PyTraceBack_Check(next)) || (gdb) bt #0 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x9c2d7b4, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb64f880c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb64f880c, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb64f880c) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb64f880c) at Objects/genobject.c:31 #6 0x080b9912 in PyEval_EvalFrameEx (f=0x9c2802c, throwflag=1) at Python/ceval.c:2491 #7 0x08101a40 in gen_send_ex (gen=0xb64f362c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #8 0x08101c0f in gen_close (gen=0xb64f362c, args=0x0) at Objects/genobject.c:128 #9 0x08101cde in gen_del (self=0xb64f362c) at Objects/genobject.c:163 #10 0x0810195b in gen_dealloc (gen=0xb64f362c) at Objects/genobject.c:31 #11 0x080815b9 in dict_dealloc (mp=0xb64f4a44) at Objects/dictobject.c:801 #12 0x080927b2 in subtype_dealloc (self=0xb64f340c) at Objects/typeobject.c:686 
#13 0x0806028d in instancemethod_dealloc (im=0xb796a0cc) at Objects/classobject.c:2285 #14 0x080815b9 in dict_dealloc (mp=0xb64f78ac) at Objects/dictobject.c:801 #15 0x080927b2 in subtype_dealloc (self=0xb64f810c) at Objects/typeobject.c:686 #16 0x081028c5 in frame_dealloc (f=0x9c272bc) at Objects/frameobject.c:416 #17 0x080e41b1 in tb_dealloc (tb=0xb799166c) at Python/traceback.c:34 #18 0x080e41c2 in tb_dealloc (tb=0xb4071284) at Python/traceback.c:33 #19 0x080e41c2 in tb_dealloc (tb=0xb7991824) at Python/traceback.c:33 #20 0x08080dca in insertdict (mp=0xb7f56824, key=0xb3fb9930, hash=1492466088, value=0xb3fb9914) at Objects/dictobject.c:394 #21 0x080811a4 in PyDict_SetItem (op=0xb7f56824, key=0xb3fb9930, value=0xb3fb9914) at Objects/dictobject.c:619 #22 0x08082dc6 in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103 #23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82 #24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954 #25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833 #26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662 #27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517 #29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860 #30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846 #31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, 
kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833 #32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662 #33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652 #34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833 #35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662 #36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #37 0x080bd62a in PyEval_EvalFrameEx (f=0x9908584, throwflag=0) at Python/ceval.c:3662 #38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 ---Type to continue, or q to quit--- #39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494 #40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264 #41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available. 
) at Python/pythonrun.c:870 #42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496 #43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6 #44 0x08056051 in _start () ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 From noreply at sourceforge.net Mon Jan 22 09:06:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 00:06:47 -0800 Subject: [ python-Bugs-1483133 ] gen_iternext: Assertion `f->f_back != ((void *)0)' failed Message-ID: Bugs item #1483133, was opened at 2006-05-06 23:09 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1483133&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None >Priority: 5 Private: No Submitted By: svensoho (svensoho) Assigned to: Phillip J. Eby (pje) Summary: gen_iternext: Assertion `f->f_back != ((void *)0)' failed Initial Comment: Seems to be similar bug as http://sourceforge.net/tracker/index.php?func=detail&aid=1257960&group_id=5470&atid=105470 (fixed) Couldn't trigger with same script but with C application. Same source modification helps (at Objects/genobject.c:53) ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-22 09:06 Message: Logged In: YES user_id=21627 Originator: NO Python 2.4 is not actively maintained anymore. As this occurs in the debug build only, I recommend closing it as "won't fix". Just lowering the priority for now (svensoho, please don't change priorities).
---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-06-30 09:35 Message: Logged In: YES user_id=1518209 2.5 is already fixed: http://sourceforge.net/tracker/index.php?func=detail&aid=1257960&group_id=5470&atid=105470 2.4 has exactly the same problematic assertion; even the same modification helps. Fedora has fixed it in their distribution: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=192592 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-06-30 09:14 Message: Logged In: YES user_id=33168 Does this affect 2.5 or only 2.4? There were a fair amount of generator changes in 2.5. ---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-05-26 16:42 Message: Logged In: YES user_id=1518209 This bug is blocking development of PostgreSQL's Python-based stored procedure language -- PL/Python. See http://archives.postgresql.org/pgsql-patches/2006-04/msg00265.php ---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-05-15 10:26 Message: Logged In: YES user_id=1518209 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1483133&group_id=5470 From noreply at sourceforge.net Mon Jan 22 09:08:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 00:08:10 -0800 Subject: [ python-Bugs-978833 ] SSL-ed sockets don't close correct? 
Message-ID: Bugs item #978833, was opened at 2004-06-24 11:57 Message generated for change (Settings changed) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978833&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 8 Private: No Submitted By: kxroberto (kxroberto) >Assigned to: Martin v. Löwis (loewis) Summary: SSL-ed sockets don't close correct? Initial Comment: When testing FTP over SSL I have strong doubt that ssl-ed sockets are not closed correctly. (This doesn't show with https because nobody takes care about what's going on "after the party.") See the following : --- I need to run FTP over SSL from windows (not shitty sftp via ssh etc!) as explained on http://www.ford-hutchinson.com/~fh-1-pfh/ftps-ext.html (good variant 3: FTP_TLS ) I tried to learn from M2Crypto's ftpslib.py (uses OpenSSL - not Pythons SSL) and made a wrapper for ftplib.FTP using Pythons SSL. I wrap the cmd socket like:

    self.voidcmd('AUTH TLS')
    ssl = socket.ssl(self.sock, self.key_file, self.cert_file)
    import httplib
    self.sock = httplib.FakeSocket(self.sock, ssl)
    self.file = self.sock.makefile('rb')

Everything works ok if I don't SSL the data port connection, but only the cmd connection. If I SSL the data port connection too, it almost works, but ...

    self.voidcmd('PBSZ 0')
    self.voidcmd('PROT P')

wrap the data connection with SSL:

    ssl = socket.ssl(conn, self.key_file, self.cert_file)
    import httplib
    conn = httplib.FakeSocket(conn, ssl)

then in retrbinary it hangs endlessly in the last 'return self.voidresp()'. All data of the retrieved file is already correctly in my basket! The ftp server just won't send the final '226 Transfer complete.' on the cmd socket. Why? 
    def retrbinary(self, cmd, callback, blocksize=8192, rest=None):
        self.voidcmd('TYPE I')
        conn = self.transfercmd(cmd, rest)
        fp = conn.makefile('rb')
        while 1:
            #data = conn.recv(blocksize)
            data = fp.read() #blocksize)
            if not data:
                break
            callback(data)
        fp.close()
        conn.close()
        return self.voidresp()

what could be the reason? The server is a ProFTPD 1.2.9 Server. I debugged that the underlying (Shared)socket of the conn object is really closed. (If I simply omit the self.voidresp(), I have one file in the box, but subsequent ftp communication on that connection is not correct anymore.) ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2006-11-20 12:33 Message: Logged In: YES user_id=4771 Originator: NO Martin, I think the rev 50844 is wrong. To start with, it goes clearly against the documentation for makefile() which states that both the socket and the pseudo-file can be closed independently. What httplib.py was doing was correct. I could write a whole essay about the twisted history of socket.py. It would boil down to: in r43746, Georg removed a comment that was partially out-of-date, but that was essential in explaining why there was no self._sock.close() in _socketobject.close(): because the original purpose of _socketobject was to implement dup(), so that multiple _socketobjects could refer to the same underlying _sock. The latter would close automagically when its reference counter dropped to zero. (This means that your check-in also made dup() stop working on all platforms.) The real OP's problem is that the _ssl object keeps a reference to the underlying _sock as well, as kxroberto pointed out. We need somewhere code that closes the _ssl object... For reference, PyPy - which doesn't have strong refcounting guarantees - added the equivalent of an explicit usage counter in the C socket object, and socket.py calls methods on its underlying object to increment and decrement that counter. 
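[Editor's note: the usage-counter scheme described above, which httplib's SharedSocket implements in spirit, can be sketched in a few lines. The class and method names below are illustrative, not the stdlib's exact API:]

```python
class SharedSocket:
    """Close the wrapped socket only when the last user lets go.

    Each user (an ssl object, a makefile() pseudo-file, a dup()'ed
    wrapper) calls incref() when it takes the socket and decref() when
    it is done; only the final decref() really closes it.
    """
    def __init__(self, sock):
        self.sock = sock
        self._refcnt = 1

    def incref(self):
        self._refcnt += 1

    def decref(self):
        assert self._refcnt > 0
        self._refcnt -= 1
        if self._refcnt == 0:
            self.sock.close()


class _DummySock:
    """Stand-in for a real socket, so the sketch is self-contained."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True


raw = _DummySock()
shared = SharedSocket(raw)
shared.incref()       # a second user (e.g. a pseudo-file) appears
shared.decref()       # the first user is done; socket must stay open
assert not raw.closed
shared.decref()       # the last user is done; now it really closes
assert raw.closed
```

The point of making the counter explicit, rather than leaning on CPython's refcounting, is exactly the one made above: the close becomes deterministic on any implementation, PyPy included.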
It looks like a solution for CPython too - at least, relying on refcounting is bad, if only because - as you have just proved - it creates confusion. (Also, httplib/urllib have their own explicitly-refcounted wrappers...) ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-07-26 14:14 Message: Logged In: YES user_id=21627 This is now fixed in 50844. I won't backport it to 2.4, as it may cause working code to fail. For example, httplib would fail since it would already close the connection in getresponse, when the response object should still be able to read from the connection. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-07-03 14:03 Message: Logged In: YES user_id=21627 Can you please try the attached patch? It makes sure _socketobject.close really closes the socket, rather than relying on reference counting to close it. ---------------------------------------------------------------------- Comment By: kxroberto (kxroberto) Date: 2006-05-11 14:05 Message: Logged In: YES user_id=972995 Testing it with Python2.5a2, the problem is still there. Without the .shutdown(2) (or .shutdown(1)) patch to the httplib.SharedSocket (base for FakeSocket), the ftps example freezes on the cmd channel, because the SSL'ed data channel doesn't close/terminate --> FTPS server doesn't respond on the cmd channel. The ftps example is most specific to show the bug. Yet you can also easily blow up an HTTPS server with this decent test code that only opens (bigger!) files and closes without reading everything:

Python 2.5a2 (r25a2:45740, May 11 2006, 11:25:30)
[GCC 3.3.5 (Debian 1:3.3.5-13)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Robert's Interactive Python - TAB=complete import sys,os,re,string,time,glob,thread,pdb
>>> import urllib
>>> l=[]
>>> for i in range(10):
... 
...     f=urllib.urlopen('https://srv/big-Python-2.5a2.tgz')
...     f.close()
...     l.append(f)
...
>>>

=> In the (apache) server's ssl_engine_log you can see that connections remain open (until apache times out after 2 minutes) and lots of extra apache daemons are started! => f.close() doesn't really close the connection (until it is __del__'ed). Trying around, I found that the original undeleted f.fp._ssl is most probably the cause and holds the real socket open. An f.fp._sock.close() doesn't close it either - only when del f.fp._ssl is done. (Only an f.fp._sock._sock.close() would force the close.) The original fp is held in closures of .readline(s)/__iter__/next... -- I now tried an alternative patch (instead of the shutdown(2) patch), which also so far seems to cure everything. Maybe that's the right solution for the bug:

--- httplib.py.orig     2006-05-11 11:25:32.000000000 +0200
+++ httplib.py  2006-05-11 13:45:07.000000000 +0200
@@ -970,6 +970,7 @@
         self._shared.decref()
         self._closed = 1
         self._shared = None
+        self._ssl = None

 class SSLFile(SharedSocketClient):
     """File-like object wrapping an SSL socket."""
@@ -1085,6 +1086,7 @@
     def close(self):
         SharedSocketClient.close(self)
         self._sock = self.__class__._closedsocket()
+        self._ssl = None

     def makefile(self, mode, bufsize=None):
         if mode != 'r' and mode != 'rb':

--------------

In another application with SSL'ed SMTP connections there arose similar problems. 
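[Editor's note: the patches in this thread all exploit the same mechanism - CPython closes the C-level socket only when the last Python reference to it disappears, so a lingering `_ssl` attribute keeps the connection open. A stand-in object (a plain class, not a real socket) demonstrates that reference-driven cleanup:]

```python
import gc
import weakref

class Resource(object):
    """Stand-in for the C-level socket object."""

sock = Resource()
ssl_ref = sock              # plays the role of the lingering _ssl reference
probe = weakref.ref(sock)   # lets us observe when the object dies

del sock                    # the 'obvious' owner drops its reference...
gc.collect()
assert probe() is not None  # ...but the object survives via ssl_ref

ssl_ref = None              # the equivalent of 'self._ssl = None'
gc.collect()
assert probe() is None      # now the object is actually reclaimed
```

This is why setting `_ssl` (or `sslobj`) to `None` in `close()` makes the close take effect immediately instead of whenever the garbage collector gets around to it.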
I also worked around the problem in smtplib.py so far in a similar style:

    def close(self):
        self.realsock.shutdown(2)
        self.realsock.close()

Yet the right patch is maybe (not tested extensively so far):

--- Lib-orig/smtplib.py 2006-05-03 13:10:40.000000000 +0200
+++ Lib/smtplib.py      2006-05-11 13:50:12.000000000 +0200
@@ -142,6 +142,7 @@
     sendall = send

     def close(self):
+        self.sslobj = None
         self.realsock.close()

 class SSLFakeFile:
@@ -161,7 +162,7 @@
         return str

     def close(self):
-        pass
+        self.sslobj = None

 def quoteaddr(addr):
     """Quote a subset of the email addresses defined by RFC 821.

------------------

-robert ---------------------------------------------------------------------- Comment By: kxroberto (kxroberto) Date: 2005-09-24 21:45 Message: Logged In: YES user_id=972995 Now I managed to solve the problem for me with the attached patch of httplib.py: an explicit shutdown ( 2 or 1 ) of the (faked) ssl'ed socket solves it. I still guess the ssl'ed socket (ssl dll) should do that auto on sock.close() Thus I leave it as a bug HERE. Right? [ I also have the hope that this also solves the ssl-eof-errors with https (of some of my users; will see this in future)

*** \usr\src\py24old/httplib.py Sat Sep 24 21:35:28 2005
--- httplib.py  Sat Sep 24 21:37:48 2005
*************** class SharedSocket:
*** 899,904 ****
--- 899,905 ----
          self._refcnt -= 1
          assert self._refcnt >= 0
          if self._refcnt == 0:
+             self.sock.shutdown(2)
              self.sock.close()

      def __del__(self):

---------------------------------------------------------------------- Comment By: kxroberto (kxroberto) Date: 2005-09-24 21:06 Message: Logged In: YES user_id=972995 I retried that again with py2.4.1. The problem/bug is still there. In attachment I supplied a full FTPS client test_ftpslib.py with a simple test function - ready to run into the problem: At the end of ftp.retrlines 'return self.voidresp()' freezes : waiting eternally for response bytes at the command port. 
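[Editor's note: the effect of the shutdown-before-close workaround can be seen in isolation with a local socket pair and no FTP server (modern Python syntax). shutdown() sends the end-of-stream notification that the peer's blocking read is waiting for - which is exactly what the frozen voidresp() above never receives:]

```python
import socket

a, b = socket.socketpair()
a.sendall(b'226 Transfer complete.')
a.shutdown(socket.SHUT_WR)      # announce "no more data" to the peer

received = b''
while True:
    chunk = b.recv(1024)
    if not chunk:               # EOF arrives because of the shutdown;
        break                   # without it, recv() could block forever
    received += chunk

assert received == b'226 Transfer complete.'
a.close()
b.close()
```

Merely dropping the last reference to `a` would eventually close the descriptor too, but only whenever finalization happens to run; `shutdown()` makes the EOF deterministic, which is what the FTPS data channel needed.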
the same at the end of .storelines after the data is transferred on the data port. My preliminary guess is still that Python's ssl has a severe (EOF?) problem closing a socket/connection correctly. Obviously this problem doesn't show up with https because everything is done on one connection - no dependency on a correct socket/connect close signal. (From other https communication with some proxies in between, my users get ssl-eof-error errors also. I still can't solve that bug either. This shows Python's ssl has a severe EOF bug with complex https also - or cannot handle certain situations correctly.) I learned the FTPS/TLS client from M2Crypto's ftpslib. The only difference in (transposed) logic that I can see is that M2Crypto uses an additional line "conn.set_session(self.sock.get_session())" during setup of the data port ssl connection. I don't know what it is; Python's ssl doesn't have such "session" functions, but I think it has no severe meaning. Robert ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-12-22 06:14 Message: Logged In: YES user_id=357491 Since I believe this was fixed with the patch Tino mentions and Roberto has not replied, I am closing as fixed. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2004-08-17 01:18 Message: Logged In: YES user_id=357491 Is this still a problem for you, Roberto, with Python 2.4a2? ---------------------------------------------------------------------- Comment By: Tino Lange (tinolange) Date: 2004-07-11 00:30 Message: Logged In: YES user_id=212920 Hi Roberto! Today a patch for _ssl.c was checked in (see #945642) that might solve your problem, too. Could you please grab the *next* alpha (this will be Python 2.4 Alpha 2) and test and report afterwards if it is solved? Thanks for your help! 
Tino ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=978833&group_id=5470 From noreply at sourceforge.net Mon Jan 22 09:20:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 00:20:03 -0800 Subject: [ python-Bugs-1641109 ] 2.3.6.4 Error in append and extend descriptions Message-ID: Bugs item #1641109, was opened at 2007-01-21 23:34 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: ilalopoulos (arafin) Assigned to: Nobody/Anonymous (nobody) Summary: 2.3.6.4 Error in append and extend descriptions Initial Comment: 2.3.6.4 Mutable Sequence Types (2.4.4 Python Doc) Error in the table describing append and extend operations for the list type. Specifically:

    s.append(x)    same as s[len(s):len(s)] = [x]    (2)
    s.extend(x)    same as s[len(s):len(s)] = x      (3)

should be:

    s.append(x)    same as s[len(s):len(s)] = x      (2)
    s.extend(x)    same as s[len(s):len(s)] = [x]    (3)

---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-22 08:20 Message: Logged In: YES user_id=849994 Originator: NO Have you tried the original code and your corrections? If you do, you'll find that the original is correct. (In extend, x is already a sequence, so you mustn't wrap it in a list. In append, you want only one element added, so you wrap x in a list.) 
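[Editor's note: the equivalences under discussion are easy to verify directly, and doing so confirms Georg's point that the documented table is correct:]

```python
s = [1, 2]
s.append([3, 4])                # append adds exactly ONE element
t = [1, 2]
t[len(t):len(t)] = [[3, 4]]     # so x must be wrapped in a list here
assert s == t == [1, 2, [3, 4]]

u = [1, 2]
u.extend([3, 4])                # extend adds each element of x
v = [1, 2]
v[len(v):len(v)] = [3, 4]       # x is already a sequence; no wrapping
assert u == v == [1, 2, 3, 4]
```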
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1641109&group_id=5470 From noreply at sourceforge.net Mon Jan 22 09:46:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 00:46:33 -0800 Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions Message-ID: Bugs item #1579370, was opened at 2006-10-18 02:23 Message generated for change (Comment added) made by awaters You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. ---------------------------------------------------------------------- Comment By: Andrew Waters (awaters) Date: 2007-01-22 08:46 Message: Logged In: YES user_id=1418249 Originator: NO A quick test on code that always segfaulted with unpatched Python 2.5 seems to work. Needs more extensive testing... 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-22 07:51 Message: Logged In: YES user_id=21627 Originator: NO I don't like mklaas' patch, since I think it is conceptually wrong to have PyTraceBack_Here() use the frame's thread state (mklaas describes it as dirty, and I agree). I'm proposing an alternative patch (tr.diff); please test this as well. File Added: tr.diff ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 07:01 Message: Logged In: YES user_id=33168 Originator: NO Bumping priority to see if this should go into 2.5.1. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 10:42 Message: Logged In: YES user_id=21627 Originator: NO Why do frame objects have a thread state in the first place? In particular, why does PyTraceBack_Here get the thread state from the frame, instead of using the current thread? Introduction of f_tstate goes back to r7882, but it is not clear why it was done that way. ---------------------------------------------------------------------- Comment By: Andrew Waters (awaters) Date: 2007-01-04 09:35 Message: Logged In: YES user_id=1418249 Originator: NO This fixes the segfault problem that I was able to reliably reproduce on Linux. We need to get this applied (assuming it is the correct fix) to the source to make Python 2.5 usable for me in production code. 
---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-11-27 18:41 Message: Logged In: YES user_id=1611720 Originator: YES The following patch resets the thread state of the generator when it is resumed, which prevents the segfault for me:

Index: Objects/genobject.c
===================================================================
--- Objects/genobject.c (revision 52849)
+++ Objects/genobject.c (working copy)
@@ -77,6 +77,7 @@
        Py_XINCREF(tstate->frame);
        assert(f->f_back == NULL);
        f->f_back = tstate->frame;
+       f->f_tstate = tstate;

        gen->gi_running = 1;
        result = PyEval_EvalFrameEx(f, exc);

---------------------------------------------------------------------- Comment By: Eric Noyau (eric_noyau) Date: 2006-11-27 18:07 Message: Logged In: YES user_id=1388768 Originator: NO We are experiencing the same segfault in our application, reliably. Running our unit test suite just segfaults every time on both Linux and Mac OS X. Applying Martin's patch fixes the segfault, and makes everything fine and dandy, at the cost of some memory leaks if I understand properly. This particular bug prevents us from upgrading to python 2.5 in production. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-28 05:18 Message: Logged In: YES user_id=31435 > I tried Tim's hope.py on Linux x86_64 and > Mac OS X 10.4 with debug builds and neither > one crashed. Tim's guess looks pretty damn > good too. Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory. 
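[Editor's note: the lifecycle Tim describes - a generator created in a worker thread but finalized after that thread has exited - can be written down in a few lines of pure Python. On current CPython releases this runs cleanly; on 2.5 the dead thread's state, still reachable through the generator frame's f_tstate, is what the tracebacks in this thread are hitting:]

```python
import threading

def gen():
    try:
        while True:
            yield None
    finally:
        pass  # cleanup runs in whichever thread finalizes the generator

holder = {}

def worker():
    holder['g'] = gen()
    next(holder['g'])   # start the generator inside the worker thread

t = threading.Thread(target=worker)
t.start()
t.join()                # the creating thread is now gone...

holder['g'].close()     # ...but the generator is finalized afterwards
```

After `close()` the generator drops its frame, which in 2.5 was the frame whose `f_tstate` pointed at freed memory - hence the 0xdd bytes in Tim's debug-build analysis.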
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-28 04:56 Message: Logged In: YES user_id=33168 Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too. Here's the result of valgrind: Invalid read of size 8 at 0x4CEBFE: PyTraceBack_Here (traceback.c:117) by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515) by 0x4F615D: gen_send_ex (genobject.c:82) by 0x4F6326: gen_close (genobject.c:128) by 0x4F645E: gen_del (genobject.c:163) by 0x4F5F00: gen_dealloc (genobject.c:31) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x44534E: dict_dealloc (dictobject.c:801) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x4664FF: subtype_dealloc (typeobject.c:686) by 0x44D207: _Py_Dealloc (object.c:1928) by 0x42325D: instancemethod_dealloc (classobject.c:2287) Address 0x56550C0 is 88 bytes inside a block of size 152 free'd at 0x4A1A828: free (vg_replace_malloc.c:233) by 0x4C3899: tstate_delete_common (pystate.c:256) by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282) by 0x4D4043: t_bootstrap (threadmodule.c:448) by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so) The only way I can think to fix this is to keep a set of active generators in the PyThreadState and calling gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-10-19 07:58 Message: Logged In: YES user_id=6656 > and for some reason Python uses the system malloc directly > to obtain memory for thread states. This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-19 00:38 Message: Logged In: YES user_id=31435 I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints:

    exiting generator
    joined thread

at most twice before crapping out. At the time, the `next` argument to newtracebackobject() is 0xdddddddd, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes. Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed. Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f_tstate is trash. Note that a PyThreadState (a frame's f_tstate) is /not/ a Python object -- it's just a raw C struct, and its lifetime isn't controlled by refcounts. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-19 00:12 Message: Logged In: YES user_id=1611720 Despite Tim's reassurance, I'm afraid that Martin's patch does in fact prevent the segfault. Sounds like it also introduces a memleak. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-18 21:57 Message: Logged In: YES user_id=31435 > Can anybody tell why gi_frame *isn't* incref'ed when > the generator is created? As documented (in concrete.tex), PyGen_New(f) steals a reference to the frame passed to it. 
Its only call site (well, in the core) is in ceval.c, which returns immediately after PyGen_New takes over ownership of the frame the caller created:

    """
    /* Create a new generator that owns the ready to run frame
     * and return that as the value. */
    return PyGen_New(f);
    """

In short, that PyGen_New() doesn't incref the frame passed to it is intentional. It's possible that the intent is flawed ;-), but offhand I don't see how. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-10-18 21:05 Message: Logged In: YES user_id=21627 Can you please review/try attached patch? Can anybody tell why gi_frame *isn't* incref'ed when the generator is created? ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 19:47 Message: Logged In: YES user_id=1611720 I cannot yet produce an only-python script which reproduces the problem, but I can give an overview. There is a generator running in one thread, an exception being raised in another thread, and as a consequence, the generator in the first thread is garbage-collected (triggering an exception due to the new generator cleanup). The problem is extremely sensitive to timing--often the insertion/removal of print statements, or reordering the code, causes the problem to vanish, which is confounding my ability to create a simple test script. 
def getdocs():
    def f():
        while True:
            f()
    yield None

# -----------------------------------------------------------------------------

class B(object):
    def __init__(self,):
        pass

    def doit(self):
        # must be an instance var to trigger segfault
        self.docIter = getdocs()
        print self.docIter  # this is the generator referred-to in the traceback
        for i, item in enumerate(self.docIter):
            if i > 9:
                break
        print 'exiting generator'

class A(object):
    """ Process entry point / main thread """
    def __init__(self):
        while True:
            try:
                self.func()
            except Exception, e:
                print 'right after raise'

    def func(self):
        b = B()
        thread = threading.Thread(target=b.doit)
        thread.start()
        start_t = time.time()
        while True:
            try:
                if time.time() - start_t > 1:
                    raise Exception
            except Exception:
                print 'right before raise'
                # SIGSEGV here. If this is changed to
                # 'break', no segfault occurs
                raise

if __name__ == '__main__':
    A()

---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 19:37 Message: Logged In: YES user_id=1611720 I've produced a simplified traceback with a single generator. Note the frame being used in the traceback (#0) is the same frame being dealloc'd (#11). 
The relevant call in traceback.c is: PyTraceBack_Here(PyFrameObject *frame) { PyThreadState *tstate = frame->f_tstate; PyTracebackObject *oldtb = (PyTracebackObject *) tstate->curexc_traceback; PyTracebackObject *tb = newtracebackobject(oldtb, frame); and I can verify that oldtb contains garbage: (gdb) print frame $1 = (PyFrameObject *) 0x8964d94 (gdb) print frame->f_tstate $2 = (PyThreadState *) 0x895b178 (gdb) print $2->curexc_traceback $3 = (PyObject *) 0x66 #0 0x080e4296 in PyTraceBack_Here (frame=0x8964d94) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x8964d94, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb7cca4ac, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb7cca4ac, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb7cca4ac) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb7cca4ac) at Objects/genobject.c:31 #6 0x080815b9 in dict_dealloc (mp=0xb7cc913c) at Objects/dictobject.c:801 #7 0x080927b2 in subtype_dealloc (self=0xb7cca76c) at Objects/typeobject.c:686 #8 0x0806028d in instancemethod_dealloc (im=0xb7d07f04) at Objects/classobject.c:2285 #9 0x080815b9 in dict_dealloc (mp=0xb7cc90b4) at Objects/dictobject.c:801 #10 0x080927b2 in subtype_dealloc (self=0xb7cca86c) at Objects/typeobject.c:686 #11 0x081028c5 in frame_dealloc (f=0x8964a94) at Objects/frameobject.c:416 #12 0x080e41b1 in tb_dealloc (tb=0xb7cc1fcc) at Python/traceback.c:34 #13 0x080e41c2 in tb_dealloc (tb=0xb7cc1f7c) at Python/traceback.c:33 #14 0x08080dca in insertdict (mp=0xb7f99824, key=0xb7ccd020, hash=1492466088, value=0xb7ccd054) at Objects/dictobject.c:394 #15 0x080811a4 in PyDict_SetItem (op=0xb7f99824, key=0xb7ccd020, value=0xb7ccd054) at Objects/dictobject.c:619 #16 0x08082dc6 in PyDict_SetItemString (v=0xb7f99824, key=0x8129284 "exc_traceback", item=0xb7ccd054) at Objects/dictobject.c:2103 #17 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", 
v=0xb7ccd054) at Python/sysmodule.c:82 #18 0x080bc9e5 in PyEval_EvalFrameEx (f=0x895f934, throwflag=0) at Python/ceval.c:2954 ---Type to continue, or q to quit--- #19 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f6ade8, globals=0xb7fafa44, locals=0x0, args=0xb7cc5ff8, argcount=1, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #20 0x08104083 in function_call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/funcobject.c:517 #21 0x0805a660 in PyObject_Call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/abstract.c:1860 ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 02:23 Message: Logged In: YES user_id=1611720 Program received signal SIGSEGV, Segmentation fault. [Switching to Thread -1208400192 (LWP 26235)] 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 94 if ((next != NULL && !PyTraceBack_Check(next)) || (gdb) bt #0 0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94 #1 0x080b9ab7 in PyEval_EvalFrameEx (f=0x9c2d7b4, throwflag=1) at Python/ceval.c:2459 #2 0x08101a40 in gen_send_ex (gen=0xb64f880c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #3 0x08101c0f in gen_close (gen=0xb64f880c, args=0x0) at Objects/genobject.c:128 #4 0x08101cde in gen_del (self=0xb64f880c) at Objects/genobject.c:163 #5 0x0810195b in gen_dealloc (gen=0xb64f880c) at Objects/genobject.c:31 #6 0x080b9912 in PyEval_EvalFrameEx (f=0x9c2802c, throwflag=1) at Python/ceval.c:2491 #7 0x08101a40 in gen_send_ex (gen=0xb64f362c, arg=0x81333e0, exc=1) at Objects/genobject.c:82 #8 0x08101c0f in gen_close (gen=0xb64f362c, args=0x0) at Objects/genobject.c:128 #9 0x08101cde in gen_del (self=0xb64f362c) at Objects/genobject.c:163 #10 0x0810195b in gen_dealloc (gen=0xb64f362c) at Objects/genobject.c:31 #11 0x080815b9 in dict_dealloc (mp=0xb64f4a44) at Objects/dictobject.c:801 #12 0x080927b2 in subtype_dealloc (self=0xb64f340c) at Objects/typeobject.c:686 
#13 0x0806028d in instancemethod_dealloc (im=0xb796a0cc) at Objects/classobject.c:2285 #14 0x080815b9 in dict_dealloc (mp=0xb64f78ac) at Objects/dictobject.c:801 #15 0x080927b2 in subtype_dealloc (self=0xb64f810c) at Objects/typeobject.c:686 #16 0x081028c5 in frame_dealloc (f=0x9c272bc) at Objects/frameobject.c:416 #17 0x080e41b1 in tb_dealloc (tb=0xb799166c) at Python/traceback.c:34 #18 0x080e41c2 in tb_dealloc (tb=0xb4071284) at Python/traceback.c:33 #19 0x080e41c2 in tb_dealloc (tb=0xb7991824) at Python/traceback.c:33 #20 0x08080dca in insertdict (mp=0xb7f56824, key=0xb3fb9930, hash=1492466088, value=0xb3fb9914) at Objects/dictobject.c:394 #21 0x080811a4 in PyDict_SetItem (op=0xb7f56824, key=0xb3fb9930, value=0xb3fb9914) at Objects/dictobject.c:619 #22 0x08082dc6 in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103 #23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82 #24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954 #25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833 #26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662 #27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833 #28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517 #29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860 #30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846 #31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, 
kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833
#32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662
#33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652
#34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833
#35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662
#36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#37 0x080bd62a in PyEval_EvalFrameEx (f=0x9908584, throwflag=0) at Python/ceval.c:3662
#38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
---Type <return> to continue, or q <return> to quit---
#39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494
#40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264
#41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available.
) at Python/pythonrun.c:870
#42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496
#43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6
#44 0x08056051 in _start ()
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 From noreply at sourceforge.net Mon Jan 22 13:13:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 04:13:49 -0800 Subject: [ python-Bugs-1568240 ] Tix is not included in 2.5 for Windows Message-ID: Bugs item #1568240, was opened at 2006-09-30 12:19 Message generated for change (Comment added) made by tzot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: Christos Georgiou (tzot) Assigned to: Martin v. Löwis (loewis) Summary: Tix is not included in 2.5 for Windows Initial Comment: (I hope "Build" is more precise than "Extension Modules" and "Tkinter" for this specific bug.) At least the following files are missing from 2.5 for Windows: DLLs\tix8184.dll tcl\tix8184.lib tcl\tix8.1\* ---------------------------------------------------------------------- >Comment By: Christos Georgiou (tzot) Date: 2007-01-22 14:13 Message: Logged In: YES user_id=539787 Originator: YES For me, yes, x86 is sufficient. Hopefully for others too. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 15:16 Message: Logged In: YES user_id=21627 Originator: NO It seems that I can provide Tix binaries only for x86, not for AMD64 or Itanium. Is that sufficient?
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-03 16:59 Message: Logged In: YES user_id=21627 Originator: NO Ah, ok. No, assigning this report to Neal or bumping its priority should not be done. ---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2007-01-02 12:22 Message: Logged In: YES user_id=539787 Originator: YES Neal's message is this: http://mail.python.org/pipermail/python-dev/2006-December/070406.html and it refers to the 2.5.1 release, not prior to it. As you see, I refrained from both increasing the priority and assigning it to Neal, and actually just added a comment to the case with a related question, since I know you are the one responsible for the windows build and you already had assigned the bug to you. My adding this comment to the bug was nothing more or less than the action that felt appropriate, and still does feel appropriate to me (ie I didn't overstep any limits). The "we" was just all parties interested, and in this case, the ones I know are at least you (responsible for the windows build) and I (a user of Tix on windows). Happy new year, Martin! ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-12-30 00:26 Message: Logged In: YES user_id=21627 Originator: NO I haven't read Neal's message yet, but I wonder what he could do about it. I plan to fix this with 2.5.1, there is absolutely no way to fix this earlier. I'm not sure who "we" is who would like to bump the bug, and what precisely this bumping would do; tzot, please refrain from changing the priority to higher than 7. These priorities are reserved to the release manager.
---------------------------------------------------------------------- Comment By: Christos Georgiou (tzot) Date: 2006-12-27 19:46 Message: Logged In: YES user_id=539787 Originator: YES Should we bump the bug up and/or assign it to Neal Norwitz as he requested on Python-Dev? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568240&group_id=5470 From noreply at sourceforge.net Mon Jan 22 16:46:54 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 07:46:54 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None >Priority: 7 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. 
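The failure mode just described, and the copy-back fix proposed below, can both be seen in a few lines. This is a self-contained sketch using plain files, not the actual mailbox.py code; copy_back_flush is a hypothetical stand-in for the patched flush():

```python
import os, tempfile

workdir = tempfile.mkdtemp()
mbox = os.path.join(workdir, "mbox")
with open(mbox, "w") as f:
    f.write("From old\n")

# A deliverer (e.g. Postfix) already has the mailbox open, blocked on fcntl().
delivery_fd = os.open(mbox, os.O_WRONLY | os.O_APPEND)

# Current flush(): write a temporary file and rename it over the original.
tmp = mbox + ".tmp"
with open(tmp, "w") as f:
    f.write("From rewritten\n")
os.rename(tmp, mbox)

# The deliverer's descriptor now refers to the unlinked old file: its
# write succeeds, but into an inode no directory entry points at.
os.write(delivery_fd, b"From delivered\n")
links_left = os.fstat(delivery_fd).st_nlink    # 0: the old file is deleted
os.close(delivery_fd)
lost = "From delivered" not in open(mbox).read()

# Proposed fix: copy the new contents back over the original file and
# truncate, so the inode (and any other process's descriptor) survives.
def copy_back_flush(path, new_contents):
    with open(path, "r+b") as f:
        f.write(new_contents)
        f.truncate(len(new_contents))  # shrink if the mailbox got smaller

inode_before = os.stat(mbox).st_ino
copy_back_flush(mbox, b"From rewritten again\n")
same_file = os.stat(mbox).st_ino == inode_before   # True: no replacement
```

With the rename, the delivered message vanishes (links_left is 0 on POSIX); with copy-back, the file keeps its inode, so a writer blocked on the lock appends to the live mailbox once the lock is released.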
I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 10:46 Message: Logged In: YES user_id=11375 Originator: NO This would be an API change, and therefore out-of-bounds for 2.5. I suggest giving up on this for 2.5.1 and only fixing it in 2.6. I'll add another warning to the docs, and maybe to the module as well. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-21 17:10 Message: Logged In: YES user_id=1504904 Originator: YES Hold on, I have a plan. If _toc is only regenerated on locking, or at the end of a flush(), then the only way self._pending can be set at that time is if the application has made modifications before calling lock(). If we make that an exception-raising offence, then we can assume that self._toc is a faithful representation of the last known contents of the file. That means we can preserve the existing message keys on a reread without any of that _user_toc nonsense. Diff attached, to apply on top of mailbox-unified2. It's probably had even less review and testing than the previous version, but it appears to pass all the regression tests and doesn't change any existing semantics. 
File Added: mailbox-update-toc-new.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-20 22:16 Message: Logged In: YES user_id=11375 Originator: NO I'm starting to lose track of all the variations on the bug. Maybe we should just add more warnings to the documentation about locking the mailbox when modifying it and not try to fix this at all. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-20 13:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 10:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s re-reading, that means someone else has already modified the mailbox.
Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-17 16:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. 
In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? 
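The discipline the documentation patch pushes — bracket all modifications with lock()/unlock(), flush before unlocking, and never carry keys across an unlocked period — looks like this with the stdlib mailbox module (the mailbox path and subjects here are illustrative):

```python
import mailbox, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "testbox")  # stand-in for a real mbox
mbox = mailbox.mbox(path)
mbox.add("Subject: keep\n\nhello\n")
mbox.add("Subject: spam\n\nbuy now\n")
mbox.flush()

mbox.lock()                       # dot-lock plus fcntl() lock
try:
    for key in mbox.keys():       # keys are only trustworthy while locked
        if mbox[key]["Subject"] == "spam":
            mbox.remove(key)
    mbox.flush()                  # write changes out while still locked
finally:
    mbox.unlock()
```

Removing via a key obtained before lock() was called is exactly the pattern the thread identifies as unsafe.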
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right?
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
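The "regenerate _toc on every lock()" idea can be illustrated with a toy single-file box. This is a hypothetical class sketching the cache-invalidation logic, not the real _singlefileMailbox:

```python
import os, tempfile

class ToyBox:
    """Caches message positions the way _singlefileMailbox caches _toc.
    The cache is dropped on lock(), forcing a re-read of the file, so
    changes made by other processes while we were unlocked are seen."""

    def __init__(self, path):
        self.path = path
        self._toc = None              # key -> message line, built lazily

    def lock(self):
        # ... fcntl/dot locking would happen here ...
        self._toc = None              # offsets may be stale: force a re-read

    def _lookup(self):
        if self._toc is None:
            with open(self.path) as f:
                self._toc = dict(enumerate(f.read().splitlines()))
        return self._toc

    def keys(self):
        return sorted(self._lookup())

path = os.path.join(tempfile.mkdtemp(), "box")
with open(path, "w") as f:
    f.write("msg0\nmsg1\n")

box = ToyBox(path)
stale = box.keys()                    # [0, 1], now cached
with open(path, "a") as f:            # another process appends a message
    f.write("msg2\n")
unchanged = box.keys()                # still [0, 1]: the cache is stale
box.lock()
fresh = box.keys()                    # [0, 1, 2]: lock() re-read the file
```

The price is the redundant re-scan on each lock() that the comment above mentions; the benefit is that file offsets can never silently go stale across an unlocked period.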
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here.
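Underlying this exchange is the fact that fcntl() locks only protect cooperating processes — they are purely advisory, as a quick POSIX-only check shows (within one process POSIX record locks don't even conflict, but the point here is that an unlocked write is never blocked):

```python
import fcntl, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "mbox")
open(path, "w").close()

holder = open(path, "r+")
fcntl.lockf(holder, fcntl.LOCK_EX)        # exclusive advisory lock

# A writer that never calls lockf() is not stopped by the lock:
rogue = open(path, "a")
rogue.write("From rogue\n")               # succeeds despite the lock
rogue.close()

fcntl.lockf(holder, fcntl.LOCK_UN)
holder.close()
```

This is why David's point stands: a library that takes fcntl locks but then swaps the locked file out from under them defeats its own protection for any peer that does cooperate.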
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Mon Jan 22 17:09:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 08:09:34 -0800 Subject: [ python-Feature Requests-1567331 ] logging.RotatingFileHandler has no "infinite" backupCount Message-ID: Feature Requests item #1567331, was opened at 2006-09-28 21:36 Message generated for change (Comment added) made by vsajip You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Vinay Sajip (vsajip) Summary: logging.RotatingFileHandler has no "infinite" backupCount Initial Comment: It seems to me that logging.RotatingFileHandler should have a way to spell "never delete old log files". This is useful in situations where you want an external process (manual or automatic) make decisions about deleting log files. ---------------------------------------------------------------------- >Comment By: Vinay Sajip (vsajip) Date: 2007-01-22 16:09 Message: Logged In: YES user_id=308438 Originator: NO Josiah - OK...suppose I use your semantics: Create log ... at rollover, log -> log.1, create log anew ... at rollover, log -> log.2, create log anew ... at rollover, log -> log.3, and the user has set a backup count of 3 so I can't do log -> log.4 - then I still need to rename files, it seems to me. 
If I don't, and say reuse log.1, then the user gets an unintuitive ordering where log.1 is newer than log.3 sometimes, but not at other times - so your approach would *only* be beneficial where the backup count was infinite. For such scenarios, I think it's better to either use e.g. logrotate and WatchedFileHandler, or create a new class based on RotatingFileHandler to do what you want. Providing support for "infinite" log files is not a common enough use case, IMO, to justify support in the core package. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 18:39 Message: Logged In: YES user_id=341410 Originator: NO What about an optional different semantic for log renaming? Rather than log -> log.1, log -> log.+1, so if you have log, log.1, log.2; log -> log.3 and log gets created anew. I've used a similar semantic in other logging packages, and it works pretty well. It would also allow for users to have an "infinite" count of logfiles (if that is what they want). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 16:44 Message: Logged In: YES user_id=308438 Originator: NO The problem with this is that on rollover, RotatingFileHandler renames old logs: rollover.log.3 -> rollover.log.4, rollover.log.2 -> rollover.log.3, rollover.log.1 -> rollover.log.2, rollover.log -> rollover.log.1, and a new rollover.log is opened. With an arbitrary number of old log files, this leads to arbitrary renaming time - which could cause long pauses due to logging, not a good idea. If you are using e.g. logrotate or newsyslog, or a custom program to do logfile rotation, you can use the new logging.handlers.WatchedFileHandler handler (meant for use on Unix/Linux only - on Windows, logfiles can't be renamed or moved while in use and so the requirement doesn't arise) which watches the logged-to file to see when it changes.
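The rename cascade described in this thread amounts to roughly the following — a simplified sketch of the rollover logic, not the actual RotatingFileHandler implementation:

```python
import os, tempfile

def do_rollover(base, backup_count):
    """Shift each base.i to base.(i+1), dropping the oldest backup,
    then move the live file to base.1 and start a fresh empty one."""
    oldest = "%s.%d" % (base, backup_count)
    if os.path.exists(oldest):
        os.remove(oldest)                   # capped: oldest backup is discarded
    for i in range(backup_count - 1, 0, -1):
        src = "%s.%d" % (base, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (base, i + 1))
    if os.path.exists(base):
        os.rename(base, base + ".1")
    open(base, "w").close()                 # new, empty live log

base = os.path.join(tempfile.mkdtemp(), "rollover.log")
with open(base, "w") as f:
    f.write("one")
do_rollover(base, 3)                        # "one" moves to rollover.log.1
with open(base, "w") as f:
    f.write("two")
do_rollover(base, 3)                        # "two" -> .1, "one" -> .2
```

The loop's cost grows with backup_count, which is exactly the "arbitrary renaming time" objection to an unbounded count.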
This has recently been checked into SVN trunk. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 From noreply at sourceforge.net Mon Jan 22 17:10:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 08:10:47 -0800 Subject: [ python-Bugs-1552726 ] Python polls unnecessarily every 0.1 second when interactive Message-ID: Bugs item #1552726, was opened at 2006-09-05 10:42 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Closed Resolution: Fixed Priority: 9 Private: No Submitted By: Richard Boulton (richardb) Assigned to: A.M. Kuchling (akuchling) Summary: Python polls unnecessarily every 0.1 second when interactive Initial Comment: When python is running an interactive session, and is idle, it calls "select" with a timeout of 0.1 seconds repeatedly. This is intended to allow PyOS_InputHook() to be called every 0.1 seconds, but happens even if PyOS_InputHook() isn't being used (ie, is NULL). To reproduce: - start a python session - attach to it using strace -p PID - observe that python repeatedly calls select() This isn't a significant problem, since it only affects idle interactive python sessions and uses only a tiny bit of CPU, but people are whinging about it (though some appear to be doing so tongue-in-cheek) and it would be nice to fix it. The attached patch (against Python-2.5c1) modifies the readline.c module so that the polling doesn't happen unless PyOS_InputHook is not NULL. ---------------------------------------------------------------------- >Comment By: A.M.
Kuchling (akuchling) Date: 2007-01-22 11:10 Message: Logged In: YES user_id=11375 Originator: NO Applied to 2.5.1 in rev. 53516. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:47 Message: Logged In: YES user_id=33168 Originator: NO I'm fine if this patch is applied. Since it was applied to trunk, it seems like it might as well go into 2.5.1 as well. I agree it's not that high priority, but don't see much reason to wait either. OTOH, I won't lose sleep if it's not applied, so do what you think is best. ---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 10:30 Message: Logged In: YES user_id=9565 I'm finding the function because it's defined in the compiled library - the header files aren't examined by configure when testing for this function. (this is because configure.in uses AC_CHECK_LIB to check for rl_callback_handler_install, which just tries to link the named function against the library). Presumably, rlconf.h is the configuration used when the readline library was compiled, so if READLINE_CALLBACKS is defined in it, I would expect the relevant functions to be present in the compiled library. In any case, this isn't desperately important, since you've managed to hack around the test anyway. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-08 09:12 Message: Logged In: YES user_id=11375 That's exactly my setup. I don't think there is a -dev package for readline 4. I do note that READLINE_CALLBACKS is defined in /usr/include/readline/rlconf.h, but Python's readline.c doesn't include this file, and none of the readline headers include it. So I don't know why you're finding the function! 
---------------------------------------------------------------------- Comment By: Richard Boulton (richardb) Date: 2006-09-08 05:34 Message: Logged In: YES user_id=9565 HAVE_READLINE_CALLBACK is defined by configure.in whenever the readline library on the platform supports the rl_callback_handler_install() function. I'm using Ubuntu Dapper, and have libreadline 4 and 5 installed (more precisely, 4.3-18 and 5.1-7build1), but only the -dev package for 5.1-7build1. "info readline" describes rl_callback_handler_install(), and configure.in finds it, so I'm surprised it wasn't found on akuchling's machine. I agree that the code looks buggy on platforms in which signals don't necessarily get delivered to the main thread, but looks no more buggy with the patch than without. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:38 Message: Logged In: YES user_id=11375 On looking at the readline code, I think this patch makes no difference to signals. The code in readline.c for the callbacks looks like this: has_input = 0; while (!has_input) { ... has_input = select.select(rl_input); } if (has_input > 0) {read character} elif (errno == EINTR) {check signals} So I think that, if a signal is delivered to a thread and select() in the main thread doesn't return EINTR, the old code is just as problematic as the code with this patch. The (while !has_input) loop doesn't check for signals at all as an exit condition. I'm not sure what to do at this point. I think the new code is no worse than the old code with regard to signals. Maybe this loop is buggy w.r.t. to signals, but I don't know how to test that. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:17 Message: Logged In: YES user_id=11375 HAVE_READLINE_CALLBACK was not defined with readline 5.1 on Ubuntu Dapper, until I did the configure/CFLAG trick. 
I didn't think of a possible interaction with signals, and will re-open the bug while trying to work up a test case. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-09-07 10:12 Message: Logged In: YES user_id=6656 I'd be cautious about applying this to 2.5: we could end up with the same problem currently entertaining python-dev, i.e. a signal gets delivered to a non- main thread but the main thread is sitting in a select with no timeout so any python signal handler doesn't run until the user hits a key. HAVE_READLINE_CALLBACK is defined when readline is 2.1 *or newer* I think... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 10:02 Message: Logged In: YES user_id=11375 Recent versions of readline can still support callbacks if READLINE_CALLBACK is defined, so I could test the patch by running 'CFLAGS=-DREADLINE_CALLBACK' and re-running configure. Applied as rev. 51815 to the trunk, so the fix will be in Python 2.6. The 2.5 release manager needs to decide if it should be applied to the 2.5 branch. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-09-07 09:24 Message: Logged In: YES user_id=11375 Original report: http://perkypants.org/blog/2006/09/02/rfte-python This is tied to the version of readline being used; the select code is only used if HAVE_RL_CALLBACK is defined, and a comment in Python's configure.in claims it's only defined with readline 2.1. Current versions of readline are 4.3 and 5.1; are people still using such an ancient version of readline? 
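The wait loop being patched above can be illustrated with a short Python sketch. This is a hypothetical rendering of the readline.c logic, not the actual C code, and `wait_for_input` is an invented name: with an input hook installed, select() needs a short timeout so the hook can run periodically, which is the 0.1-second polling visible under strace; with no hook, select() can simply block until the descriptor is readable, which is what the patch makes the interpreter do.

```python
import os
import select

def wait_for_input(fd, input_hook=None, poll_interval=0.1):
    # Hypothetical Python rendering of the readline.c wait loop.
    # With a hook: wake every poll_interval seconds so the hook runs
    # (the 0.1 s polling reported in this bug).
    # Without a hook: block in select() until fd is readable.
    while True:
        if input_hook is not None:
            ready, _, _ = select.select([fd], [], [], poll_interval)
            input_hook()
        else:
            ready, _, _ = select.select([fd], [], [])
        if ready:
            return os.read(fd, 1)

r, w = os.pipe()
os.write(w, b"x")
print(wait_for_input(r))  # b'x'
```

Blocking when no hook is installed removes the periodic wakeups without changing behaviour for embedders that do set PyOS_InputHook.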
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1552726&group_id=5470 From noreply at sourceforge.net Mon Jan 22 17:34:36 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 08:34:36 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 05:34 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from tty. The test program: import sys for line in sys.stdin: print line, print "eof" A sample session: liw at esme$ python foo.py foo <--- I pressed Enter and then Ctrl-D foo <--- then this appeared, but not more eof <--- this only came when I pressed Ctrl-D a second time liw at esme$ Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. 
Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-13 23:20 Message: Logged In: YES user_id=479790 Originator: NO Same thing occurs on Windows. Even worse, if the line does not end with CR, Ctrl-Z (EOF in Windows, equivalent to Ctrl-D) has to be pressed 3 times: D:\Temp>python foo.py foo <--- I pressed Enter ^Z <--- I pressed Ctrl-Z and then Enter again foo <--- this appeared ^Z <--- I pressed Ctrl-Z and then Enter again D:\Temp>python foo.py foo^Z <--- I pressed Ctrl-Z and then Enter ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again foo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Mon Jan 22 17:34:49 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 08:34:49 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 05:34 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from tty. The test program: import sys for line in sys.stdin: print line, print "eof" A sample session: liw at esme$ python foo.py foo <--- I pressed Enter and then Ctrl-D foo <--- then this appeared, but not more eof <--- this only came when I pressed Ctrl-D a second time liw at esme$ Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. 
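The distinction the report turns on is between iterating over the file object (which in Python 2 goes through a read-ahead buffer in fileobject.c) and calling readline() in a loop, which does not. A minimal sketch of the direct-readline loop follows; the helper name is invented, and it is demonstrated on an in-memory stream since tty buffering cannot be reproduced here.

```python
import io

def read_all_lines(stream):
    # Call readline() directly instead of iterating over the stream;
    # each call goes straight to the underlying read with no
    # read-ahead buffering, so EOF ('' from readline) is seen at once.
    lines = []
    line = stream.readline()
    while line:
        lines.append(line)
        line = stream.readline()
    return lines

print(read_all_lines(io.StringIO("line1\nline2\n")))
# ['line1\n', 'line2\n']
```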
---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-13 23:20 Message: Logged In: YES user_id=479790 Originator: NO Same thing occurs on Windows. Even worse, if the line does not end with CR, Ctrl-Z (EOF in Windows, equivalent to Ctrl-D) has to be pressed 3 times: D:\Temp>python foo.py foo <--- I pressed Enter ^Z <--- I pressed Ctrl-Z and then Enter again foo <--- this appeared ^Z <--- I pressed Ctrl-Z and then Enter again D:\Temp>python foo.py foo^Z <--- I pressed Ctrl-Z and then Enter ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again foo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Mon Jan 22 18:26:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 09:26:47 -0800 Subject: [ python-Feature Requests-1567331 ] logging.RotatingFileHandler has no "infinite" backupCount Message-ID: Feature Requests item #1567331, was opened at 2006-09-28 14:36 Message generated for change (Comment added) made by josiahcarlson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Closed Resolution: Fixed Priority: 5 Private: No Submitted By: Skip Montanaro (montanaro) Assigned to: Vinay Sajip (vsajip) Summary: logging.RotatingFileHandler has no "infinite" backupCount Initial Comment: It seems to me that logging.RotatingFileHandler should have a way to spell "never delete old log files". 
This is useful in situations where you want an external process (manual or automatic) to make decisions about deleting log files. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-22 09:26 Message: Logged In: YES user_id=341410 Originator: NO There are at least two ways to "solve" the "problem" regarding "what do we name the log after it is full". Here are just a few: 1) The 'being written to' log is X.log, the most recent 'finished' log is X.log., it uses the reverse renaming semantic to what is already available. 2) The 'being written to' log is X.log, the most recent 'finished' log is the log just before a 'missing' log. Say you have .log, .log.1, .log.3; .log.1 is the most recent 'finished' log, and when .log is full, you delete .log.3, rename .log to .log.2, and start writing to a new .log ( (mod x) + 1 method ). Semantic #1 isn't reasonable when you have a large number of log files (that isn't infinite), just like the current semantic isn't reasonable when you have a large number of log files (even infinite), but #2 is reasonable (in terms of filesystem manipulations) when you have any number of log files. It is unambiguous to the computer, and can be made unambiguous to the user with a 'get log filenames' function that returns the chronological listing of log files (everything after the 'missing' file comes first, then the stuff before the 'missing' file, then the 'current' log). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-22 08:09 Message: Logged In: YES user_id=308438 Originator: NO Josiah - OK...suppose I use your semantics: Create log ... at rollover, log -> log.1, create log anew ... at rollover, log -> log.2, create log anew ... at rollover, log -> log.3, and the user has set a backup count of 3 so I can't do log -> log.4 - then I still need to rename files, it seems to me.
If I don't, and say reuse log.1, then the user gets an unintuitive ordering where log.1 is newer than log.3 sometimes, but not at other times - so your approach would *only* be beneficial where the backup count was infinite. For such scenarios, I think it's better to either use e.g. logrotate and WatchedFileHandler, or create a new class based on RotatingFileHandler to do what you want. Providing support for "infinite" log files is not a common enough use case, IMO, to justify support in the core package. ---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2007-01-20 10:39 Message: Logged In: YES user_id=341410 Originator: NO What about an optional different semantic for log renaming? Rather than log -> log.1, log -> log.+1, so if you have log, log.1, log.2; log -> log.3 and log gets created anew. I've used a similar semantic in other logging packages, and it works pretty well. It would also allow for users to have an "infinite" count of logfiles (if that is what they want). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-15 08:44 Message: Logged In: YES user_id=308438 Originator: NO The problem with this is that on rollover, RotatingFileHandler renames old logs: rollover.log.3 -> rollover.log.4, rollover.log.2 -> rollover.log.3, rollover.log.1 -> rollover.log.2, rollover.log -> rollover.log.1, and a new rollover.log is opened. With an arbitrary number of old log files, this leads to arbitrary renaming time - which could cause long pauses due to logging, not a good idea. If you are using e.g. logrotate or newsyslog, or a custom program to do logfile rotation, you can use the new logging.handlers.WatchedFileHandler handler (meant for use on Unix/Linux only - on Windows, logfiles can't be renamed or moved while in use and so the requirement doesn't arise) which watches the logged-to file to see when it changes.
This has recently been checked into SVN trunk. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1567331&group_id=5470 From noreply at sourceforge.net Mon Jan 22 18:37:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 09:37:15 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 05:34 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from tty. The test program: import sys for line in sys.stdin: print line, print "eof" A sample session: liw at esme$ python foo.py foo <--- I pressed Enter and then Ctrl-D foo <--- then this appeared, but not more eof <--- this only came when I pressed Ctrl-D a second time liw at esme$ Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 12:37 Message: Logged In: YES user_id=984087 Originator: NO Sorry for my duplicate comment. It was a mistake. On closer examination, the OP's description does seem to indicate some issue. 
Please look at (attached) stdin_noiter.py which uses readline() directly and it does not have the problem described here. It properly detects EOF on first CTRL-D. This points to some problem with the iterator function fileobject.c:file_iternext(). I think that the first CTRL-D might be getting lost somewhere in the read ahead code (which only comes into picture with iterator). ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-13 23:20 Message: Logged In: YES user_id=479790 Originator: NO Same thing occurs on Windows. 
Even worse, if the line does not end with CR, Ctrl-Z (EOF in Windows, equivalent to Ctrl-D) has to be pressed 3 times: D:\Temp>python foo.py foo <--- I pressed Enter ^Z <--- I pressed Ctrl-Z and then Enter again foo <--- this appeared ^Z <--- I pressed Ctrl-Z and then Enter again D:\Temp>python foo.py foo^Z <--- I pressed Ctrl-Z and then Enter ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again foo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Mon Jan 22 18:45:05 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 09:45:05 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 05:34 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from tty. 
The test program: import sys for line in sys.stdin: print line, print "eof" A sample session: liw at esme$ python foo.py foo <--- I pressed Enter and then Ctrl-D foo <--- then this appeared, but not more eof <--- this only came when I pressed Ctrl-D a second time liw at esme$ Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 12:45 Message: Logged In: YES user_id=984087 Originator: NO Ok. This may sound stupid but I couldn't find a way to attach a file to this bug report. So I am copying the code here: ************ import sys line = sys.stdin.readline() while (line): print line, line = sys.stdin.readline() print "eof" ************* ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 12:37 Message: Logged In: YES user_id=984087 Originator: NO Sorry for my duplicate comment. It was a mistake. On closer examination, the OP's description does seem to indicate some issue. Please look at (attached) stdin_noiter.py which uses readline() directly and it does not have the problem described here. It properly detects EOF on first CTRL-D. This points to some problem with the iterator function fileobject.c:file_iternext(). I think that the first CTRL-D might be getting lost somewhere in the read ahead code (which only comes into picture with iterator). ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. 
That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug. $ cat testfile line1 line2 $ python foo.py < testfile This command behaves as expected. Only when the input is from tty, the above described behaviour happens. That could be because of the terminal settings where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-13 23:20 Message: Logged In: YES user_id=479790 Originator: NO Same thing occurs on Windows. Even worse, if the line does not end with CR, Ctrl-Z (EOF in Windows, equivalent to Ctrl-D) has to be pressed 3 times: D:\Temp>python foo.py foo <--- I pressed Enter ^Z <--- I pressed Ctrl-Z and then Enter again foo <--- this appeared ^Z <--- I pressed Ctrl-Z and then Enter again D:\Temp>python foo.py foo^Z <--- I pressed Ctrl-Z and then Enter ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again ^Z <--- cursor stays here; I pressed Ctrl-Z and then Enter again foo ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:09:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:09:21 -0800 Subject: [ python-Bugs-1586414 ] tarfile.extract() may cause file fragmentation on Windows XP Message-ID: Bugs item #1586414, was opened at 2006-10-28 23:22 Message generated for change (Comment added) made by gustaebel You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586414&group_id=5470 Please note that this 
message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Rejected Priority: 5 Private: No Submitted By: Enoch Julias (enochjul) Assigned to: Lars Gustäbel (gustaebel) Summary: tarfile.extract() may cause file fragmentation on Windows XP Initial Comment: Using tarfile.extract() to extract all the files from a large tar archive with lots of files tends to cause file fragmentation in Windows. Apparently NTFS cluster allocation interacts badly with such operations if Windows is not aware of the size of each file. The solution is to use a combination of the Win32 APIs SetFilePointer() and SetEndOfFile() before writing to the target file. This helps Windows choose a contiguous free space for the file. I tried it on the 2.6 trunk by calling file.truncate() (which seems to implement the appropriate calls on Windows) to set the file size before writing to a file. It helps to avoid fragmentation for the extracted files on my Windows XP x64 system. Can this be added to tarfile to improve its performance on Windows? ---------------------------------------------------------------------- >Comment By: Lars Gustäbel (gustaebel) Date: 2007-01-22 20:09 Message: Logged In: YES user_id=642936 Originator: NO Closed due to lack of interest, see discussion at #1587674. ---------------------------------------------------------------------- Comment By: Enoch Julias (enochjul) Date: 2006-10-31 06:07 Message: Logged In: YES user_id=6071 I submitted patch #1587674 for this, though I am not sure if it is a good idea to use truncate() for such a purpose. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-29 09:55 Message: Logged In: YES user_id=849994 Can you try to come up with a patch?
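The preallocation idea in the report above can be sketched as follows. This is a simplified illustration, not the code from patch #1587674, and the function name is invented: announcing the final size with truncate() before writing gives NTFS a chance to allocate a contiguous extent, while on POSIX filesystems the same call merely produces a zero-filled (possibly sparse) file that the subsequent writes fill in.

```python
import os
import tempfile

def write_preallocated(path, size, chunks):
    # Reserve/announce the final file size up front, then write the data.
    with open(path, "wb") as f:
        f.truncate(size)  # the report says this maps to the Win32 size-setting calls
        f.seek(0)
        for chunk in chunks:
            f.write(chunk)

path = os.path.join(tempfile.mkdtemp(), "member.bin")
write_preallocated(path, 6, [b"abc", b"def"])
print(os.path.getsize(path))  # 6
```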
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1586414&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:12:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:12:48 -0800 Subject: [ python-Bugs-1446119 ] subprocess interpreted double quotation wrong on windows Message-ID: Bugs item #1446119, was opened at 2006-03-09 05:26 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1446119&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 >Status: Closed >Resolution: Out of Date Priority: 5 Private: No Submitted By: simon (simonhang) Assigned to: Peter Åstrand (astrand) Summary: subprocess interpreted double quotation wrong on windows Initial Comment: If we run below python command print subprocess.Popen([r'c:\test.bat', r'test"string:']).pid Actually c:\test.bat test\"string\" is executed. Module subprocess doesn't interpret double quotation mark right. Back slash shouldn't be added. I believe problem is in function subprocess.list2cmdline. ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:12 Message: Logged In: YES user_id=344921 Originator: NO No response from reporter, we conform to the MS documentation as far as I can tell. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2006-07-10 22:12 Message: Logged In: YES user_id=344921 As far as I can tell, there's nothing wrong with subprocess.list2cmdline. Take a look at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccelng/htm/progs_12.asp.
There, you will find: ab"c which corresponds to: "ab\"c" In other words: a backslash should be added when converting from an argument to a string. Or do you interpret the MS web page differently? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1446119&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:27:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:27:51 -0800 Subject: [ python-Bugs-1358527 ] subprocess.py fails on Windows when there is no console Message-ID: Bugs item #1358527, was opened at 2005-11-16 23:59 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1358527&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Martin Blais (blais) Assigned to: Peter Åstrand (astrand) Summary: subprocess.py fails on Windows when there is no console Initial Comment: Under Windows XP, using Python 2.4.2, calling a subprocess from "subprocess.py" from a script that does not have a console, with stdin=None (the default) fails. Since there is a check for stdin=stdout=stderr=None that just returns, to exhibit this problem you need to at least set stdout=PIPE (just to get it to run past the check for that special case). The problem is that in _get_handles(), l581-582: if stdin == None: p2cread = GetStdHandle(STD_INPUT_HANDLE) GetStdHandle returns None if there is no console. This is a rather nasty bugger of a bug, since I suppose it breaks most GUI applications that start without the console (i.e. most) and that eventually invoke subprocesses and capture their output.
I'm surprised to find this. To reproduce the problem, do this: 1. save the attached script to C:/temp/bug.py and C:/temp/bug.pyw 2. create two shortcuts on your desktop to invoke those scripts 3. open a shell and tail C:/temp/out.log For bug.py, the log file should display: 2005-11-16 17:38:11,661 INFO 0 For bug.pyw (no console), the log file should show the following exception: 2005-11-16 17:38:13,084 ERROR Traceback (most recent call last): File "C:\Temp\bug.pyw", line 20, in ? out = call(['C:/Cygwin/bin/ls.exe'], stdout=PIPE) #, stderr=PIPE) File "C:\Python24\lib\subprocess.py", line 412, in call return Popen(*args, **kwargs).wait() File "C:\Python24\lib\subprocess.py", line 533, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 593, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 634, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required This is the bug. Note: in this test program, I'm invoking Cygwin's ls.exe. Feel free to change it ---------------------------------------------------------------------- >Comment By: Peter ?strand (astrand) Date: 2007-01-22 20:27 Message: Logged In: YES user_id=344921 Originator: NO Duplicate of 1124861. ---------------------------------------------------------------------- Comment By: Martin Blais (blais) Date: 2005-11-17 14:41 Message: Logged In: YES user_id=10996 Here is an example of a workaround: p = Popen(ps2pdf, stdin=PIPE, stdout=PIPE, stderr=PIPE, cwd=tempfile.gettempdir()) p.stdin.close() # FIXME: we need to specify and close stdin explicitly # because of a bug I found and reported in subprocess.py # when the program is launched without a console, see SF bug # tracker for the Python project for details. When the bug # gets fixed we should be able to remove this. Basically I just specify stdin=PIPE and close it by hand. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1358527&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:28:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:28:26 -0800 Subject: [ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI Message-ID: Bugs item #1124861, was opened at 2005-02-17 17:23 Message generated for change (Settings changed) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: davids (davidschein) Assigned to: Nobody/Anonymous (nobody) >Summary: subprocess fails on GetStdHandle in interactive GUI Initial Comment: Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error. (All appears fine on Linux, the standard Python command-line, and ipython.) For example: >>> import subprocess >>> p = subprocess.Popen("dir", stdout=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- p = subprocess.Popen("dir", stdout=subprocess.PIPE) File "C:\Python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The error originates in the mswindows implementation of _get_handles. 
You need to set one of stdin, stdout, or stderr because the first line in the method is:

if stdin == None and stdout == None and stderr == None:
    return (None, None, None, None, None, None)

I added "if not handle: return GetCurrentProcess()" to _make_inheritable() as below and it worked. Of course, I really do not know what is going on, so I am letting go now...

def _make_inheritable(self, handle):
    """Return a duplicate of handle, which is inheritable"""
    if not handle: return GetCurrentProcess()
    return DuplicateHandle(GetCurrentProcess(), handle,
                           GetCurrentProcess(), 0, 1,
                           DUPLICATE_SAME_ACCESS)

---------------------------------------------------------------------- Comment By: craig (codecraig) Date: 2006-10-13 17:54 Message: Logged In: YES user_id=1258995 On Windows, this seems to work:

from subprocess import *
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)

...in some cases (depending on what command you are executing, a command prompt window may appear). To not show a window, use this:

import win32con
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=win32con.CREATE_NO_WINDOW)

...google for Microsoft Process Creation Flags for more info ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:53 Message: Logged In: YES user_id=945502 This issue was discussed on comp.lang.python[1] and Roger Upole suggested: """ Basically, gui apps like VS don't have a console, so GetStdHandle returns 0. _subprocess.GetStdHandle returns None if the handle is 0, which gives the original error. Pywin32 just returns the 0, so the process gets one step further but still hits the above error. 
Subprocess.py should probably check the result of GetStdHandle for None (or 0) and throw a readable error that says something like "No standard handle available, you must specify one" """ [1]http://mail.python.org/pipermail/python-list/2005-September/300744.html ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-08-13 22:37 Message: Logged In: YES user_id=945502 I ran into a similar problem in Ellogon (www.ellogon.org) which interfaces with Python through tclpython (http://jfontain.free.fr/tclpython.htm). My current workaround is to always set all of stdin, stdout, and stderr to subprocess.PIPE. I never use the stderr pipe, but at least this keeps the broken GetStdHandle calls from happening. Looking at the code, I kinda think the fix should be:: if handle is None: return handle return DuplicateHandle(GetCurrentProcess(), ... where if handle is None, it stays None. But I'm also probably in over my head here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:29:11 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:29:11 -0800 Subject: [ python-Bugs-1603907 ] subprocess: error redirecting i/o from non-console process Message-ID: Bugs item #1603907, was opened at 2006-11-27 18:20 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: Python 2.5 >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Oren Tirosh (orenti) Assigned to: Peter Åstrand (astrand) Summary: subprocess: error redirecting i/o from non-console process Initial Comment: In IDLE, PythonWin or other non-console interactive Python under Windows: >>> from subprocess import * >>> Popen('cmd', stdout=PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- Popen('', stdout=PIPE) File "C:\python24\lib\subprocess.py", line 533, in __init__ (p2cread, p2cwrite, File "C:\python24\lib\subprocess.py", line 593, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\python24\lib\subprocess.py", line 634, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The same command in a console window is successful. Why it happens: subprocess assumes that GetStdHandle always succeeds but when there is no console it returns None. DuplicateHandle then complains about getting a non-integer. This problem does not happen when redirecting all three standard handles. Solution: Replace None with -1 (INVALID_HANDLE_VALUE) in _make_inheritable. Patch attached. ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:29 Message: Logged In: YES user_id=344921 Originator: NO Duplicate of 1124861. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-21 16:31 Message: Logged In: YES user_id=344921 Originator: NO Since the suggested patches are not ready for commit, I'm moving this issue to "bugs" instead. ---------------------------------------------------------------------- Comment By: Oren Tirosh (orenti) Date: 2007-01-07 19:13 Message: Logged In: YES user_id=562624 Originator: YES Oops. The new patch does not solve it in all cases in the win32api version, either... 
---------------------------------------------------------------------- Comment By: Oren Tirosh (orenti) Date: 2007-01-07 19:09 Message: Logged In: YES user_id=562624 Originator: YES If you duplicate INVALID_HANDLE_VALUE you get a new valid handle to nothing :-) I guess the code really should not rely on this undocumented behavior. The reason I didn't return INVALID_HANDLE_VALUE directly is because DuplicateHandle returns a _subprocess_handle object, not an int. It's expected to have a .Close() method elsewhere in the code. Because of a subtle difference in the behavior of the _subprocess and win32api implementations of GetStdHandle in this case, solving this issue gets quite messy! File Added: subprocess-noconsole2.patch ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-07 11:58 Message: Logged In: YES user_id=344921 Originator: NO This patch looks very interesting. However, it feels a little bit strange to call DuplicateHandle with a handle of -1. Is this really allowed? What will DuplicateHandle return in this case? INVALID_HANDLE_VALUE? In that case, isn't it better to return INVALID_HANDLE_VALUE directly? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1603907&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:30:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:30:07 -0800 Subject: [ python-Bugs-1126208 ] subprocess.py Errors with IDLE Message-ID: Bugs item #1126208, was opened at 2005-02-17 21:33 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1126208&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Kurt B. Kaiser (kbk) Assigned to: Peter ?strand (astrand) Summary: subprocess.py Errors with IDLE Initial Comment: ===================== From: David S. alumni.tufts.edu> Subject: subprocess problem on Windows in IDLE and PythonWin Newsgroups: gmane.comp.python.general Date: Wed, 16 Feb 2005 02:05:24 +0000 Python 2.4 on Windows XP In the python command-line the following works fine: >>> from subprocess import * >>> p = Popen('dir', stdout=PIPE) >From within IDLE or PythonWin I get the following exception: Traceback (most recent call last): File "", line 1, in -toplevel- p = Popen('dir', stdout=PIPE) File "c:\python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "c:\python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "c:\python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required Note it works fine on Linux also. I tested it with >>> p = Popen('ls', stdout=PIPE) ... and had no trouble. =========== I (KBK) can duplicate this on W2K using 2.4. 
If I run IDLE with the -n switch (no subprocess) the error doesn't occur. Unfortunately, I can't debug it because I don't have the necessary tools on Windows. It appears that the problem is in _subprocess.c:sp_DuplicateHandle(), likely that PyArg_ParseTuple() is OK but the failure occurs in the call to DuplicateHandle(). All the args to sp_DuplicateHandle() seem to be the right type. DUPLICATE_SAME_ACCESS is an integer, value 2 To find out what's going on, it would seem necessary to attach a windows debugger to IDLE's subprocess (not the IDLE GUI). Let me know if I can help. ---------------------------------------------------------------------- >Comment By: Peter ?strand (astrand) Date: 2007-01-22 20:30 Message: Logged In: YES user_id=344921 Originator: NO Duplicate of 1124861. ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:51 Message: Logged In: YES user_id=945502 I believe this is related to 1124861 (if it's not a duplicate of it) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1126208&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:32:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:32:25 -0800 Subject: [ python-Bugs-1543469 ] test_subprocess fails on cygwin Message-ID: Bugs item #1543469, was opened at 2006-08-20 15:22 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1543469&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Installation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Miki Tebeka (tebeka) Assigned to: Nobody/Anonymous (nobody) Summary: test_subprocess fails on cygwin Initial Comment: This is RC1. test_subprocess fails. IMO due to the fact that there is a directory called "Python" in the python source directory. The fix should be that sys.executable will return the name with the '.exe' suffix on cygwin. Attached log of running the test. ---------------------------------------------------------------------- >Comment By: Peter ?strand (astrand) Date: 2007-01-22 20:32 Message: Logged In: YES user_id=344921 Originator: NO Since this is not a subprocess bug per se, I'm letting someone else take care of this one. ---------------------------------------------------------------------- Comment By: Nobody/Anonymous (nobody) Date: 2006-08-21 10:07 Message: Logged In: NO Attached a patch, test_subprocess now passes. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-08-21 04:15 Message: Logged In: YES user_id=33168 Cygwin recently changed their behaviour. I have an outstanding hack to fix this. Patches would help get things fixed up. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1543469&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:33:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:33:50 -0800 Subject: [ python-Bugs-1238747 ] subprocess.Popen fails inside a Windows service Message-ID: Bugs item #1238747, was opened at 2005-07-15 10:31 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1238747&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Platform-specific >Status: Closed >Resolution: Duplicate Priority: 5 Private: No Submitted By: Adam Kerrison (adamk550) Assigned to: Peter Åstrand (astrand) Summary: subprocess.Popen fails inside a Windows service Initial Comment: If you use subprocess.Popen() from within a Windows service and you try to redirect stdout or stderr, the call will fail with a TypeError. The issue appears to be that if you attempt to redirect stdout and/or stderr, the module also needs to set up stdin. Since you haven't specified what to do with stdin, the code simply duplicates the process's stdin handle using GetStdHandle(STD_INPUT_HANDLE). However, a Windows service doesn't have stdin etc so the returned handle is None. This handle is then passed to DuplicateHandle() which fails with the TypeError. A workaround is to explicitly PIPE stdin but I have found at least one Windows program (the RCMD.EXE utility) that fails if its stdin is a pipe! (RCMD says "Internal Error 109" ...) The only other workaround is to explicitly open the NUL device and use that for stdin. 
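[Editor's note: the NUL-device workaround mentioned above can be sketched portably with os.devnull, which names NUL on Windows and /dev/null elsewhere. The child command below is a placeholder for RCMD.EXE or any program that misbehaves when its stdin is a pipe.]

```python
import os
import subprocess
import sys

# Sketch of the "open the NUL device" workaround: hand the child an
# explicit stdin connected to the null device instead of a pipe.
with open(os.devnull, "rb") as devnull:
    p = subprocess.Popen([sys.executable, "-c", "print('ran')"],
                         stdin=devnull,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, _ = p.communicate()
print(out.decode().strip())
```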
---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:33 Message: Logged In: YES user_id=344921 Originator: NO Duplicate of 1124861. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1238747&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:34:39 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:34:39 -0800 Subject: [ python-Bugs-1637167 ] mailbox.py uses old email names Message-ID: Bugs item #1637167, was opened at 2007-01-16 17:19 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Russell Owen (reowen) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: mailbox.py uses old email names Initial Comment: mailbox.py uses old (and presumably deprecated) names for stuff in the email package. This can confuse application packagers such as py2app. I believe the complete list of desirable changes is:

email.Generator -> email.generator
email.Message -> email.message
email.message_from_string -> email.parser.message_from_string
email.message_from_file -> email.parser.message_from_file

I submitted patches for urllib, urllib2 and smtplib but wasn't sure enough of mailbox to do that. Those four modules are the only instances I found that needed changing at the main level of the library. However, I did not do a recursive search. There may be files inside packages that could also use cleanup. 
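[Editor's note: a quick illustration of the lowercase email module names listed above; this is a sketch, not part of the proposed patch, and it also uses the package-level message_from_string convenience function, which remains available alongside the renamed modules.]

```python
# The lowercase module names from the reorganized email package are
# importable directly, and parsing still works through the top-level
# convenience function.
import email
import email.generator
import email.message

msg = email.message_from_string("Subject: test\n\nbody\n")
print(type(msg).__module__, msg["Subject"])
```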
---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 14:34 Message: Logged In: YES user_id=11375 Originator: NO Barry, are the suggested name changes for the email module correct? If yes, please assign this bug back to me and I'll make the changes to the mailbox module. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 From noreply at sourceforge.net Mon Jan 22 20:36:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 11:36:48 -0800 Subject: [ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI Message-ID: Bugs item #1124861, was opened at 2005-02-17 17:23 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 Status: Open Resolution: None >Priority: 7 Private: No Submitted By: davids (davidschein) Assigned to: Nobody/Anonymous (nobody) Summary: subprocess fails on GetStdHandle in interactive GUI Initial Comment: Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error. (All appears fine on Linux, the standard Python command-line, and ipython.) 
For example: >>> import subprocess >>> p = subprocess.Popen("dir", stdout=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- p = subprocess.Popen("dir", stdout=subprocess.PIPE) File "C:\Python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The error originates in the mswindows implementation of _get_handles. You need to set one of stdin, stdout, or stderr because the first line in the method is:

if stdin == None and stdout == None and stderr == None:
    return (None, None, None, None, None, None)

I added "if not handle: return GetCurrentProcess()" to _make_inheritable() as below and it worked. Of course, I really do not know what is going on, so I am letting go now...

def _make_inheritable(self, handle):
    """Return a duplicate of handle, which is inheritable"""
    if not handle: return GetCurrentProcess()
    return DuplicateHandle(GetCurrentProcess(), handle,
                           GetCurrentProcess(), 0, 1,
                           DUPLICATE_SAME_ACCESS)

---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:36 Message: Logged In: YES user_id=344921 Originator: NO The following bugs have been marked as duplicate of this bug: 1358527 1603907 1126208 1238747 ---------------------------------------------------------------------- Comment By: craig (codecraig) Date: 2006-10-13 17:54 Message: Logged In: YES user_id=1258995 On Windows, this seems to work:

from subprocess import *
p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)

...in some cases (depending on what command you are executing, a command prompt window may appear). To not show a window, use this... 
import win32con p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=win32con.CREATE_NO_WINDOW) ...google for Microsoft Process Creation Flags for more info ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:53 Message: Logged In: YES user_id=945502 This issue was discussed on comp.lang.python[1] and Roger Upole suggested: """ Basically, gui apps like VS don't have a console, so GetStdHandle returns 0. _subprocess.GetStdHandle returns None if the handle is 0, which gives the original error. Pywin32 just returns the 0, so the process gets one step further but still hits the above error. Subprocess.py should probably check the result of GetStdHandle for None (or 0) and throw a readable error that says something like "No standard handle available, you must specify one" """ [1]http://mail.python.org/pipermail/python-list/2005-September/300744.html ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-08-13 22:37 Message: Logged In: YES user_id=945502 I ran into a similar problem in Ellogon (www.ellogon.org) which interfaces with Python through tclpython (http://jfontain.free.fr/tclpython.htm). My current workaround is to always set all of stdin, stdout, and stderr to subprocess.PIPE. I never use the stderr pipe, but at least this keeps the broken GetStdHandle calls from happening. Looking at the code, I kinda think the fix should be:: if handle is None: return handle return DuplicateHandle(GetCurrentProcess(), ... where if handle is None, it stays None. But I'm also probably in over my head here. 
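[Editor's note: the creationflags workaround from codecraig's comment above can be sketched portably. The value 0x08000000 is the Win32 CREATE_NO_WINDOW process-creation flag (win32con.CREATE_NO_WINDOW; later Python versions also expose it as subprocess.CREATE_NO_WINDOW). Passing a nonzero creationflags is only valid on Windows, so the flag is omitted elsewhere.]

```python
import subprocess
import sys

# CREATE_NO_WINDOW suppresses the console window that may otherwise
# flash up when spawning a console program from a GUI process.
CREATE_NO_WINDOW = 0x08000000  # Win32 process creation flag
flags = CREATE_NO_WINDOW if sys.platform == "win32" else 0
p = subprocess.Popen([sys.executable, "-c", "print('done')"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, creationflags=flags)
out, _ = p.communicate()
print(out.decode().strip())
```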
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 From noreply at sourceforge.net Mon Jan 22 21:24:17 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 12:24:17 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 16:03 Message generated for change (Comment added) made by baikie You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: David Watson (baikie) Date: 2007-01-22 20:24 Message: Logged In: YES user_id=1504904 Originator: YES So what you propose to commit for 2.5 is basically mailbox-unified2 (your mailbox-unified-patch, minus the _toc clearing)? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 15:46 Message: Logged In: YES user_id=11375 Originator: NO This would be an API change, and therefore out-of-bounds for 2.5. I suggest giving up on this for 2.5.1 and only fixing it in 2.6. I'll add another warning to the docs, and maybe to the module as well. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-21 22:10 Message: Logged In: YES user_id=1504904 Originator: YES Hold on, I have a plan. If _toc is only regenerated on locking, or at the end of a flush(), then the only way self._pending can be set at that time is if the application has made modifications before calling lock(). If we make that an exception-raising offence, then we can assume that self._toc is a faithful representation of the last known contents of the file. That means we can preserve the existing message keys on a reread without any of that _user_toc nonsense. Diff attached, to apply on top of mailbox-unified2. It's probably had even less review and testing than the previous version, but it appears to pass all the regression tests and doesn't change any existing semantics. 
File Added: mailbox-update-toc-new.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-21 03:16 Message: Logged In: YES user_id=11375 Originator: NO I'm starting to lose track of all the variations on the bug. Maybe we should just add more warnings to the documentation about locking the mailbox when modifying it and not try to fix this at all. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-20 18:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-19 15:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s, re-reading, that means someone else has already modified the mailbox. 
Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 18:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted. I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-17 21:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 21:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-17 20:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 19:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 06:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 19:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. 
In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 18:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 19:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 18:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 17:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? 
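The "raise an exception (or trigger a warning?) if truncate is missing" idea floated above is not actual library code; a rough sketch of what such a guard could look like (the function name and warning text are invented for illustration):

```python
import warnings

def require_truncate(f, strict=False):
    # Illustrative guard, not mailbox.py's real code: complain when the
    # platform's file objects lack truncate(), since the rewrite-based
    # fallback path has different key-invalidation behaviour.
    if hasattr(f, 'truncate'):
        return True
    if strict:
        raise NotImplementedError("file.truncate() is not available")
    warnings.warn("file.truncate() unavailable; "
                  "flush() may invalidate message keys")
    return False
```

A caller could run this once when the mailbox is opened and decide whether to treat keys as stable across flush().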
---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 19:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 15:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? 
(A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 19:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 19:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 19:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. 
The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. 
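The length check just described can be sketched roughly as follows; the class and function names here are illustrative, not the actual code from length-checking.diff (where the check lives inside the flush() path):

```python
import os

class ExternalClashError(Exception):
    """Raised when another process appears to have modified the mailbox."""

def assert_length_unchanged(path, expected_length):
    # Sketch of the check described above: before rewriting a
    # single-file mailbox, verify that no other process has grown or
    # shrunk the file since the table of contents was last built.
    actual = os.path.getsize(path)
    if actual != expected_length:
        raise ExternalClashError("mailbox size changed from %d to %d"
                                 % (expected_length, actual))
```

In this sketch, the expected length would be recorded when the mailbox is read and updated whenever the library itself appends a message.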
I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 17:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 19:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. 
If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 14:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. 
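The "self._toc = None" idea discussed above is essentially a lazy cache that is dropped after each flush; a minimal standalone illustration (names invented, not mailbox.py's real internals):

```python
class LazyToc:
    # Minimal illustration of the idea above: drop the cached table of
    # contents after flush() so the next lookup re-reads the file.
    def __init__(self, scan_file):
        self._scan = scan_file   # callable that re-reads the mbox file
        self._toc = None

    def lookup(self, key):
        if self._toc is None:    # regenerated on demand
            self._toc = self._scan()
        return self._toc[key]

    def flush(self):
        # After rewriting the file, the cached offsets are stale.
        self._toc = None
```

The cost is exactly the inefficiency noted above: every flush() forces one full re-scan on the next lookup, but stale offsets can never be used.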
---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 13:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 21:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 20:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 20:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation. Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Mon Jan 22 21:34:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 12:34:25 -0800 Subject: [ python-Bugs-1637167 ] mailbox.py uses old email names Message-ID: Bugs item #1637167, was opened at 2007-01-16 22:19 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Russell Owen (reowen) Assigned to: Barry A. Warsaw (bwarsaw) Summary: mailbox.py uses old email names Initial Comment: mailbox.py uses old (and presumably deprecated) names for stuff in the email package. This can confuse application packagers such as py2app. I believe the complete list of desirable changes is: email.Generator -> email.generator email.Message -> email.message email.message_from_string -> email.parser.message_from_string email.message_from_file -> email.parser.message_from_file I submitted patches for urllib, urllib2 and smtplib but wasn't sure enough of mailbox to do that. Those four modules are the only instances I found that needed changing at the main level of the library. However, I did not do a recursive search. There may be files inside packages that could also use cleanup. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-22 20:34 Message: Logged In: YES user_id=849994 Originator: NO FWIW, the last two are incorrect. I already fixed that while doing the other three patches. 
There shouldn't be any occurrences of old-style package name imports left. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 19:34 Message: Logged In: YES user_id=11375 Originator: NO Barry, are the suggested name changes for the email module correct? If yes, please assign this bug back to me and I'll make the changes to the mailbox module. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637167&group_id=5470 From noreply at sourceforge.net Mon Jan 22 21:55:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 12:55:04 -0800 Subject: [ python-Bugs-1633678 ] mailbox.py _fromlinepattern regexp does not support positive Message-ID: Bugs item #1633678, was opened at 2007-01-11 20:14 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox.py _fromlinepattern regexp does not support positive Initial Comment: [forwarded from http://bugs.debian.org/254757] mailbox.py _fromlinepattern regexp does not support positive GMT offsets. the pattern didn't change in 2.5. bug submitter writes: archivemail incorrectly splits up messages in my mbox-format mail archives. I use Squirrelmail, which seems to create mbox lines that look like this: >From mangled at clarke.tinyplanet.ca Mon Jan 26 12:29:24 2004 -0400 The "-0400" appears to be throwing it off. 
If the first message of an mbox file has such a line on it, archivemail flat out stops, saying the file is not mbox. If the later messages in an mbox file are in this style, they are not counted, and archivemail thinks that the preceding message is just kind of long, and the decision to archive or not is broken. I have stumbled on this bug when I wanted to archive my mails on a Sarge system. And since my TZ is positive, the regexp did not work. I think the correct regexp for /usr/lib/python2.3/mailbox.py should be: _fromlinepattern = r"From \s*[^\s]+\s+\w\w\w\s+\w\w\w\s+\d?\d\s+" \ r"\d?\d:\d\d(:\d\d)?(\s+[^\s]+)?\s+\d\d\d\d\s*((\+|-)\d\d\d\d)?\s*$" This should handle positive and negative timezones in From lines. I have tested it successfully with an email beginning with this line: >From fred at athena.olympe.fr Mon May 31 13:24:50 2004 +0200 as well as one without TZ reference. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 15:55 Message: Logged In: YES user_id=11375 Originator: NO According to qmail's description of the mbox format (http://www.qmail.org/qmail-manual-html/man5/mbox.html), the 'from' lines shouldn't contain timezone info, but may contain additional information after the date. So I think a better change is just to add [^\s]*\s* to the end of the pattern. Note that the docs recommend the PortableUnixMailbox class as preferable for just this reason: there's too much variation in from lines to make the strict parsing useful. Change committed to trunk in rev. 53519, and to release25-maint in rev. 53521. Thanks for your report! 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633678&group_id=5470 From noreply at sourceforge.net Mon Jan 22 22:10:59 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 13:10:59 -0800 Subject: [ python-Bugs-1249573 ] rfc822 module, bug in parsedate_tz Message-ID: Bugs item #1249573, was opened at 2005-08-01 13:56 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1249573&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: nemesis (nemesis_xpn) Assigned to: Nobody/Anonymous (nobody) Summary: rfc822 module, bug in parsedate_tz Initial Comment: I found that the function parsedate_tz of the rfc822 module has a bug (or at least I think so). I found a usenet article (message-id: <2714d.q75200 at myg.winews.net>) that has this Date field: Date: Tue,26 Jul 2005 13:14:27 GMT +0200 It seems to be correct, but parsedate_tz is not able to decode it; it is confused by the absence of a space after the ",". I studied the parsedate_tz code and the problem is on its third line: ... if not data: return None data = data.split() ... After the split I have: ['Tue,26', 'Jul', '2005', '13:14:27', 'GMT', '+0200'] but "Tue," and "26" should be separated. Of course parsedate_tz correctly decodes the field if you add a space after the ",". A possible solution is to change line n°863 of rfc822.py (data=data.split()) with this one: data=data.replace(",",", ").split() it solves the problem and should not affect the normal behaviour, 
and looking at rfc2822 par. 3.3 it should be correct; the space after the comma is not mandatory: date-time = [ day-of-week "," ] date FWS time [CFWS] day-of-week = ([FWS] day-name) / obs-day-of-week day-name = "Mon" / "Tue" / "Wed" / "Thu" / "Fri" / "Sat" / "Sun" date = day month year ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-22 21:10 Message: Logged In: YES user_id=849994 Originator: NO Fixed in rev. 53522, 53523 (2.5). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1249573&group_id=5470 From noreply at sourceforge.net Mon Jan 22 22:12:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 13:12:13 -0800 Subject: [ python-Bugs-975330 ] Inconsistent newline handling in email module Message-ID: Bugs item #975330, was opened at 2004-06-18 12:50 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anders Hammarquist (iko) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: Inconsistent newline handling in email module Initial Comment: text/* parts of email messages must use \r\n as the newline separator; for unencoded messages, smtplib and friends take care of the translation from \n to \r\n in the SMTP processing. Parts which are unencoded (i.e. 7bit character sets) MUST use \n line endings, or smtplib will translate to \r\r\n. Parts that get encoded using quoted-printable can use either, because the qp-encoder assumes input data is text and reencodes with \n. 
However, parts which get encoded using base64 are NOT translated, and so must use \r\n line endings. This means you have to guess whether your text is going to get encoded or not (admittedly, usually not that hard), and translate the line endings appropriately before generating a Message instance. I think the fix would be for Charset.encode_body() to always force the encoder to text mode (i.e. binary=False), since it seems unlikely to have a Charset for something which is not text. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=975330&group_id=5470 From noreply at sourceforge.net Mon Jan 22 22:24:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 13:24:04 -0800 Subject: [ python-Bugs-1627316 ] an extra comma in condition command crashes pdb Message-ID: Bugs item #1627316, was opened at 2007-01-03 20:26 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627316&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: None Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Ilya Sandler (isandler) Assigned to: Nobody/Anonymous (nobody) Summary: an extra comma in condition command crashes pdb Initial Comment: if instead of condition one enters (note the extra comma): condition , pdb throws an exception and aborts execution of a program Relevant parts of stacktrace: File "/usr/lib/python2.4/bdb.py", line 48, in trace_dispatch return self.dispatch_line(frame) File "/usr/lib/python2.4/bdb.py", line 66, in dispatch_line self.user_line(frame) File "/usr/lib/python2.4/pdb.py", line 135, in user_line self.interaction(frame, None) File "/usr/lib/python2.4/pdb.py", line 158, in interaction self.cmdloop() File "/usr/lib/python2.4/cmd.py", line 142, in cmdloop stop = self.onecmd(line) File "/usr/lib/python2.4/cmd.py", line 219, in onecmd return func(arg) File "/usr/lib/python2.4/pdb.py", line 390, in do_condition bpnum = int(args[0].strip()) ValueError: invalid literal for int(): 2, Uncaught exception. Entering post mortem debugging Running 'cont' or 'step' will restart the program > /site/tools/pse/lib/python2.4/pdb.py(390)do_condition() -> bpnum = int(args[0].strip()) ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-22 21:24 Message: Logged In: YES user_id=849994 Originator: NO Fixed in rev. 53524, 53525 (2.5). 
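The traceback above comes from do_condition() passing a malformed breakpoint number straight to int(). A defensive parse along these lines (illustrative only, not the actual rev. 53524 change) reports the bad input instead of letting the exception abort the debugged program:

```python
def parse_bpnum(arg):
    # Illustrative defensive parse: trap the ValueError that the
    # original do_condition() allowed to escape into bdb's trace
    # machinery, which aborted the program being debugged.
    try:
        return int(arg.split()[0].strip())
    except (IndexError, ValueError):
        print("Breakpoint index %r is not a number" % arg)
        return None
```

With input like "2," (the reported case), this prints a message and returns None rather than raising.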
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1627316&group_id=5470 From noreply at sourceforge.net Tue Jan 23 01:27:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 16:27:51 -0800 Subject: [ python-Bugs-1642054 ] Python 2.5 gets curses.h warning on HPUX Message-ID: Bugs item #1642054, was opened at 2007-01-22 19:27 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roy Smith (roysmith) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 gets curses.h warning on HPUX Initial Comment: I downloaded http://www.python.org/ftp/python/2.5/Python-2.5.tgz and tried to build it on "HP-UX glade B.11.11 U 9000/800 unknown". When I ran "./configure", I got warnings that "curses.h: present but cannot be compiled". See attached log file. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 From noreply at sourceforge.net Tue Jan 23 04:20:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 19:20:06 -0800 Subject: [ python-Bugs-411881 ] Use of "except:" in logging module Message-ID: <200701230320.l0N3K6u0013775@sc8-sf-db2-new-b.sourceforge.net> Bugs item #411881, was opened at 2001-03-28 04:58 Message generated for change (Comment added) made by sf-robot You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=411881&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None >Status: Closed Resolution: Fixed Priority: 2 Private: No Submitted By: Itamar Shtull-Trauring (itamar) Assigned to: Vinay Sajip (vsajip) Summary: Use of "except:" in logging module Initial Comment: A large number of modules in the standard library use "except:" instead of specifying the exceptions to be caught. In some cases this may be correct, but I think in most cases this is not true and this may cause problems. 
Here's the list of modules, which I got by doing: grep "except:" *.py | cut -f 1 -d " " | sort | uniq Bastion.py CGIHTTPServer.py Cookie.py SocketServer.py anydbm.py asyncore.py bdb.py cgi.py chunk.py cmd.py code.py compileall.py doctest.py fileinput.py formatter.py getpass.py htmllib.py imaplib.py inspect.py locale.py locale.py mailcap.py mhlib.py mimetools.py mimify.py os.py pdb.py popen2.py posixfile.py pre.py pstats.py pty.py pyclbr.py pydoc.py repr.py rexec.py rfc822.py shelve.py shutil.py tempfile.py threading.py traceback.py types.py unittest.py urllib.py zipfile.py ---------------------------------------------------------------------- >Comment By: SourceForge Robot (sf-robot) Date: 2007-01-22 19:20 Message: Logged In: YES user_id=1312539 Originator: NO This Tracker item was closed automatically by the system. It was previously set to a Pending status, and the original submitter did not respond within 14 days (the time period specified by the administrator of this Tracker). ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2007-01-08 10:55 Message: Logged In: YES user_id=308438 Originator: NO The following changes have been checked into trunk: logging.handlers: bare except clause removed from SMTPHandler.emit. Now, only ImportError is trapped. logging.handlers: bare except clause removed from SocketHandler.createSocket. Now, only socket.error is trapped. logging: bare except clause removed from LogRecord.__init__. Now, only ValueError, TypeError and AttributeError are trapped. I'm marking this as Pending; please submit a change if you think these changes are insufficient. With the default setting of raiseExceptions, all exceptions caused by programmer error should be re-thrown by logging. 
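The convention these changes follow (trap narrowly, always let SystemExit and KeyboardInterrupt propagate, and re-raise everything else when raiseExceptions is set) can be sketched with an invented helper; this is not logging's actual code, just the pattern:

```python
def call_quietly(fn, raise_exceptions=True):
    # Made-up helper showing the logging convention discussed above:
    # never swallow SystemExit/KeyboardInterrupt, and only swallow
    # other errors when the user has turned raiseExceptions off.
    try:
        return fn()
    except (KeyboardInterrupt, SystemExit):
        raise
    except Exception:
        if raise_exceptions:
            raise
        return None
```

In logging itself the equivalent logic lives in each handler's emit()/handleError() pair rather than in a wrapper like this.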
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2006-12-22 04:52 Message: Logged In: YES user_id=44345 Originator: NO Vinay, In LogRecord.__init__ what exceptions do you expect to catch? Looking at the code for basename and splitext in os.py it's pretty hard to see how they would raise an exception unless they were passed something besides string or unicode objects. I think all you are doing here is masking programmer error. In StreamHandler.emit what might you get besides ValueError (if self.stream is closed)? I don't have time to go through each of the cases, but in general, it seems like the set of possible exceptions that could be raised at any given point in the code is generally pretty small. You should catch those exceptions and let the other stuff go. They are generally going to be programmer's errors and shouldn't be silently squashed. Skip ---------------------------------------------------------------------- Comment By: Vinay Sajip (vsajip) Date: 2006-12-21 23:42 Message: Logged In: YES user_id=308438 Originator: NO The reason for the fair number of bare excepts in logging is this: in many cases (e.g. long-running processes like Zope servers) users don't want their application to change behaviour just because of some exception thrown in logging. So, logging aims to be very quiet indeed and swallows exceptions, except SystemExit and KeyboardInterrupt in certain situations. Also, logging is one of the modules which is (meant to be) 1.5.2 compatible, and string exceptions are not that uncommon in older code. I've looked at bare excepts in logging and here's my summary on them: logging/__init__.py: ==================== currentframe(): Backward compatibility only, sys._getframe is used where available so currentframe() will only be called on rare occasions. LogRecord.__init__(): There's a try/bare except around calls to os.path.basename() and os.path.splitext(). 
I could add a raise clause for SystemExit/KeyboardInterrupt. StreamHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). shutdown(): Normally only called at system exit, and will re-raise everything if raiseExceptions is set (the default). logging/handlers.py: ==================== BaseRotatingHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). SocketHandler.createSocket(): I could add a raise clause for SystemExit/KeyboardInterrupt. SocketHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). SysLogHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). SMTPHandler.emit(): Should change bare except to ImportError for the formatdate import. Elsewhere, reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). NTEventLogHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). HTTPHandler.emit(): Reraises SystemExit and KeyboardInterrupt, and otherwise calls handleError() which raises everything if raiseExceptions is set (the default). logging/config.py: ==================== listen.ConfigStreamHandler.handle(): Reraises SystemExit and KeyboardInterrupt, prints everything else and continues - seems OK for a long-running thread. What do you think? ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2006-12-21 06:09 Message: Logged In: YES user_id=11375 Originator: NO Raymond said (in 2003) most of the remaining except: statements looked reasonable, so I'm changing this bug's summary to refer to the logging module and reassigning to vsajip. PEP 8 doesn't say anything about bare excepts; I'll bring this up on python-dev. ---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2003-12-13 03:21 Message: Logged In: YES user_id=80475 Hold off on logging for a bit. Vinay Sajip has other patches already under review. I'll ask him to fix up the bare excepts in conjunction with those patches. For the other modules, patches are welcome. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2003-12-11 12:54 Message: Logged In: YES user_id=6380 You're right. The logging module uses more blank except: clauses than I'm comfortable with. Anyone want to upload a patch set? ---------------------------------------------------------------------- Comment By: Grant Monroe (gmonroe) Date: 2003-12-11 12:50 Message: Logged In: YES user_id=929204 A good example of an incorrect use of a blanket "except:" clause is in __init__.py in the logging module. The emit method of the StreamHandler class should special-case KeyboardInterrupt. Something like this:

try:
    ....
except KeyboardInterrupt:
    raise
except:
    self.handleError(record)

---------------------------------------------------------------------- Comment By: Raymond Hettinger (rhettinger) Date: 2003-09-01 19:47 Message: Logged In: YES user_id=80475 Some efforts were made to remove many bare excepts prior to Py2.3a1. Briefly scanning those that remain, it looks like many of them are appropriate or best left alone. I recommend that this bug be closed unless someone sees something specific that demands a change. 
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-05-16 16:30 Message: Logged In: YES user_id=357491 threading.py is clear. Its blanket exceptions are for printing debug output, since exceptions in threads don't get passed back to the original frame anyway. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-08-13 20:15 Message: Logged In: YES user_id=44345 Checked in fileinput.py (v 1.15) with three except:'s tightened up. The comment in the code about IOError notwithstanding, I don't see how any of the three situations would have caught anything other than OSError. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-08-12 12:58 Message: Logged In: YES user_id=44345 Note that this particular item was expected to be an ongoing item, with no obvious closure. Some of the bare excepts will have subtle ramifications, and it's not always obvious what specific exceptions should be caught. I've made a few changes to my local source tree which I should check in. Rather than opening new tracker items, I believe those with checkin privileges should correct those flaws they identify and attach a comment which will alert those monitoring the item. Those people without checkin privileges should just attach a patch with a note. Skip ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2002-08-12 00:22 Message: Logged In: YES user_id=21627 My proposal would be to track this under a different issue: Terry, if you volunteer, please produce a new list of offenders (perhaps in an attachment to the report so it can be updated), and attach any fixes that you have to that report. People with CVS write access can then apply those patches and delete them from the report. 
If you do so, please post the new issue number in this report, so we have a link. ---------------------------------------------------------------------- Comment By: Terry J. Reedy (tjreedy) Date: 2002-08-11 11:16 Message: Logged In: YES user_id=593130 Remove types.py from the list. As distributed with 2.2.1, it has 5 'except xxxError:' statements but no offending bare except:'s. Skip (or anyone else): if/when you pursue this, I volunteer to do occasional sleuthing and send reports with suggestions and/or questions. Example: getpass.py has one 'offense':

try:
    fd = sys.stdin.fileno()
except:
    return default_getpass(prompt)

According to lib doc 2.2.8 File Objects (as I interpret it), fileno() should either work without exception or *not* be implemented. Suggestion: insert AttributeError. Question: do we protect against pseudofile objects that ignore the doc and have a fake .fileno() that raises NotImplementedError or whatever? ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-03-22 22:02 Message: Logged In: YES user_id=44345 As a partial fix, checked in changes for the following modules: mimetools.py (1.24) popen2.py (1.23) quopri.py (1.19) CGIHTTPServer.py (1.22) ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2002-03-20 13:24 Message: Logged In: YES user_id=44345 Here is a context diff with proposed changes for the following modules: CGIHTTPServer, cgi, cmd, code, fileinput, httplib, inspect, locale, mimetools, popen2, quopri ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-08-11 08:06 Message: Logged In: YES user_id=21627 Fixed urllib in 1.131 and types in 1.19. ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. 
(fdrake) Date: 2001-07-04 00:11 Message: Logged In: YES user_id=3066 Fixed modules mhlib and rfc822 (SF is having a problem generating the checkin emails, though). ---------------------------------------------------------------------- Comment By: Fred L. Drake, Jr. (fdrake) Date: 2001-05-11 12:40 Message: Logged In: YES user_id=3066 OK, I've fixed up a few more modules: anydbm chunk formatter htmllib mailcap pre pty I made one change to asyncore as well, but other bare except clauses remain there; I'm not sufficiently familiar with that code to just go digging into those. I also fixed an infraction in pstats, but left others for now. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2001-04-23 01:14 Message: Logged In: YES user_id=31435 Ping's intent is that pydoc work under versions of Python as early as 1.5.2, so that sys._getframe is off-limits in pydoc and its supporting code (like inspect.py). ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2001-04-23 00:32 Message: Logged In: YES user_id=21627 For inspect.py, why is it necessary to keep the old code at all? My proposal: remove currentframe altogether, and do currentframe = sys._getframe unconditionally. ---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-04-22 07:52 Message: Logged In: YES user_id=32065 I submitted a 4th patch. I'm starting to run out of easy cases... ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2001-04-19 02:15 Message: Logged In: YES user_id=44345 I believe the following patch is correct for the try/except in inspect.currentframe. Note that it fixes two problems. One, it avoids a bare except. Two, it gets rid of a string argument to the raise statement (string exceptions are now deprecated, right?). 
*** /tmp/skip/inspect.py Thu Apr 19 04:13:36 2001
--- /tmp/skip/inspect.py.~1.16~ Thu Apr 19 04:13:36 2001
***************
*** 643,650 ****
  def currentframe():
      """Return the frame object for the caller's stack frame."""
      try:
!         1/0
!     except ZeroDivisionError:
          return sys.exc_traceback.tb_frame.f_back
  if hasattr(sys, '_getframe'):
      currentframe = sys._getframe
--- 643,650 ----
  def currentframe():
      """Return the frame object for the caller's stack frame."""
      try:
!         raise 'catch me'
!     except:
          return sys.exc_traceback.tb_frame.f_back
  if hasattr(sys, '_getframe'):
      currentframe = sys._getframe
---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-04-17 08:27 Message: Logged In: YES user_id=32065 inspect.py uses sys._getframe if it's there; the other code is for backwards compatibility. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-11 10:24 Message: Logged In: YES user_id=6380 Actually, inspect.py should use sys._getframe()! And yes, KeyboardInterrupt is definitely one of the reasons why this is such a bad idiom... 
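Skip's patch above replaces the deprecated string exception with ZeroDivisionError while keeping the traceback-based frame trick for interpreters without sys._getframe. A self-contained sketch of the same idea on a current Python — using sys.exc_info() in place of the long-gone sys.exc_traceback; the function names are illustrative:

```python
import sys


def currentframe_fallback():
    # Raise and catch a specific exception so the traceback hands us the
    # caller's frame: the 1.5.2-compatible trick from the patch, with
    # ZeroDivisionError replacing the deprecated string exception.
    try:
        1 / 0
    except ZeroDivisionError:
        # tb_frame is this function's frame; f_back is the caller's.
        return sys.exc_info()[2].tb_frame.f_back


# Prefer the fast path when the interpreter provides it.
if hasattr(sys, '_getframe'):
    def currentframe():
        return sys._getframe(1)
else:
    currentframe = currentframe_fallback
```

Note how catching only ZeroDivisionError sidesteps the KeyboardInterrupt problem discussed in this thread: nothing else can escape that two-line try block, so there is nothing for a bare except to swallow.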
---------------------------------------------------------------------- Comment By: Walter Dörwald (doerwalter) Date: 2001-04-11 10:15 Message: Logged In: YES user_id=89016 > Can you identify modules where catching everything > is incorrect If "everything" includes KeyboardInterrupt, it's definitely incorrect, even in inspect.py's simple

try:
    raise 'catch me'
except:
    return sys.exc_traceback.tb_frame.f_back

which should probably be:

try:
    raise 'catch me'
except KeyboardInterrupt:
    raise
except:
    return sys.exc_traceback.tb_frame.f_back

---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2001-04-10 08:45 Message: Logged In: YES user_id=6380 I've applied the three patches you supplied. I agree with Martin that to do this right we'll have to tread carefully. But please go on! (No way more of this will find its way into 2.1 though.) ---------------------------------------------------------------------- Comment By: Itamar Shtull-Trauring (itamar) Date: 2001-03-30 02:54 Message: Logged In: YES user_id=32065 inspect.py should be removed from this list; the use is correct. In general, I just submitted this bug so that when people are editing a module they'll notice these things, since in some cases only someone who knows the code very well can know if the "except:" is needed or not. ---------------------------------------------------------------------- Comment By: Martin v. 
Löwis (loewis) Date: 2001-03-29 22:59 Message: Logged In: YES user_id=21627 Can you identify modules where catching everything is incorrect, and propose changes to correct them? This should be done one-by-one, with careful analysis in each case, and may well take months or years to complete. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=411881&group_id=5470 From noreply at sourceforge.net Tue Jan 23 05:45:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 20:45:33 -0800 Subject: [ python-Bugs-1483133 ] gen_iternext: Assertion `f->f_back != ((void *)0)' failed Message-ID: Bugs item #1483133, was opened at 2006-05-06 14:09 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1483133&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: svensoho (svensoho) Assigned to: Phillip J. Eby (pje) Summary: gen_iternext: Assertion `f->f_back != ((void *)0)' failed Initial Comment: Seems to be a similar bug to https://sourceforge.net/tracker/index.php?func=detail&aid=1257960&group_id=5470&atid=105470 (fixed). Couldn't trigger it with the same script, but could with a C application. The same source modification helps (at Objects/genobject.c:53). ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-22 20:45 Message: Logged In: YES user_id=33168 Originator: NO I agree with Martin. This is fixed in 2.5, but since we are no longer maintaining 2.4, it will not be fixed there. 
---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-22 00:06 Message: Logged In: YES user_id=21627 Originator: NO Python 2.4 is not actively maintained anymore. As this occurs in the debug build only, I recommend closing it as "won't fix". Just lowering the priority for now (svensoho, please don't change priorities). ---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-06-30 00:35 Message: Logged In: YES user_id=1518209 This is already fixed in 2.5: http://sourceforge.net/tracker/index.php?func=detail&aid=1257960&group_id=5470&atid=105470 2.4 has exactly the same problematic assertion; even the same modification helps. Fedora has fixed it in their distribution: https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=192592 ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-06-30 00:14 Message: Logged In: YES user_id=33168 Does this affect 2.5 or only 2.4? There were a fair number of generator changes in 2.5. ---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-05-26 07:42 Message: Logged In: YES user_id=1518209 This bug is blocking development of the PostgreSQL Python-based stored procedure language -- PL/Python. 
See http://archives.postgresql.org/pgsql-patches/2006-04/msg00265.php ---------------------------------------------------------------------- Comment By: svensoho (svensoho) Date: 2006-05-15 01:26 Message: Logged In: YES user_id=1518209 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1483133&group_id=5470 From noreply at sourceforge.net Tue Jan 23 07:21:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 22 Jan 2007 22:21:41 -0800 Subject: [ python-Bugs-1560179 ] Better/faster implementation of os.path.basename/dirname Message-ID: Bugs item #1560179, was opened at 2006-09-17 16:55 Message generated for change (Comment added) made by pylucid You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1560179&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Closed Resolution: Accepted Priority: 5 Private: No Submitted By: Michael Gebetsroither (einsteinmg) Assigned to: Nobody/Anonymous (nobody) Summary: Better/faster implementation of os.path.basename/dirname Initial Comment: hi, basename/dirname could do better (especially on long pathnames):

def basename(p):
    return split(p)[1]

def dirname(p):
    return split(p)[0]

Both construct the base name and dirname and discard the unused one. What about this?

def basename(p):
    i = p.rfind('/') + 1
    return p[i:]

def dirname(p):
    i = p.rfind('/') + 1
    return p[:i]

greets, michael ---------------------------------------------------------------------- Comment By: Jens Diemer (pylucid) Date: 2007-01-23 07:21 Message: Logged In: YES user_id=1330780 Originator: NO A faster implementation is ok... But why was only posixpath patched? Why weren't ntpath and macpath updated, too? 
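A sketch of the rfind-based approach proposed in this report, with the trailing-slash handling that posixpath.split performs added so the results stay identical to the split-based versions. This assumes pure POSIX '/' separators (which is why, as noted in the thread, ntpath and macpath need their own treatment):

```python
def basename(p):
    # Everything after the last '/' -- avoids building the (head, tail)
    # tuple that split() would construct and immediately discard.
    i = p.rfind('/') + 1
    return p[i:]


def dirname(p):
    i = p.rfind('/') + 1
    head = p[:i]
    # Strip trailing slashes unless the head consists only of slashes
    # (e.g. '//'), mirroring what posixpath.split() does.
    if head and head != '/' * len(head):
        head = head.rstrip('/')
    return head
```

Note that the bare `p[:i]` version quoted in the initial comment leaves a trailing slash (`dirname('/a/b')` would give `'/a/'`); the extra rstrip branch above is what the committed posixpath change needs in order to match `split(p)[0]` exactly.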
---------------------------------------------------------------------- Comment By: Josiah Carlson (josiahcarlson) Date: 2006-10-17 00:23 Message: Logged In: YES user_id=341410 I note that in the current SVN, dirname uses a test of "if head and head != '/'*len(head):" to check for the path being all '/'; this could be replaced by "if head and head.count('/') != len(head):", but it probably isn't terribly important. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-10-12 15:08 Message: Logged In: YES user_id=849994 Committed in rev. 52316. ---------------------------------------------------------------------- Comment By: Michael Gebetsroither (einsteinmg) Date: 2006-09-18 14:42 Message: Logged In: YES user_id=1600082 posixpath with this patch passes all tests from test_posixpath cleanly. Benchmark: basename( 310 ) means basename called with a 310-character path.

sum = 0.0435626506805 min = 4.19616699219e-05 posixpath.basename( 310 )
sum = 0.152147769928 min = 0.00014591217041 posixpath_orig.basename( 310 )
sum = 0.0436658859253 min = 4.07695770264e-05 posixpath.basename( 106 )
sum = 0.117312431335 min = 0.000112771987915 posixpath_orig.basename( 106 )
sum = 0.0426909923553 min = 4.07695770264e-05 posixpath.basename( 21 )
sum = 0.113305330276 min = 0.000110864639282 posixpath_orig.basename( 21 )
sum = 0.12392115593 min = 0.000121831893921 posixpath.dirname( 310 )
sum = 0.152860403061 min = 0.00014591217041 posixpath_orig.dirname( 310 )
sum = 0.0942873954773 min = 9.08374786377e-05 posixpath.dirname( 106 )
sum = 0.114937067032 min = 0.000111818313599 posixpath_orig.dirname( 106 )
sum = 0.0918889045715 min = 8.79764556885e-05 posixpath.dirname( 21 )
sum = 0.114675760269 min = 0.000109910964966 posixpath_orig.dirname( 21 )

greets ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1560179&group_id=5470 
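Josiah's two all-slashes tests above are interchangeable predicates; a quick check of that equivalence (function names here are just for illustration):

```python
def all_slashes_mul(head):
    # Test from the committed SVN code: compare against a
    # same-length run of '/'.
    return head == '/' * len(head)


def all_slashes_count(head):
    # Proposed alternative: every character in head is a '/'.
    return head.count('/') == len(head)


# Both agree on empty strings, pure-slash runs, and mixed paths.
for head in ('', '/', '//', '/a', 'a/', '///b///'):
    assert all_slashes_mul(head) == all_slashes_count(head)
```

Either spelling works; the multiplication version allocates a throwaway string of len(head) slashes, while the count version scans the string once, which is presumably why it was suggested, though as the comment says the difference is unlikely to matter.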
From noreply at sourceforge.net Tue Jan 23 22:13:42 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 23 Jan 2007 13:13:42 -0800 Subject: [ python-Bugs-1579370 ] Segfault provoked by generators and exceptions Message-ID: Bugs item #1579370, was opened at 2006-10-18 04:23 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 9 Private: No Submitted By: Mike Klaas (mklaas) Assigned to: Nobody/Anonymous (nobody) Summary: Segfault provoked by generators and exceptions Initial Comment: A reproducible segfault when using heavily-nested generators and exceptions. Unfortunately, I haven't yet been able to provoke this behaviour with a standalone python2.5 script. There are, however, no third-party c extensions running in the process so I'm fairly confident that it is a problem in the core. The gist of the code is a series of nested generators which leave scope when an exception is raised. This exception is caught and re-raised in an outer loop. The old exception was holding on to the frame which was keeping the generators alive, and the sequence of generator destruction and new finalization caused the segfault. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-23 22:13 Message: Logged In: YES user_id=21627 Originator: NO This is now fixed in r53531 and r53532. For the trunk, it is likely that f_tstate will get eliminated altogether in the near future. People who had the problem are really encouraged to test 2.5.1c1 when it is released. 
---------------------------------------------------------------------- Comment By: Andrew Waters (awaters) Date: 2007-01-22 09:46 Message: Logged In: YES user_id=1418249 Originator: NO A quick test on code that always segfaulted with unpatched Python 2.5 seems to work. Needs more extensive testing... ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-22 08:51 Message: Logged In: YES user_id=21627 Originator: NO I don't like mklaas' patch, since I think it is conceptually wrong to have PyTraceBack_Here() use the frame's thread state (mklaas describes it as dirty, and I agree). I'm proposing an alternative patch (tr.diff); please test this as well. File Added: tr.diff ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 08:01 Message: Logged In: YES user_id=33168 Originator: NO Bumping priority to see if this should go into 2.5.1. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-04 11:42 Message: Logged In: YES user_id=21627 Originator: NO Why do frame objects have a thread state in the first place? In particular, why does PyTraceBack_Here get the thread state from the frame, instead of using the current thread? Introduction of f_tstate goes back to r7882, but it is not clear why it was done that way. ---------------------------------------------------------------------- Comment By: Andrew Waters (awaters) Date: 2007-01-04 10:35 Message: Logged In: YES user_id=1418249 Originator: NO This fixes the segfault problem that I was able to reliably reproduce on Linux. We need to get this applied (assuming it is the correct fix) to the source to make Python 2.5 usable for me in production code. 
---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-11-27 19:41 Message: Logged In: YES user_id=1611720 Originator: YES The following patch resets the thread state of the generator when it is resumed, which prevents the segfault for me:

Index: Objects/genobject.c
===================================================================
--- Objects/genobject.c (revision 52849)
+++ Objects/genobject.c (working copy)
@@ -77,6 +77,7 @@
 	Py_XINCREF(tstate->frame);
 	assert(f->f_back == NULL);
 	f->f_back = tstate->frame;
+	f->f_tstate = tstate;
 	gen->gi_running = 1;
 	result = PyEval_EvalFrameEx(f, exc);

---------------------------------------------------------------------- Comment By: Eric Noyau (eric_noyau) Date: 2006-11-27 19:07 Message: Logged In: YES user_id=1388768 Originator: NO We are experiencing the same segfault in our application, reliably. Running our unit test suite just segfaults every time on both Linux and Mac OS X. Applying Martin's patch fixes the segfault, and makes everything fine and dandy, at the cost of some memory leaks if I understand properly. This particular bug prevents us from upgrading to python 2.5 in production. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-28 07:18 Message: Logged In: YES user_id=31435 > I tried Tim's hope.py on Linux x86_64 and > Mac OS X 10.4 with debug builds and neither > one crashed. Tim's guess looks pretty damn > good too. Neal, note that it's the /Windows/ malloc that fills freed memory with "dangerous bytes" in a debug build -- this really has nothing to do with that it's a debug build of /Python/ apart from that on Windows a debug build of Python also links in the debug version of Microsoft's malloc. The valgrind report is pointing at the same thing. Whether this leads to a crash is purely an accident of when and how the system malloc happens to reuse the freed memory. 
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-28 06:56 Message: Logged In: YES user_id=33168 Mike, what platform are you having the problem on? I tried Tim's hope.py on Linux x86_64 and Mac OS X 10.4 with debug builds and neither one crashed. Tim's guess looks pretty damn good too. Here's the result of valgrind:

Invalid read of size 8
   at 0x4CEBFE: PyTraceBack_Here (traceback.c:117)
   by 0x49C1F1: PyEval_EvalFrameEx (ceval.c:2515)
   by 0x4F615D: gen_send_ex (genobject.c:82)
   by 0x4F6326: gen_close (genobject.c:128)
   by 0x4F645E: gen_del (genobject.c:163)
   by 0x4F5F00: gen_dealloc (genobject.c:31)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x44534E: dict_dealloc (dictobject.c:801)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x4664FF: subtype_dealloc (typeobject.c:686)
   by 0x44D207: _Py_Dealloc (object.c:1928)
   by 0x42325D: instancemethod_dealloc (classobject.c:2287)
 Address 0x56550C0 is 88 bytes inside a block of size 152 free'd
   at 0x4A1A828: free (vg_replace_malloc.c:233)
   by 0x4C3899: tstate_delete_common (pystate.c:256)
   by 0x4C3926: PyThreadState_DeleteCurrent (pystate.c:282)
   by 0x4D4043: t_bootstrap (threadmodule.c:448)
   by 0x4B24C48: pthread_start_thread (in /lib/libpthread-0.10.so)

The only way I can think to fix this is to keep a set of active generators in the PyThreadState and call gen_send_ex(exc=1) for all the active generators before killing the tstate in t_bootstrap. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-10-19 09:58 Message: Logged In: YES user_id=6656 > and for some reason Python uses the system malloc directly > to obtain memory for thread states. This bit is fairly easy: they are allocated without the GIL being held, which breaks an assumption of PyMalloc. No idea about the real problem, sadly. 
---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-19 02:38 Message: Logged In: YES user_id=31435 I've attached a much simplified pure-Python script (hope.py) that reproduces a problem very quickly, on Windows, in a /debug/ build of current trunk. It typically prints "exiting generator" and "joined thread" at most twice before crapping out. At the time, the `next` argument to newtracebackobject() is 0xdddddddd, and tracing back a level shows that, in PyTraceBack_Here(), frame->tstate is entirely filled with 0xdd bytes. Note that this is not a debug-build obmalloc gimmick! This is Microsoft's similar debug-build gimmick for their malloc, and for some reason Python uses the system malloc directly to obtain memory for thread states. The Microsoft debug free() fills newly-freed memory with 0xdd, which has the same meaning as the debug-build obmalloc's DEADBYTE (0xdb). So somebody is accessing a thread state here after it's been freed. Best guess is that the generator is getting "cleaned up" after the thread that created it has gone away, so the generator's frame's f_tstate is trash. Note that a PyThreadState (a frame's f_tstate) is /not/ a Python object -- it's just a raw C struct, and its lifetime isn't controlled by refcounts. ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-19 02:12 Message: Logged In: YES user_id=1611720 Despite Tim's reassurance, I'm afraid that Martin's patch does in fact prevent the segfault. Sounds like it also introduces a memleak. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2006-10-18 23:57 Message: Logged In: YES user_id=31435 > Can anybody tell why gi_frame *isn't* incref'ed when > the generator is created? As documented (in concrete.tex), PyGen_New(f) steals a reference to the frame passed to it. 
Its only call site (well, in the core) is in ceval.c, which returns immediately after PyGen_New takes over ownership of the frame the caller created:

"""
/* Create a new generator that owns the ready to run frame
 * and return that as the value. */
return PyGen_New(f);
"""

In short, that PyGen_New() doesn't incref the frame passed to it is intentional. It's possible that the intent is flawed ;-), but offhand I don't see how. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-10-18 23:05 Message: Logged In: YES user_id=21627 Can you please review/try the attached patch? Can anybody tell why gi_frame *isn't* incref'ed when the generator is created? ---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 21:47 Message: Logged In: YES user_id=1611720 I cannot yet produce a Python-only script which reproduces the problem, but I can give an overview. There is a generator running in one thread, an exception being raised in another thread, and as a consequence, the generator in the first thread is garbage-collected (triggering an exception due to the new generator cleanup). The problem is extremely sensitive to timing--often the insertion/removal of print statements, or reordering the code, causes the problem to vanish, which is confounding my ability to create a simple test script. 
import threading
import time

def getdocs():
    def f():
        while True:
            f()
    yield None

# -----------------------------------------------------------------------------

class B(object):
    def __init__(self,):
        pass

    def doit(self):
        # must be an instance var to trigger segfault
        self.docIter = getdocs()
        print self.docIter  # this is the generator referred-to in the traceback
        for i, item in enumerate(self.docIter):
            if i > 9:
                break
        print 'exiting generator'

class A(object):
    """ Process entry point / main thread """
    def __init__(self):
        while True:
            try:
                self.func()
            except Exception, e:
                print 'right after raise'

    def func(self):
        b = B()
        thread = threading.Thread(target=b.doit)
        thread.start()
        start_t = time.time()
        while True:
            try:
                if time.time() - start_t > 1:
                    raise Exception
            except Exception:
                print 'right before raise'
                # SIGSEGV here. If this is changed to
                # 'break', no segfault occurs
                raise

if __name__ == '__main__':
    A()

---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 21:37 Message: Logged In: YES user_id=1611720 I've produced a simplified traceback with a single generator. Note the frame being used in the traceback (#0) is the same frame being dealloc'd (#11).
The relevant call in traceback.c is:

PyTraceBack_Here(PyFrameObject *frame)
{
    PyThreadState *tstate = frame->f_tstate;
    PyTracebackObject *oldtb = (PyTracebackObject *) tstate->curexc_traceback;
    PyTracebackObject *tb = newtracebackobject(oldtb, frame);

and I can verify that oldtb contains garbage:

(gdb) print frame
$1 = (PyFrameObject *) 0x8964d94
(gdb) print frame->f_tstate
$2 = (PyThreadState *) 0x895b178
(gdb) print $2->curexc_traceback
$3 = (PyObject *) 0x66

#0  0x080e4296 in PyTraceBack_Here (frame=0x8964d94) at Python/traceback.c:94
#1  0x080b9ab7 in PyEval_EvalFrameEx (f=0x8964d94, throwflag=1) at Python/ceval.c:2459
#2  0x08101a40 in gen_send_ex (gen=0xb7cca4ac, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#3  0x08101c0f in gen_close (gen=0xb7cca4ac, args=0x0) at Objects/genobject.c:128
#4  0x08101cde in gen_del (self=0xb7cca4ac) at Objects/genobject.c:163
#5  0x0810195b in gen_dealloc (gen=0xb7cca4ac) at Objects/genobject.c:31
#6  0x080815b9 in dict_dealloc (mp=0xb7cc913c) at Objects/dictobject.c:801
#7  0x080927b2 in subtype_dealloc (self=0xb7cca76c) at Objects/typeobject.c:686
#8  0x0806028d in instancemethod_dealloc (im=0xb7d07f04) at Objects/classobject.c:2285
#9  0x080815b9 in dict_dealloc (mp=0xb7cc90b4) at Objects/dictobject.c:801
#10 0x080927b2 in subtype_dealloc (self=0xb7cca86c) at Objects/typeobject.c:686
#11 0x081028c5 in frame_dealloc (f=0x8964a94) at Objects/frameobject.c:416
#12 0x080e41b1 in tb_dealloc (tb=0xb7cc1fcc) at Python/traceback.c:34
#13 0x080e41c2 in tb_dealloc (tb=0xb7cc1f7c) at Python/traceback.c:33
#14 0x08080dca in insertdict (mp=0xb7f99824, key=0xb7ccd020, hash=1492466088, value=0xb7ccd054) at Objects/dictobject.c:394
#15 0x080811a4 in PyDict_SetItem (op=0xb7f99824, key=0xb7ccd020, value=0xb7ccd054) at Objects/dictobject.c:619
#16 0x08082dc6 in PyDict_SetItemString (v=0xb7f99824, key=0x8129284 "exc_traceback", item=0xb7ccd054) at Objects/dictobject.c:2103
#17 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb7ccd054) at Python/sysmodule.c:82
#18 0x080bc9e5 in PyEval_EvalFrameEx (f=0x895f934, throwflag=0) at Python/ceval.c:2954
#19 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f6ade8, globals=0xb7fafa44, locals=0x0, args=0xb7cc5ff8, argcount=1, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#20 0x08104083 in function_call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/funcobject.c:517
#21 0x0805a660 in PyObject_Call (func=0xb7cc7294, arg=0xb7cc5fec, kw=0x0) at Objects/abstract.c:1860

---------------------------------------------------------------------- Comment By: Mike Klaas (mklaas) Date: 2006-10-18 04:23 Message: Logged In: YES user_id=1611720

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1208400192 (LWP 26235)]
0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94
94          if ((next != NULL && !PyTraceBack_Check(next)) ||
(gdb) bt
#0  0x080e4296 in PyTraceBack_Here (frame=0x9c2d7b4) at Python/traceback.c:94
#1  0x080b9ab7 in PyEval_EvalFrameEx (f=0x9c2d7b4, throwflag=1) at Python/ceval.c:2459
#2  0x08101a40 in gen_send_ex (gen=0xb64f880c, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#3  0x08101c0f in gen_close (gen=0xb64f880c, args=0x0) at Objects/genobject.c:128
#4  0x08101cde in gen_del (self=0xb64f880c) at Objects/genobject.c:163
#5  0x0810195b in gen_dealloc (gen=0xb64f880c) at Objects/genobject.c:31
#6  0x080b9912 in PyEval_EvalFrameEx (f=0x9c2802c, throwflag=1) at Python/ceval.c:2491
#7  0x08101a40 in gen_send_ex (gen=0xb64f362c, arg=0x81333e0, exc=1) at Objects/genobject.c:82
#8  0x08101c0f in gen_close (gen=0xb64f362c, args=0x0) at Objects/genobject.c:128
#9  0x08101cde in gen_del (self=0xb64f362c) at Objects/genobject.c:163
#10 0x0810195b in gen_dealloc (gen=0xb64f362c) at Objects/genobject.c:31
#11 0x080815b9 in dict_dealloc (mp=0xb64f4a44) at Objects/dictobject.c:801
#12 0x080927b2 in subtype_dealloc (self=0xb64f340c) at Objects/typeobject.c:686
#13 0x0806028d in instancemethod_dealloc (im=0xb796a0cc) at Objects/classobject.c:2285
#14 0x080815b9 in dict_dealloc (mp=0xb64f78ac) at Objects/dictobject.c:801
#15 0x080927b2 in subtype_dealloc (self=0xb64f810c) at Objects/typeobject.c:686
#16 0x081028c5 in frame_dealloc (f=0x9c272bc) at Objects/frameobject.c:416
#17 0x080e41b1 in tb_dealloc (tb=0xb799166c) at Python/traceback.c:34
#18 0x080e41c2 in tb_dealloc (tb=0xb4071284) at Python/traceback.c:33
#19 0x080e41c2 in tb_dealloc (tb=0xb7991824) at Python/traceback.c:33
#20 0x08080dca in insertdict (mp=0xb7f56824, key=0xb3fb9930, hash=1492466088, value=0xb3fb9914) at Objects/dictobject.c:394
#21 0x080811a4 in PyDict_SetItem (op=0xb7f56824, key=0xb3fb9930, value=0xb3fb9914) at Objects/dictobject.c:619
#22 0x08082dc6 in PyDict_SetItemString (v=0xb7f56824, key=0x8129284 "exc_traceback", item=0xb3fb9914) at Objects/dictobject.c:2103
#23 0x080e2837 in PySys_SetObject (name=0x8129284 "exc_traceback", v=0xb3fb9914) at Python/sysmodule.c:82
#24 0x080bc9e5 in PyEval_EvalFrameEx (f=0x9c10e7c, throwflag=0) at Python/ceval.c:2954
#25 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc890, globals=0xb7bbe57c, locals=0x0, args=0x9b8e2ac, argcount=1, kws=0x9b8e2b0, kwcount=0, defs=0xb7b7aed8, defcount=1, closure=0x0) at Python/ceval.c:2833
#26 0x080bd62a in PyEval_EvalFrameEx (f=0x9b8e16c, throwflag=0) at Python/ceval.c:3662
#27 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7bbc848, globals=0xb7bbe57c, locals=0x0, args=0xb7af9d58, argcount=1, kws=0x9b7a818, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#28 0x08104083 in function_call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/funcobject.c:517
#29 0x0805a660 in PyObject_Call (func=0xb7b79c34, arg=0xb7af9d4c, kw=0xb7962c64) at Objects/abstract.c:1860
#30 0x080bcb4b in PyEval_EvalFrameEx (f=0x9b82c0c, throwflag=0) at Python/ceval.c:3846
#31 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7cd6608, globals=0xb7cd4934, locals=0x0, args=0x9b7765c, argcount=2, kws=0x9b77664, kwcount=0, defs=0x0, defcount=0, closure=0xb7cfe874) at Python/ceval.c:2833
#32 0x080bd62a in PyEval_EvalFrameEx (f=0x9b7751c, throwflag=0) at Python/ceval.c:3662
#33 0x080bdf70 in PyEval_EvalFrameEx (f=0x9a9646c, throwflag=0) at Python/ceval.c:3652
#34 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39728, globals=0xb7f6ca44, locals=0x0, args=0x9b7a00c, argcount=0, kws=0x9b7a00c, kwcount=0, defs=0x0, defcount=0, closure=0xb796410c) at Python/ceval.c:2833
#35 0x080bd62a in PyEval_EvalFrameEx (f=0x9b79ebc, throwflag=0) at Python/ceval.c:3662
#36 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f39770, globals=0xb7f6ca44, locals=0x0, args=0x99086c0, argcount=0, kws=0x99086c0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#37 0x080bd62a in PyEval_EvalFrameEx (f=0x9908584, throwflag=0) at Python/ceval.c:3662
#38 0x080bfda3 in PyEval_EvalCodeEx (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at Python/ceval.c:2833
#39 0x080bff32 in PyEval_EvalCode (co=0xb7f397b8, globals=0xb7f6ca44, locals=0xb7f6ca44) at Python/ceval.c:494
#40 0x080ddff1 in PyRun_FileExFlags (fp=0x98a4008, filename=0xbfffd4a3 "scoreserver.py", start=257, globals=0xb7f6ca44, locals=0xb7f6ca44, closeit=1, flags=0xbfffd298) at Python/pythonrun.c:1264
#41 0x080de321 in PyRun_SimpleFileExFlags (fp=Variable "fp" is not available.
) at Python/pythonrun.c:870
#42 0x08056ac4 in Py_Main (argc=1, argv=0xbfffd334) at Modules/main.c:496
#43 0x00a69d5f in __libc_start_main () from /lib/libc.so.6
#44 0x08056051 in _start ()

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1579370&group_id=5470 From noreply at sourceforge.net Tue Jan 23 22:26:46 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 23 Jan 2007 13:26:46 -0800 Subject: [ python-Bugs-1377858 ] segfaults when using __del__ and weakrefs Message-ID: Bugs item #1377858, was opened at 2005-12-10 22:20 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open >Resolution: Accepted Priority: 9 Private: No Submitted By: Carl Friedrich Bolz (cfbolz) >Assigned to: Brett Cannon (bcannon) Summary: segfaults when using __del__ and weakrefs Initial Comment: You can segfault Python by creating a weakref to an object in its __del__ method, storing it somewhere and then trying to dereference the weakref afterwards. The attached file shows the described behaviour. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-23 22:26 Message: Logged In: YES user_id=21627 Originator: NO The first comment has a non-sensical (to me) phrase: "rely on part of theof the object". Otherwise, it looks fine to me. Please apply, if you can, before 2.5c1.
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-17 19:38 Message: Logged In: YES user_id=357491 Originator: NO I have just been waiting on someone to do a final code review on it. As soon as someone else signs off I will commit it. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 08:02 Message: Logged In: YES user_id=33168 Originator: NO Brett, Michael, Armin, can we get this patch checked in for 2.5.1? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-08-20 06:31 Message: Logged In: YES user_id=357491 After finally figuring out where *list was made NULL (and adding a comment about it where it occurs), I added a test to test_weakref.py. Didn't try to tackle classic classes. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2006-08-12 13:31 Message: Logged In: YES user_id=4771 The clear_weakref(*list) only clears the first weakref to the object. You need a while loop in your patch. (attached proposed fix) Now we're left with fixing the same bug in old-style classes (surprise, surprise), and turning the crasher into a test. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-06-29 19:36 Message: Logged In: YES user_id=357491 So after staring at this crasher it seemed to me that clearing the new weakrefs w/o calling their finalizers after calling the object's finalizer was the best solution. I couldn't think of any other good way to communicate to the new weakrefs that the object they refer to was no longer viable memory without doing clear_weakref() work by hand. Attached is a patch to do this. Michael, can you have a look?
---------------------------------------------------------------------- Comment By: Georg Brandl (birkenfeld) Date: 2006-01-10 20:29 Message: Logged In: YES user_id=1188172 Added to outstanding_crashes.py. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 12:58 Message: Logged In: YES user_id=6656 Hmm, maybe the referenced mayhem is more to do with clearing __dict__ than calling __del__. What breaks if we do things in this order:

1. call __del__
2. clear weakrefs
3. clear __dict__

? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 12:54 Message: Logged In: YES user_id=6656 Hmm, I was kind of hoping this report would get more attention. The problem is obvious if you read typeobject.c around line 660: the weakref list is cleared before __del__ is called, so any weakrefs added during the execution of __del__ are never informed of the object's death. One fix for this would be to clear the weakref list _after_ calling __del__ but that led to other mayhem in ways I haven't bothered to understand (see SF bug #742911). I guess we could just clear out any weakrefs created in __del__ without calling their callbacks.
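That last suggestion is the behaviour the eventual fix provides: a weakref created inside __del__ is cleared without invoking its callback, so dereferencing it later safely yields None instead of touching freed memory. A minimal sketch of the crasher pattern (a hypothetical stand-in for the attached file, which is not reproduced here; on interpreters with the fix applied it runs cleanly):

```python
import gc
import weakref

stash = []  # outlives the object so the weakref can be dereferenced later

class C(object):
    def __del__(self):
        # Create and store a new weakref to self during finalization --
        # exactly the pattern this bug report is about.
        stash.append(weakref.ref(self))

c = C()
del c
gc.collect()
# With the fix, the late-created weakref just reports the object as dead.
assert stash[0]() is None
```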
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 From noreply at sourceforge.net Wed Jan 24 00:22:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 23 Jan 2007 15:22:27 -0800 Subject: [ python-Bugs-1377858 ] segfaults when using __del__ and weakrefs Message-ID: Bugs item #1377858, was opened at 2005-12-10 13:20 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 >Status: Closed Resolution: Accepted Priority: 9 Private: No Submitted By: Carl Friedrich Bolz (cfbolz) Assigned to: Brett Cannon (bcannon) Summary: segfaults when using __del__ and weakrefs Initial Comment: You can segfault Python by creating a weakref to an object in its __del__ method, storing it somewhere and then trying to dereference the weakref afterwards. The attached file shows the described behaviour. ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-23 15:22 Message: Logged In: YES user_id=357491 Originator: NO rev. 53533 (for 25-maint) and rev. 53535 (trunk) have the patch with an improved comment. Py3K should eventually have its crasher file for this test deleted since classic classes will no longer be an issue. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-23 13:26 Message: Logged In: YES user_id=21627 Originator: NO The first comment has a non-sensical (to me) phrase: "rely on part of theof the object". Otherwise, it looks fine to me.
Please apply, if you can, before 2.5c1. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-17 10:38 Message: Logged In: YES user_id=357491 Originator: NO I have just been waiting on someone to do a final code review on it. As soon as someone else signs off I will commit it. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-16 23:02 Message: Logged In: YES user_id=33168 Originator: NO Brett, Michael, Armin, can we get this patch checked in for 2.5.1? ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-08-19 21:31 Message: Logged In: YES user_id=357491 After finally figuring out where *list was made NULL (and adding a comment about it where it occurs), I added a test to test_weakref.py. Didn't try to tackle classic classes. ---------------------------------------------------------------------- Comment By: Armin Rigo (arigo) Date: 2006-08-12 04:31 Message: Logged In: YES user_id=4771 The clear_weakref(*list) only clears the first weakref to the object. You need a while loop in your patch. (attached proposed fix) Now we're left with fixing the same bug in old-style classes (surprise, surprise), and turning the crasher into a test. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2006-06-29 10:36 Message: Logged In: YES user_id=357491 So after staring at this crasher it seemed to me that clearing the new weakrefs w/o calling their finalizers after calling the object's finalizer was the best solution. I couldn't think of any other good way to communicate to the new weakrefs that the object they refer to was no longer viable memory without doing clear_weakref() work by hand. Attached is a patch to do this. Michael, can you have a look?
---------------------------------------------------------------------- Comment By: Georg Brandl (birkenfeld) Date: 2006-01-10 11:29 Message: Logged In: YES user_id=1188172 Added to outstanding_crashes.py. ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:58 Message: Logged In: YES user_id=6656 Hmm, maybe the referenced mayhem is more to do with clearing __dict__ than calling __del__. What breaks if we do things in this order:

1. call __del__
2. clear weakrefs
3. clear __dict__

? ---------------------------------------------------------------------- Comment By: Michael Hudson (mwh) Date: 2006-01-09 03:54 Message: Logged In: YES user_id=6656 Hmm, I was kind of hoping this report would get more attention. The problem is obvious if you read typeobject.c around line 660: the weakref list is cleared before __del__ is called, so any weakrefs added during the execution of __del__ are never informed of the object's death. One fix for this would be to clear the weakref list _after_ calling __del__ but that led to other mayhem in ways I haven't bothered to understand (see SF bug #742911). I guess we could just clear out any weakrefs created in __del__ without calling their callbacks.
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1377858&group_id=5470 From noreply at sourceforge.net Wed Jan 24 05:22:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 23 Jan 2007 20:22:15 -0800 Subject: [ python-Bugs-1643150 ] Grammatical Error Message-ID: Bugs item #1643150, was opened at 2007-01-23 20:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643150&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Steve Miller (stmille) Assigned to: Nobody/Anonymous (nobody) Summary: Grammatical Error Initial Comment: http://docs.python.org/tut/node10.html s/One my also/One may also/ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643150&group_id=5470 From noreply at sourceforge.net Wed Jan 24 06:46:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 23 Jan 2007 21:46:57 -0800 Subject: [ python-Bugs-1643150 ] Grammatical Error Message-ID: Bugs item #1643150, was opened at 2007-01-24 13:22 Message generated for change (Comment added) made by quiver You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643150&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Documentation Group: None >Status: Closed Resolution: None Priority: 5 Private: No Submitted By: Steve Miller (stmille) Assigned to: Nobody/Anonymous (nobody) Summary: Grammatical Error Initial Comment: http://docs.python.org/tut/node10.html s/One my also/One may also/ ---------------------------------------------------------------------- >Comment By: George Yoshida (quiver) Date: 2007-01-24 14:46 Message: Logged In: YES user_id=671362 Originator: NO Thanks for the report. But this is already fixed in svn trunk. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643150&group_id=5470 From noreply at sourceforge.net Wed Jan 24 11:22:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 02:22:47 -0800 Subject: [ python-Bugs-1643369 ] function breakpoints in pdb Message-ID: Bugs item #1643369, was opened at 2007-01-24 10:22 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643369&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: decitre (decitre) Assigned to: Nobody/Anonymous (nobody) Summary: function breakpoints in pdb Initial Comment: The pdb.Pdb.find_function method is not able to recognize class methods, since the regular expression it uses only looks for "def" at the beginning of lines. Please replace r'def\s+%s\s*[(]' % funcname with r'\s*def\s+%s\s*[(]' % funcname. Test file in attachment. This file shows that pdb can set a breakpoint on foo but not on the readline function.
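The effect of the proposed regex change can be checked directly (a sketch, assuming, as the report implies, that each source line is matched against the pattern from its start):

```python
import re

funcname = 'readline'
old = re.compile(r'def\s+%s\s*[(]' % funcname)      # pattern currently in pdb
new = re.compile(r'\s*def\s+%s\s*[(]' % funcname)   # proposed replacement

line = '    def readline(self):'  # a method definition, indented inside a class
assert old.match(line) is None          # anchored "def" never matches indented methods
assert new.match(line) is not None      # leading whitespace is now allowed
assert old.match('def readline(x):') is not None  # top-level functions match either way
```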
Regards, Emmanuel www.e-rsatz.info ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643369&group_id=5470 From noreply at sourceforge.net Wed Jan 24 11:23:00 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 02:23:00 -0800 Subject: [ python-Feature Requests-1643370 ] recursive urlparse Message-ID: Feature Requests item #1643370, was opened at 2007-01-24 10:23 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1643370&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Nobody/Anonymous (nobody) Summary: recursive urlparse Initial Comment: The urlparse module is incomplete. There is no convenient high-level function to parse a URL down into atomic chunks, urldecode the query, and bring it to an array (or dictionary, for that matter), so that you can modify that dictionary and reassemble it into a query again using nothing more than simple array manipulations.
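The parse/modify/reassemble round trip described in the request can already be sketched from existing pieces (a sketch only: urlparseex and urlunparseex are the hypothetical names this request proposes, the module is spelled urllib.parse in later Pythons, and the per-path-segment params level is omitted):

```python
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

def urlparseex(url):
    # Hypothetical helper in the spirit of the request: dissect a URL into
    # a dictionary of dictionaries/strings, decoding the query into a dict.
    p = urlsplit(url)
    return {'scheme': p.scheme,
            'netloc': {'username': p.username, 'password': p.password,
                       'server': p.hostname, 'port': p.port},
            'path': p.path,
            'query': parse_qs(p.query),
            'fragment': p.fragment}

def urlunparseex(d):
    # Hypothetical counterpart: reassemble the pieces, re-encoding the query.
    n = d['netloc']
    auth = ''
    if n['username']:
        auth = n['username'] + (':' + n['password'] if n['password'] else '') + '@'
    host = auth + (n['server'] or '') + (':%d' % n['port'] if n['port'] else '')
    return urlunsplit((d['scheme'], host, d['path'],
                       urlencode(d['query'], doseq=True), d['fragment']))

d = urlparseex('http://user:pw@example.com:8080/a/b?x=1&y=2#frag')
d['query']['y'] = ['3']        # plain dictionary manipulation
print(urlunparseex(d))         # http://user:pw@example.com:8080/a/b?x=1&y=3#frag
```

Once the query is decoded, ordinary dictionary manipulation suffices, which is the crux of the request.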
This kind of function is as universal and flexible as the low-level API, but in comparison it considerably speeds up the development process when the subject is URLs. I propose a urlparseex(urlstring) function that will dissect the URL into a dictionary of appropriate dictionaries or strings and decode all % entities:

scheme     0     string
netloc     1     dictionary
  username   1.1   string or whatever
  password   1.2   string or whatever
  server     1.3   hostname string
  port       1.4   port integer
path       2     string
params     3     ordered dictionary of path components, for the sake of reassembling them later (sorry, I have little pythons in my head to replace "ordered dictionary" with something more appropriate), where the respective path part entry is also a dictionary of parameters
query      4     dictionary
fragment   5     string

There must also be a counterpart urlunparseex(dictionary) to reassemble the URL and re-encode entities. Reasons behind the decision:

- 90% of the time you need to decode % entities, so this must be the default (whoever needs to keep them encoded is in the minority and may use other functions)
- an atomic, recursive format is needed to be able to easily change any URL component and reassemble it back
- get a simple swiss-army knife for high-level (read: logical) URL operations in one module

http://docs.python.org/lib/module-urlparse.html There is also this proposal below. It is a little bit different, but shows that after four years URL handling problems are still relevant.
http://sourceforge.net/tracker/index.php?func=detail&aid=600362&group_id=5470&atid=355470 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1643370&group_id=5470 From noreply at sourceforge.net Wed Jan 24 11:32:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 02:32:55 -0800 Subject: [ python-Bugs-1643369 ] function breakpoints in pdb Message-ID: Bugs item #1643369, was opened at 2007-01-24 10:22 Message generated for change (Settings changed) made by decitre You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643369&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None >Priority: 6 Private: No Submitted By: decitre (decitre) Assigned to: Nobody/Anonymous (nobody) Summary: function breakpoints in pdb Initial Comment: The pdb.Pdb.find_function method is not able to recognize class methods, since the regular expression it uses only looks for "def" at the beginning of lines. Please replace r'def\s+%s\s*[(]' % funcname with r'\s*def\s+%s\s*[(]' % funcname. Test file in attachment. This file shows that pdb can set a breakpoint on foo but not on the readline function.
Regards, Emmanuel www.e-rsatz.info ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643369&group_id=5470 From noreply at sourceforge.net Wed Jan 24 16:53:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 07:53:43 -0800 Subject: [ python-Bugs-1362475 ] Text.edit_modified() doesn't work Message-ID: Bugs item #1362475, was opened at 2005-11-21 03:13 Message generated for change (Comment added) made by mkiever You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1362475&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Tkinter Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ron Provost (ronpro at cox.net) Assigned to: Martin v. Löwis (loewis) Summary: Text.edit_modified() doesn't work Initial Comment: Tkinter's Text widget has a method edit_modified() which should return True if the modified flag of the widget has been set, False otherwise. It should also be possible to pass True or False into the method to set the flag to a desired state. The implementation retrieves the correct value, but then calls self._getints( result ). This causes an exception to be thrown. In my build, I found that changing the implementation to the following appears to fix the function.

return self.tk.call( self._w, 'edit', 'modified', arg )

---------------------------------------------------------------------- Comment By: Matthias Kievernagel (mkiever) Date: 2007-01-24 15:53 Message: Logged In: YES user_id=1477880 Originator: NO Posted patch 1643641. The patch removes the offending _getints call.
Greetings, Matthias Kievernagel ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1362475&group_id=5470 From noreply at sourceforge.net Wed Jan 24 18:20:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 09:20:22 -0800 Subject: [ python-Bugs-1633941 ] for line in sys.stdin: doesn't notice EOF the first time Message-ID: Bugs item #1633941, was opened at 2007-01-12 05:34 Message generated for change (Comment added) made by draghuram You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Matthias Klose (doko) Assigned to: Nobody/Anonymous (nobody) Summary: for line in sys.stdin: doesn't notice EOF the first time Initial Comment: [forwarded from http://bugs.debian.org/315888] for line in sys.stdin: doesn't notice EOF the first time when reading from a tty. The test program:

import sys
for line in sys.stdin:
    print line,
print "eof"

A sample session:

liw at esme$ python foo.py
foo   <--- I pressed Enter and then Ctrl-D
foo   <--- then this appeared, but not more
eof   <--- this only came when I pressed Ctrl-D a second time
liw at esme$

Seems to me that there is some buffering issue where Python needs to read end-of-file twice to notice it on all levels. Once should be enough. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-24 12:20 Message: Logged In: YES user_id=984087 Originator: NO I tested two kinds of inputs with iter and noiter versions. I posted "noiter" code and OP's code is the iter version.
1) For input without a newline at all ("line1"), behaviour is the same with both versions. 2) The noiter version prints "eof" with "line1\n" while the iter version requires an additional CTRL-D. This is because the iter version uses read-ahead, which is implemented using fread(). A simple C program using fread() behaves exactly the same way. I tested on Linux but am sure the Windows behaviour (as posted by gagenellina) has the same cause. Since the issue is with the platform's stdio library, I don't think Python should fix anything here. However, it may be worthwhile to mention something about this in the documentation. I will open a bug for this purpose. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 12:45 Message: Logged In: YES user_id=984087 Originator: NO Ok. This may sound stupid but I couldn't find a way to attach a file to this bug report. So I am copying the code here:

************
import sys
line = sys.stdin.readline()
while (line):
    print line,
    line = sys.stdin.readline()
print "eof"
*************

---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 12:37 Message: Logged In: YES user_id=984087 Originator: NO Sorry for my duplicate comment. It was a mistake. On closer examination, the OP's description does seem to indicate some issue. Please look at (attached) stdin_noiter.py which uses readline() directly and it does not have the problem described here. It properly detects EOF on the first CTRL-D. This points to some problem with the iterator function fileobject.c:file_iternext(). I think that the first CTRL-D might be getting lost somewhere in the read-ahead code (which only comes into the picture with the iterator).
---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug.

$ cat testfile
line1
line2
$ python foo.py < testfile

This command behaves as expected. The behaviour described above happens only when the input is from a tty. That could be because of the terminal settings, where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Raghuram Devarakonda (draghuram) Date: 2007-01-22 11:34 Message: Logged In: YES user_id=984087 Originator: NO I am not entirely sure that this is a bug.

$ cat testfile
line1
line2
$ python foo.py < testfile

This command behaves as expected. The behaviour described above happens only when the input is from a tty. That could be because of the terminal settings, where characters may be buffered until a newline is entered. ---------------------------------------------------------------------- Comment By: Gabriel Genellina (gagenellina) Date: 2007-01-13 23:20 Message: Logged In: YES user_id=479790 Originator: NO The same thing occurs on Windows.
Even worse, if the line does not end with CR, Ctrl-Z (EOF on Windows, equivalent to Ctrl-D) has to be pressed three times:

D:\Temp>python foo.py
foo     <--- I pressed Enter
^Z      <--- I pressed Ctrl-Z and then Enter again
foo     <--- this appeared
^Z      <--- I pressed Ctrl-Z and then Enter again

D:\Temp>python foo.py
foo^Z   <--- I pressed Ctrl-Z and then Enter
^Z      <--- cursor stays here; I pressed Ctrl-Z and then Enter again
^Z      <--- cursor stays here; I pressed Ctrl-Z and then Enter again
foo

---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633941&group_id=5470 From noreply at sourceforge.net Wed Jan 24 18:28:19 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 09:28:19 -0800 Subject: [ python-Bugs-1643712 ] Emphasize buffering issues when sys.stdin is used Message-ID: Bugs item #1643712, was opened at 2007-01-24 12:28 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643712&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Raghuram Devarakonda (draghuram) Assigned to: Nobody/Anonymous (nobody) Summary: Emphasize buffering issues when sys.stdin is used Initial Comment: Hi, Please look at the bug: http://sourceforge.net/tracker/index.php?func=detail&aid=1633941&group_id=5470&atid=105470 As I commented there, I don't think any fix is needed, but it appears to me that mentioning this case in the docs wouldn't hurt.
Something like this can be added to next() description at: http://docs.python.org/lib/bltin-file-objects.html "Please consider buffering issues while using ``for line in sys.stdin`` when the input is being interactively entered". ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643712&group_id=5470 From noreply at sourceforge.net Wed Jan 24 19:14:14 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 10:14:14 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 16:14 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. 
If is_tripped is different from zero, we loop through all signals calling the registered Python handlers, and after that we zero is_tripped. This seems to be OK, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens, and then I've written the attached test program to reproduce the problem. When we run this program, the handler for SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' approximately every 3 seconds, and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Wed Jan 24 20:46:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 11:46:32 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 16:14 Message generated for change (Comment added) made by ulissesf You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds aprox. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- >Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 17:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. 
We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:19:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:19:33 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 13:14 Message generated for change (Comment added) made by tim_one You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
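The race and the proposed fix can be modeled in a small simulation. This is an illustrative Python sketch, not the real C code in Modules/signalmodule.c; check_signals_buggy and check_signals_fixed are invented names for the two orderings being discussed:

```python
# Model of the interpreter's signal bookkeeping: a per-signal tripped
# flag plus a global is_tripped counter.
class SignalState:
    def __init__(self, nsig):
        self.is_tripped = 0
        self.tripped = [False] * nsig

    def deliver(self, signum):
        # what the C-level signal handler does when a signal arrives
        self.tripped[signum] = True
        self.is_tripped += 1

def check_signals_buggy(state, handlers):
    if not state.is_tripped:
        return
    for signum, fired in enumerate(state.tripped):
        if fired:
            state.tripped[signum] = False
            handlers[signum](signum)
    state.is_tripped = 0   # BUG: also wipes out signals delivered above

def check_signals_fixed(state, handlers):
    if not state.is_tripped:
        return
    state.is_tripped = 0   # zero first; a later delivery re-arms it
    for signum, fired in enumerate(state.tripped):
        if fired:
            state.tripped[signum] = False
            handlers[signum](signum)

def run(check):
    # signal 0 plays "SIGALRM", signal 1 plays "SIGIO"; the SIGIO handler
    # raises SIGALRM after the scan loop has already passed index 0.
    log = []
    state = SignalState(2)
    handlers = [lambda n: log.append("alrm"),
                lambda n: (log.append("io"), state.deliver(0))]
    state.deliver(1)
    check(state, handlers)   # one PyErr_CheckSignals() pass
    check(state, handlers)   # a later pass from the main loop
    return log

assert run(check_signals_buggy) == ["io"]           # SIGALRM handler lost
assert run(check_signals_fixed) == ["io", "alrm"]   # handled on the next pass
```

The fixed variant also exhibits the minor leftover pathology mentioned above: a pass can find is_tripped nonzero yet no flags set, in which case it scans all signals and calls no handler.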
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds aprox. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- >Comment By: Tim Peters (tim_one) Date: 2007-01-24 15:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! 
I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 14:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? 
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:20:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:20:12 -0800 Subject: [ python-Bugs-1642054 ] Python 2.5 gets curses.h warning on HPUX Message-ID: Bugs item #1642054, was opened at 2007-01-22 19:27 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roy Smith (roysmith) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 gets curses.h warning on HPUX Initial Comment: I downloaded http://www.python.org/ftp/python/2.5/Python-2.5.tgz and tried to build it on "HP-UX glade B.11.11 U 9000/800 unknown". When I ran "./configure", I got warnings that "curses.h: present but cannot be compiled". See attached log file. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-24 15:20 Message: Logged In: YES user_id=11375 Originator: NO You'll have to help us some more. This is apparently happening because HP-UX's curses.h file needs some other header file to be included first; not having an HP-UX machine, I have no way to figure out which other header file is needed. Could you please try to figure out which file is necessary? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:20:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:20:50 -0800 Subject: [ python-Bugs-1635363 ] Add command line help to windows unistall binary Message-ID: Bugs item #1635363, was opened at 2007-01-14 15:58 Message generated for change (Settings changed) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635363&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) >Assigned to: Thomas Heller (theller) Summary: Add command line help to windows unistall binary Initial Comment: It is impossible to remove package installed with uninstall binary created with Distutils unless you know that you need to specify -u switch. "E:\ENV\Python24\Removescons.exe" -u "E:\ENV\Python24\scons-wininst.log" If there are any additional switches - they could be displayed in MsgBox instead of/along with error message. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1635363&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:24:10 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:24:10 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 13:14 Message generated for change (Comment added) made by tony_nelson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? 
I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds aprox. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 15:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 15:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 14:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). 
This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:35:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:35:28 -0800 Subject: [ python-Feature Requests-1635363 ] Add command line help to windows unistall binary Message-ID: Feature Requests item #1635363, was opened at 2007-01-14 21:58 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635363&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Distutils >Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Thomas Heller (theller) Summary: Add command line help to windows unistall binary Initial Comment: It is impossible to remove package installed with uninstall binary created with Distutils unless you know that you need to specify -u switch. "E:\ENV\Python24\Removescons.exe" -u "E:\ENV\Python24\scons-wininst.log" If there are any additional switches - they could be displayed in MsgBox instead of/along with error message. 
---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-24 21:35 Message: Logged In: YES user_id=11105 Originator: NO I do not remember what my original intention was in not documenting the usage of the bdist_wininst uninstaller. However, this is the first time that this request has come up, so it seems there is no pressing need to run the uninstaller manually. You could (and probably should) use the control panel app to remove packages. In any case, you have now discovered the magic that is needed, so you can use it. I would prefer not to 'fix' this - especially since there are other problems with bdist_wininst, I guess it will be superseded by bdist_msi sooner or later. Changing this to 'feature request'. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635363&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:36:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:36:48 -0800 Subject: [ python-Bugs-901727 ] extra_path kwarg to setup() undocumented Message-ID: Bugs item #901727, was opened at 2004-02-21 16:04 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=901727&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Bob Ippolito (etrepum) >Assigned to: Nobody/Anonymous (nobody) Summary: extra_path kwarg to setup() undocumented Initial Comment: I can't find documentation for extra_path anywhere,
but this is the documentation I found by searching google ( http:// mail.python.org/pipermail/distutils-sig/2000-March/000803.html ), from an old USAGE.txt that sits in the CVS attic now: extra_path: information about extra intervening directories to put between 'install_lib' and 'install_sitelib', along with a .pth file to ensure that those directories wind up in sys.path. Can be a 1- or 2-tuple, or a comma-delimited string with 1 or 2 parts. The 1-element case is simpler: the .pth file and directory have the same name (except for ".pth"). Eg. if extra_path is "foo" or ("foo",), then Distutils sets 'install_site_lib' to 'install_lib' + "site-packages/foo", and puts foo.path in the "site-packages" directory; it contains just "foo". The 2-element case allows the .pth file and intervening directories to be named differently; eg. if 'extra_path' is ("foo","foo/bar/baz") (or "foo,foo/bar/baz"), then Distutils will set 'install_site_lib' to 'install_lib' + "site-packages/foo/bar/baz", and put "foo.pth" containing "foo/bar/baz" in the "site-packages" directory. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-24 21:36 Message: Logged In: YES user_id=11105 Originator: NO Unassign, I won't work on this. ---------------------------------------------------------------------- Comment By: Ronald Oussoren (ronaldoussoren) Date: 2005-05-17 18:16 Message: Logged In: YES user_id=580910 extra_path also doesn't have a command-line equivalent. 
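The splitting rule quoted from the old USAGE.txt can be restated as a short sketch. split_extra_path is a hypothetical helper written for illustration; distutils' real logic lives in its install command and is not in this form:

```python
# Sketch of how an extra_path value is split into a .pth filename and
# the intervening directories, per the quoted USAGE.txt description.
def split_extra_path(extra_path):
    """Return (pth_filename, intervening_dirs) for an extra_path value."""
    if isinstance(extra_path, str):
        extra_path = extra_path.split(",")
    if len(extra_path) == 1:
        # 1-element case: the .pth file and the directory share a name
        path_file = extra_dirs = extra_path[0]
    elif len(extra_path) == 2:
        # 2-element case: they may be named differently
        path_file, extra_dirs = extra_path
    else:
        raise ValueError("extra_path must have 1 or 2 elements")
    return path_file + ".pth", extra_dirs

print(split_extra_path("foo"))               # ('foo.pth', 'foo')
print(split_extra_path("foo,foo/bar/baz"))   # ('foo.pth', 'foo/bar/baz')
```

So with extra_path="foo", modules land in site-packages/foo and foo.pth (containing "foo") is dropped into site-packages so that directory still ends up on sys.path.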
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=901727&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:38:03 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:38:03 -0800 Subject: [ python-Bugs-914375 ] modulefinder is not documented Message-ID: Bugs item #914375, was opened at 2004-03-11 20:33 Message generated for change (Settings changed) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=914375&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Fred L. Drake, Jr. (fdrake) Assigned to: Thomas Heller (theller) Summary: modulefinder is not documented Initial Comment: The "modulefinder" module has not been documented. Now that it is a module, it needs to be documented. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-06-14 08:42 Message: Logged In: YES user_id=849994 There seems to be a libmodulefinder.tex, but it is not very thorough. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2005-06-16 20:51 Message: Logged In: YES user_id=11105 If Just doesn't appear ;-) please assign to me. I should at least describe the api that is actually *used* in py2exe. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2005-06-16 20:21 Message: Logged In: YES user_id=11375 Just, it looks like you're responsible for modulefinder, so I'm reassigning this to you. 
It would be helpful if you could take a look at the docs and see if anything is documented that should be private. Please unassign (or close?) this bug if you're not connected to modulefinder. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2004-08-07 22:14 Message: Logged In: YES user_id=11375 I've written a crude first cut at this, but the module's code is very hard to read and it's not clear which bits are public and which aren't. The module's author should do this task (and use some docstrings in the code, too) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=914375&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:47:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:47:52 -0800 Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall Message-ID: Feature Requests item #1635335, was opened at 2007-01-14 21:00 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Thomas Heller (theller) Summary: Add registry functions to windows postinstall Initial Comment: It would be useful to add regkey_created() or regkey_modified() to windows postinstall scripts along with directory_created() and file_created(). Useful for adding installed package to App Paths. 
---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-24 21:47 Message: Logged In: YES user_id=11105 Originator: NO General comments: There are some problems with bdist_wininst that I assume will get worse in the future, especially with the postinstall script, because of different versions of the MS C runtime library. The installers that bdist_wininst creates are linked against a certain version, which must be the same version that the Python runtime uses. If they do not match, the output of the postinstall script will not be displayed in the GUI, or, in the worst case, it could crash. The second problem is that bdist_wininst will not work with 64-bit Pythons. All this *could* probably be fixed, of course, but since bdist_msi does *not* have these problems, IMO bdist_msi will supersede bdist_wininst sooner or later. About the concrete problem: Originally, when bdist_wininst was first implemented, Python did not have the _winreg module, so it was not possible to create or remove registry entries in the install script or postinstall script anyway, and these functions would not have made any sense at all. They could probably make sense now, but it is equally possible to modify the registry in the postinstall script at installation time, and revert these changes in the postinstall script at uninstallation time. I would prefer not to make these changes, since a workaround is possible. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 19:07 Message: Logged In: YES user_id=21627 Originator: NO Thomas, what do you think? ---------------------------------------------------------------------- Comment By: anatoly techtonik (techtonik) Date: 2007-01-20 15:26 Message: Logged In: YES user_id=669020 Originator: YES The Windows postinstall script is bundled with the installation and launched after installation and just before uninstallation.
It is described here: http://docs.python.org/dist/postinstallation-script.html#SECTION005310000000000000000 Where should these be defined? I do not know - there are already some functions that are said to be "available as additional built-in functions in the installation script" on the page above. The purpose is to be able to create/delete registry keys during installation. This should also be reflected in the installation log file with an appropriate status code, so that users can be aware of what's going on. I think the functions needed are already defined in http://docs.python.org/lib/module--winreg.html but the module is very low-level. I'd rather use an AutoIt-like API - http://www.autoitscript.com/autoit3/docs/functions/RegRead.htm http://www.autoitscript.com/autoit3/docs/functions/RegWrite.htm http://www.autoitscript.com/autoit3/docs/functions/RegDelete.htm ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 11:55 Message: Logged In: YES user_id=21627 Originator: NO Can you please elaborate? Where should these functions be defined, what should they do, and when should they be invoked (by what code)? Also, what is a "windows postinstall script"?
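The workaround Thomas describes (touching the registry directly from the postinstall script) can be sketched with the stdlib registry module. This is a hedged illustration only: the key path and helper names (reg_write, reg_delete, APP_PATHS_KEY) are invented for this example and are not part of any distutils API; the import is guarded so the sketch also loads on non-Windows systems.

```python
# Sketch: manipulate the registry from a bdist_wininst postinstall
# script using the stdlib winreg module (spelled _winreg on Python 2).
try:
    import winreg
except ImportError:          # not on Windows; keep the sketch importable
    winreg = None

# Example target: an "App Paths" entry, as suggested in the request.
APP_PATHS_KEY = r"Software\Microsoft\Windows\CurrentVersion\App Paths\scons.exe"

def reg_write(subkey, value):
    """Install time: create HKLM\\<subkey> and set its default value."""
    if winreg is None:
        return False
    key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, subkey)
    try:
        winreg.SetValueEx(key, None, 0, winreg.REG_SZ, value)
    finally:
        winreg.CloseKey(key)
    return True

def reg_delete(subkey):
    """Uninstall time (the script is re-run with -remove): delete the key."""
    if winreg is None:
        return False
    winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, subkey)
    return True
```

The postinstall script would call reg_write() when invoked with -install and reg_delete() when invoked with -remove, mirroring what a built-in regkey_created() might record.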
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:48:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:48:21 -0800 Subject: [ python-Bugs-1599254 ] mailbox: other programs' messages can vanish without trace Message-ID: Bugs item #1599254, was opened at 2006-11-19 11:03 Message generated for change (Comment added) made by akuchling You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 7 Private: No Submitted By: David Watson (baikie) Assigned to: A.M. Kuchling (akuchling) Summary: mailbox: other programs' messages can vanish without trace Initial Comment: The mailbox classes based on _singlefileMailbox (mbox, MMDF, Babyl) implement the flush() method by writing the new mailbox contents into a temporary file which is then renamed over the original. Unfortunately, if another program tries to deliver messages while mailbox.py is working, and uses only fcntl() locking, it will have the old file open and be blocked waiting for the lock to become available. Once mailbox.py has replaced the old file and closed it, making the lock available, the other program will write its messages into the now-deleted "old" file, consigning them to oblivion. I've caused Postfix on Linux to lose mail this way (although I did have to turn off its use of dot-locking to do so). A possible fix is attached. Instead of new_file being renamed, its contents are copied back to the original file. If file.truncate() is available, the mailbox is then truncated to size. 
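The difference between the rename strategy and the proposed copy-back strategy can be sketched with plain files. This is a toy model, not mailbox.py's actual code: the locking is omitted, and the file and message names are invented. A "deliverer" holds an open file object on the mailbox while a "rewriter" flushes it; with rename, the delivery lands in the now-unlinked old inode and vanishes, while with copy-back the inode stays the same and the delivery survives.

```python
import os
import shutil
import tempfile

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "mbox")
with open(path, "w") as f:
    f.write("From old-message\n")

# --- rename strategy: the concurrent append is lost ---
deliverer = open(path, "a")           # other program's open handle
tmp = path + ".tmp"
with open(tmp, "w") as f:
    f.write("From rewritten\n")
os.rename(tmp, path)                  # replaces only the directory entry
deliverer.write("From delivered\n")   # goes to the unlinked old inode
deliverer.close()
with open(path) as f:
    assert "delivered" not in f.read()    # the delivery vanished

# --- copy-back strategy: same inode, so the append survives ---
deliverer = open(path, "a")
with open(tmp, "w") as f:
    f.write("From rewritten-again\n")
with open(tmp) as src, open(path, "r+") as dst:
    shutil.copyfileobj(src, dst)      # copy contents back in place
    dst.truncate()                    # shrink if new contents are shorter
os.remove(tmp)
deliverer.write("From delivered\n")   # appended to the same, live file
deliverer.close()
with open(path) as f:
    assert "delivered" in f.read()
```

This behaviour is POSIX-specific: `os.rename()` swaps the directory entry atomically, but any process that already holds a descriptor on the old file keeps writing to the old, now-anonymous inode.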
Otherwise, if truncation is required, it's truncated to zero length beforehand by reopening self._path with mode wb+. In the latter case, there's a check to see if the mailbox was replaced while we weren't looking, but there's still a race condition. Any alternative ideas? Incidentally, this fixes a problem whereby Postfix wouldn't deliver to the replacement file as it had the execute bit set. ---------------------------------------------------------------------- >Comment By: A.M. Kuchling (akuchling) Date: 2007-01-24 15:48 Message: Logged In: YES user_id=11375 Originator: NO I've strengthened the warning again. The MH bug in unified2 is straightforward: MH.remove() opens a file object, locks it, closes the file object, and then tries to unlock it. Presumably the MH test case never bothered locking the mailbox before making changes. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-22 15:24 Message: Logged In: YES user_id=1504904 Originator: YES So what you propose to commit for 2.5 is basically mailbox-unified2 (your mailbox-unified-patch, minus the _toc clearing)? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-22 10:46 Message: Logged In: YES user_id=11375 Originator: NO This would be an API change, and therefore out-of-bounds for 2.5. I suggest giving up on this for 2.5.1 and only fixing it in 2.6. I'll add another warning to the docs, and maybe to the module as well. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-21 17:10 Message: Logged In: YES user_id=1504904 Originator: YES Hold on, I have a plan. If _toc is only regenerated on locking, or at the end of a flush(), then the only way self._pending can be set at that time is if the application has made modifications before calling lock().
If we make that an exception-raising offence, then we can assume that self._toc is a faithful representation of the last known contents of the file. That means we can preserve the existing message keys on a reread without any of that _user_toc nonsense. Diff attached, to apply on top of mailbox-unified2. It's probably had even less review and testing than the previous version, but it appears to pass all the regression tests and doesn't change any existing semantics. File Added: mailbox-update-toc-new.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-20 22:16 Message: Logged In: YES user_id=11375 Originator: NO I'm starting to lose track of all the variations on the bug. Maybe we should just add more warnings to the documentation about locking the mailbox when modifying it and not try to fix this at all. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-20 13:20 Message: Logged In: YES user_id=1504904 Originator: YES Hang on. If a message's key changes after recreating _toc, that does not mean that another process has modified the mailbox. If the application removes a message and then (inadvertently) causes _toc to be regenerated, the keys of all subsequent messages will be decremented by one, due only to the application's own actions. That's what happens in the "broken locking" test case: the program intends to remove message 0, flush, and then remove message 1, but because _toc is regenerated in between, message 1 is renumbered as 0, message 2 is renumbered as 1, and so the program deletes message 2 instead. To clear _toc in such code without attempting to preserve the message keys turns possible data loss (in the case that another process modified the mailbox) into certain data loss. That's what I'm concerned about. ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-19 10:24 Message: Logged In: YES user_id=11375 Originator: NO After reflection, I don't think the potential changing actually makes things any worse. _generate() always starts numbering keys with 1, so if a message's key changes because of lock()'s re-reading, that means someone else has already modified the mailbox. Without the ToC clearing, you're already fated to have a corrupted mailbox because the new mailbox will be written using outdated file offsets. With the ToC clearing, you delete the wrong message. Neither outcome is good, but data is lost either way. The new behaviour is maybe a little bit better in that you're losing a single message but still generating a well-formed mailbox, and not a randomly jumbled mailbox. I suggest applying the patch to clear self._toc, and noting in the documentation that keys might possibly change after doing a lock(). ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:15 Message: Logged In: YES user_id=1504904 Originator: YES This version passes the new tests (it fixes the length checking bug, and no longer clears self._toc), but goes back to failing test_concurrent_add. File Added: mailbox-unified2-module.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-18 13:14 Message: Logged In: YES user_id=1504904 Originator: YES Unfortunately, there is a problem with clearing _toc: it's basically the one I alluded to in my comment of 2006-12-16. Back then I thought it could be caught by the test you issue the warning for, but the application may instead do its second remove() *after* the lock() (so that self._pending is not set at lock() time), using the key carried over from before it called unlock(). As before, this would result in the wrong message being deleted.
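The lock()-time behaviour debated in this thread - clear the cached table of contents so it is re-read under the lock, unless unflushed changes are pending, in which case keep it and warn - might be sketched roughly like this. This is a toy class with invented names, not the actual mailbox-pending-lock patch:

```python
import warnings

class SingleFileMailboxSketch:
    """Toy sketch of the pending-lock idea: lock() normally discards the
    cached ToC so it gets rebuilt under the lock; if unflushed changes
    are pending, the ToC is kept and a warning is logged instead."""

    def __init__(self):
        self._toc = None       # key -> message, built lazily
        self._pending = False  # True once unflushed changes exist
        self._locked = False

    def _generate_toc(self):
        self._toc = {}         # real code would scan the mailbox file here

    def lock(self):
        if self._pending:
            warnings.warn("mailbox modified before lock(); "
                          "keeping the cached table of contents")
        else:
            self._toc = None   # force a re-read under the lock
        self._locked = True

    def remove(self, key):
        if self._toc is None:
            self._generate_toc()
        self._toc.pop(key, None)
        self._pending = True
```

As the thread notes, this still leaves a loophole: a program that carries a key across unlock()/lock() and only modifies the mailbox after locking never trips the `_pending` warning.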
I've added a test case for this (diff attached), and a bug I found in the process whereby flush() wasn't updating the length, which could cause subsequent flushes to fail (I've got a fix for this). These seem to have turned up a third bug in the MH class, but I haven't looked at that yet. File Added: mailbox-unified2-test.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:06 Message: Logged In: YES user_id=11375 Originator: NO Add mailbox-unified-patch. File Added: mailbox-unified-patch.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 16:05 Message: Logged In: YES user_id=11375 Originator: NO mailbox-pending-lock is the difference between David's copy-back-new + fcntl-warning and my -unified patch, uploaded so that he can comment on the changes. The only change is to make _singleFileMailbox.lock() clear self._toc, forcing a re-read of the mailbox file. If _pending is true, the ToC isn't cleared and a warning is logged. I think this lets existing code run (albeit possibly with a warning if the mailbox is modified before .lock() is called), but fixes the risk of missing changes written by another process. Triggering a new warning is sort of an API change, but IMHO it's still worth backporting; code that triggers this warning is at risk of losing messages or corrupting the mailbox. Clearing the _toc on .lock() is also sort of an API change, but it's necessary to avoid the potential data loss. It may lead to some redundant scanning of mailbox files, but this price is worth paying, I think; people probably don't have 2Gb mbox files (I hope not, anyway!) and no extra read is done if you create the mailbox and immediately lock it before looking anything up. Neal: if you want to discuss this patch or want an explanation of something, feel free to chat with me about it. 
I'll wait a day or two and see if David spots any problems. If nothing turns up, I'll commit it to both trunk and release25-maint. File Added: mailbox-pending-lock.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 15:53 Message: Logged In: YES user_id=11375 Originator: NO mailbox-unified-patch contains David's copy-back-new and fcntl-warn patches, plus the test-mailbox patch and some additional changes to mailbox.py from me. (I'll upload a diff to David's diffs in a minute.) This is the change I'd like to check in. test_mailbox.py now passes, as does the mailbox-break.py script I'm using. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-17 14:56 Message: Logged In: YES user_id=11375 Originator: NO Committed a modified version of the doc. patch in rev. 53472 (trunk) and rev. 53474 (release25-maint). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-17 01:48 Message: Logged In: YES user_id=33168 Originator: NO Andrew, do you need any help with this? ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-15 14:01 Message: Logged In: YES user_id=11375 Originator: NO Comment from Andrew MacIntyre (os2vacpp is the OS/2 that lacks ftruncate()): ================ I actively support the OS/2 EMX port (sys.platform == "os2emx"; build directory is PC/os2emx). I would like to keep the VAC++ port alive, but the reality is I don't have the resources to do so. The VAC++ port was the subject of discussion about removal of build support support from the source tree for 2.6 - I don't recall there being a definitive outcome, but if someone does delete the PC/os2vacpp directory, I'm not in a position to argue. 
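The no-truncate() fallback mentioned earlier in this thread (truncating to zero length beforehand by reopening self._path with mode wb+) can be sketched as follows. The function name is invented; this only illustrates why mode "wb+" serves as a truncate substitute:

```python
def rewrite_without_truncate(path, new_contents):
    """Fallback for platforms lacking file.truncate(): opening an
    existing file with mode 'wb+' truncates it to zero length, after
    which the new contents are written out."""
    f = open(path, "wb+")   # truncates the existing file to zero bytes
    try:
        f.write(new_contents)
    finally:
        f.close()
```

The drawback discussed above applies: between the reopen and the rewrite, another process can see an empty or partially written mailbox, and the reopen itself is a window in which the file may be replaced.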
AMK: (mailbox.py has a separate section of code used when file.truncate() isn't available, and the existence of this section is bedevilling me. It would be convenient if platforms without file.truncate() weren't a factor; then this branch could just be removed. In your opinion, would it hurt OS/2 users of mailbox.py if support for platforms without file.truncate() was removed?) aimacintyre: No. From what documentation I can quickly check, ftruncate() operates on file descriptors rather than FILE pointers. As such I am sure that, if it became an issue, it would not be difficult to write a ftruncate() emulation wrapper for the underlying OS/2 APIs that implement the required functionality. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-13 13:32 Message: Logged In: YES user_id=1504904 Originator: YES I like the warning idea - it seems appropriate if the problem is relatively rare. How about this? File Added: mailbox-fcntl-warn.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 14:41 Message: Logged In: YES user_id=11375 Originator: NO One OS/2 port lacks truncate(), and so does RISCOS. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-12 13:41 Message: Logged In: YES user_id=11375 Originator: NO I realized that making flush() invalidate keys breaks the final example in the docs, which loops over inbox.iterkeys() and removes messages, doing a pack() after each message. Which platforms lack file.truncate()? Windows has it; POSIX has it, so modern Unix variants should all have it. Maybe mailbox should simply raise an exception (or trigger a warning?) if truncate is missing, and we should then assume that flush() has no effect upon keys. ---------------------------------------------------------------------- Comment By: A.M. 
Kuchling (akuchling) Date: 2007-01-12 12:12 Message: Logged In: YES user_id=11375 Originator: NO So shall we document flush() as invalidating keys, then? ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 14:57 Message: Logged In: YES user_id=1504904 Originator: YES Oops, length checking had made the first two lines of this patch redundant; update-toc applies OK with fuzz. File Added: mailbox-copy-back-new.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:30 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-copy-back-53287.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2007-01-06 10:24 Message: Logged In: YES user_id=1504904 Originator: YES Aack, yes, that should be _next_user_key. Attaching a fixed version. I've been thinking, though: flush() does in fact invalidate the keys on platforms without a file.truncate(), when the fcntl() lock is momentarily released afterwards. It seems hard to avoid this as, perversely, fcntl() locks are supposed to be released automatically on all file descriptors referring to the file whenever the process closes any one of them - even one the lock was never set on. So, code using mailbox.py on such platforms could inadvertently be carrying keys across an unlocked period, which is not made safe by the update-toc patch (as it's only meant to avert disasters resulting from doing this *and* rebuilding the table of contents, *assuming* that another process hasn't deleted or rearranged messages). File Added: mailbox-update-toc-fixed.diff ---------------------------------------------------------------------- Comment By: A.M.
Kuchling (akuchling) Date: 2007-01-05 14:51 Message: Logged In: YES user_id=11375 Originator: NO Question about mailbox-update-doc: the add() method still returns self._next_key - 1; should this be self._next_user_key - 1? The keys in _user_toc are the ones returned to external users of the mailbox, right? (A good test case would be to initialize _next_key to 0 and _next_user_key to a different value like 123456.) I'm still staring at the patch, trying to convince myself that it will help -- haven't spotted any problems, but this bug is making me nervous... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-05 14:24 Message: Logged In: YES user_id=11375 Originator: NO As a step toward improving matters, I've attached the suggested doc patch (for both 25-maint and trunk). It encourages people to use Maildir :), explicitly states that modifications should be bracketed by lock(), and fixes the examples to match. It does not say that keys are invalidated by doing a flush(), because we're going to try to avoid the necessity for that. File Added: mailbox-docs.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 14:48 Message: Logged In: YES user_id=11375 Originator: NO Committed length-checking.diff to trunk in rev. 53110. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:19 Message: Logged In: YES user_id=1504904 Originator: YES File Added: mailbox-test-lock.diff ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-20 14:17 Message: Logged In: YES user_id=1504904 Originator: YES Yeah, I think that should definitely go in. ExternalClashError or a subclass sounds fine to me (although you could make a whole taxonomy of these errors, really). 
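The length check endorsed just above (remember the file's size, and refuse to flush if another program has changed it, raising ExternalClashError) might look roughly like this. The class and method names are invented for illustration; this is not the code in length-checking.diff:

```python
import os

class ExternalClashError(Exception):
    """Raised when some other program has modified the mailbox file."""

class LengthCheckingMailbox:
    """Toy sketch of the size check: record the file length whenever we
    write it, and refuse to flush if the size on disk no longer matches."""

    def __init__(self, path):
        self._path = path
        self._file_length = os.path.getsize(path)

    def add(self, message_bytes):
        with open(self._path, "ab") as f:
            f.write(message_bytes)
        self._file_length = os.path.getsize(self._path)

    def flush(self, new_contents):
        if os.path.getsize(self._path) != self._file_length:
            raise ExternalClashError(
                "mailbox size changed; another program modified it")
        with open(self._path, "r+b") as f:
            f.write(new_contents)
            f.truncate()
        self._file_length = len(new_contents)
```

Note that this only detects clashes that change the file's length; an in-place edit of equal size would slip through, which is why the check is a safety net rather than a substitute for locking.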
It would be good to have the code actually keep up with other programs' changes, though; a program might just want to count the messages at first, say, and not make changes until much later. I've been trying out the second option (patch attached, to apply on top of mailbox-copy-back), regenerating _toc on locking, but preserving existing keys. The patch allows existing _generate_toc()s to work unmodified, but means that _toc now holds the entire last known contents of the mailbox file, with the 'pending' (user-visible) mailbox state being held in a new attribute, _user_toc, which is a mapping from keys issued to the program to the keys of _toc (i.e. sequence numbers in the file). When _toc is updated, any new messages that have appeared are given keys in _user_toc that haven't been issued before, and any messages that have disappeared are removed from it. The code basically assumes that messages with the same sequence number are the same message, though, so even if most cases are caught by the length check, programs that make deletions/replacements before locking could still delete the wrong messages. This behaviour could be trapped, though, by raising an exception in lock() if self._pending is set (after all, code like that would be incorrect unless it could be assumed that the mailbox module kept hashes of each message or something). Also attached is a patch to the test case, adding a lock/unlock around the message count to make sure _toc is up-to-date if the parent process finishes first; without it, there are still intermittent failures. File Added: mailbox-update-toc.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-20 09:46 Message: Logged In: YES user_id=11375 Originator: NO Attaching a patch that adds length checking: before doing a flush() on a single-file mailbox, seek to the end and verify its length is unchanged. 
It raises an ExternalClashError if the file's length has changed. (Should there be a different exception for this case, perhaps a subclass of ExternalClashError?) I verified that this change works by running a program that added 25 messages, pausing between each one, and then did 'echo "new line" > /tmp/mbox' from a shell while the program was running. I also noticed that the self._lookup() call in self.flush() wasn't necessary, and replaced it by an assertion. I think this change should go on both the trunk and 25-maint branches. File Added: length-checking.diff ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-18 12:43 Message: Logged In: YES user_id=11375 Originator: NO Eep, correct; changing the key IDs would be a big problem for existing code. We could say 'discard all keys' after doing lock() or unlock(), but this is an API change that means the fix couldn't be backported to 2.5-maint. We could make generating the ToC more complicated, preserving key IDs when possible; that may not be too difficult, though the code might be messy. Maybe it's best to just catch this error condition: save the size of the mailbox, updating it in _append_message(), and then make .flush() raise an exception if the mailbox size has unexpectedly changed. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-12-16 14:09 Message: Logged In: YES user_id=1504904 Originator: YES Yes, I see what you mean. I had tried multiple flushes, but only inside a single lock/unlock. But this means that in the no-truncate() code path, even this is currently unsafe, as the lock is momentarily released after flushing. I think _toc should be regenerated after every lock(), as with the risk of another process replacing/deleting/rearranging the messages, it isn't valid to carry sequence numbers from one locked period to another anyway, or from unlocked to locked. 
However, this runs the risk of dangerously breaking code that thinks it is valid to do so, even in the case where the mailbox was *not* modified (i.e. turning possible failure into certain failure). For instance, if the program removes message 1, then as things stand, the key "1" is no longer used, and removing message 2 will remove the message that followed 1. If _toc is regenerated in between, however (using the current code, so that the messages are renumbered from 0), then the old message 2 becomes message 1, and removing message 2 will therefore remove the wrong message. You'd also have things like pending deletions and replacements (also unsafe generally) being forgotten. So it would take some work to get right, if it's to work at all... ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 09:06 Message: Logged In: YES user_id=11375 Originator: NO I'm testing the fix using two Python processes running mailbox.py, and my test case fails even with your patch. This is due to another bug, even in the patched version. mbox has a dictionary attribute, _toc, mapping message keys to positions in the file. flush() writes out all the messages in self._toc and constructs a new _toc with the new file offsets. It doesn't re-read the file to see if new messages were added by another process. One fix that seems to work: instead of doing 'self._toc = new_toc' after flush() has done its work, do self._toc = None. The ToC will be regenerated the next time _lookup() is called, causing a re-read of all the contents of the mbox. Inefficient, but I see no way around the necessity for doing this. It's not clear to me that my suggested fix is enough, though. Process #1 opens a mailbox, reads the ToC, and the process does something else for 5 minutes. In the meantime, process #2 adds a file to the mbox. Process #1 then adds a message to the mbox and writes it out; it never notices process #2's change. 
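The wrong-message-deleted scenario described above can be reproduced with a toy table of contents. This is a plain-list model of the mailbox, not mailbox.py itself; regenerating the ToC between a removal and the next operation renumbers the survivors from zero, so a key carried over from before the regeneration now names a different message:

```python
mailbox_file = ["msg A", "msg B", "msg C"]

def generate_toc(messages):
    # like _generate_toc(): numbers the surviving messages from zero
    return {i: m for i, m in enumerate(messages)}

toc = generate_toc(mailbox_file)   # keys 0, 1, 2
del mailbox_file[0]                # program removes message 0 and flushes
toc = generate_toc(mailbox_file)   # ToC regenerated in between

# The program now removes "message 1", expecting to delete "msg B"...
victim = toc[1]
mailbox_file.remove(victim)

# ...but the regeneration renumbered everything, so "msg C" died instead.
assert victim == "msg C"
assert mailbox_file == ["msg B"]
```

No other process is involved here: the renumbering alone, triggered by the program's own earlier removal, is enough to delete the wrong message.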
Maybe the _toc has to be regenerated every time you call lock(), because at this point you know there will be no further updates to the mbox by any other process. Any unlocked usage of _toc should also really be regenerating _toc every time, because you never know if another process has added a message... but that would be really inefficient. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-15 08:17 Message: Logged In: YES user_id=11375 Originator: NO The attached patch adds a test case to test_mailbox.py that demonstrates the problem. No modifications to mailbox.py are needed to show data loss. Now looking at the patch... File Added: mailbox-test.patch ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2006-12-12 16:04 Message: Logged In: YES user_id=11375 Originator: NO I agree with David's analysis; this is in fact a bug. I'll try to look at the patch. ---------------------------------------------------------------------- Comment By: David Watson (baikie) Date: 2006-11-19 15:44 Message: Logged In: YES user_id=1504904 Originator: YES This is a bug. The point is that the code is subverting the protection of its own fcntl locking. I should have pointed out that Postfix was still using fcntl locking, and that should have been sufficient. (In fact, it was due to its use of fcntl locking that it chose precisely the wrong moment to deliver mail.) Dot-locking does protect against this, but not every program uses it - which is precisely the reason that the code implements fcntl locking in the first place. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-11-19 15:02 Message: Logged In: YES user_id=21627 Originator: NO Mailbox locking was invented precisely to support this kind of operation.
Why do you complain that things break if you deliberately turn off the mechanism preventing breakage? I fail to see a bug here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1599254&group_id=5470 From noreply at sourceforge.net Wed Jan 24 21:53:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 12:53:33 -0800 Subject: [ python-Bugs-1544339 ] _ctypes fails to build on Solaris x86 32-bit (Sun compiler) Message-ID: Bugs item #1544339, was opened at 2006-08-22 06:28 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1544339&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Case Van Horsen (casevh) Assigned to: Thomas Heller (theller) Summary: _ctypes fails to build on Solaris x86 32-bit (Sun compiler) Initial Comment: The _ctypes module fails to compile on Solaris 10 x86 32-bit using the Sun Studio 11 compiler. _ctypes does compile successfully using gcc. The error messages are attached. If needed, I can provide access to the machine. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-24 21:53 Message: Logged In: YES user_id=11105 Originator: NO You can at least see which test(s) crash when you run the ctypes tests in this way: ./python Lib/ctypes/test/runtests.py -v ---------------------------------------------------------------------- Comment By: Case Van Horsen (casevh) Date: 2006-10-13 04:57 Message: Logged In: YES user_id=1212585 I have tracked down two issues. First, Sun's cc compiler defines __386 instead of __386__.
This causes problems in ffitarget.h. Second, Sun's cc compiler fails on the following line in ffi.h: } ffi_closure __attribute__((aligned (8))); This is a problem in Sun's cc compiler. It is fixed in the Sun Studio Express August 2006 release. I don't think there is a patch for the "official" Sun Studio 11 compiler. With these two changes, ctypes does compile but "make test" still fails. I am still researching the "make test" failure. test_crypt test_csv test_ctypes sh: objdump: not found *** Signal 11 - core dumped make: Fatal error: Command failed for target `test' bash-3.00$ ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1544339&group_id=5470 From noreply at sourceforge.net Wed Jan 24 22:00:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 13:00:06 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 22:06 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I.
-I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. 
-I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] 
---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-24 22:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN. Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. 
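The third issue Thomas lists also has a Python-level counterpart: ctypes maps C bit-fields onto Structure fields, and short-typed bit-fields are what the generated _ctypes_test code exercises. A hypothetical minimal analogue in pure ctypes (field names and widths are illustrative, not taken from the test source):

```python
# Sketch only: a ctypes Structure using short-typed bit-fields, the C
# construct that strict C89 compilers such as AIX xlc_r reject (C89 only
# guarantees bit-fields of type int / signed int / unsigned int).
from ctypes import Structure, c_short

class BITS(Structure):
    # (name, type, width) -- a third tuple element requests a bit-field
    _fields_ = [("M", c_short, 4),
                ("N", c_short, 4)]

b = BITS()
b.M = 3   # 4-bit signed field holds -8..7
b.N = 5
print(b.M, b.N)
```

CPython's own compilers accept short bit-fields as an extension, which is why the construct appears in the test module at all; a compiler without that extension fails exactly as in the log above.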
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Wed Jan 24 22:09:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 13:09:20 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 16:14 Message generated for change (Comment added) made by ulissesf You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in, the Python handler for that "lost" signal was called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all Python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is nonzero, we loop through all signals calling the registered Python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? 
I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens, and then I've written the attached test program to reproduce the problem. When we run this program, the handler for SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' approximately every 3 seconds, and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- >Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 19:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. I don't know if the wording is right, so please feel free to comment on it. I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py works :-). File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 18:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 18:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the remaining minor pathology you described here as a code comment, at the point is_tripped is set to 0. 
If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 17:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly, so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way, most of the time we can return immediately because is_tripped is zero, and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? 
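The ordering Ulisses describes can be modeled in pure Python. This is a hypothetical simplification of the C logic in Modules/signalmodule.c (the names mirror is_tripped and Handlers[i].tripped); with is_tripped cleared before the scan, a signal that trips mid-scan re-arms the flag for the next check instead of being lost:

```python
# Toy model of the patched dispatch logic; not the real signal module.
NSIG = 8
is_tripped = 0                 # bumped by the (modeled) C-level handler
tripped = [False] * NSIG       # per-signal "arrived" flags
handlers = {}                  # signum -> Python handler

def trip(signum):
    """Model of the C-level signal handler: mark the signal, bump the flag."""
    global is_tripped
    tripped[signum] = True
    is_tripped += 1

def check_signals():
    """Model of the patched PyErr_CheckSignals()."""
    global is_tripped
    if not is_tripped:
        return                  # fast path: nothing arrived
    is_tripped = 0              # the fix: clear *before* the scan
    for signum in range(NSIG):
        if tripped[signum]:
            tripped[signum] = False
            if signum in handlers:
                handlers[signum](signum)

# A signal that trips while another handler runs is picked up next time:
seen = []
handlers[0] = lambda s: seen.append(s)
handlers[1] = lambda s: trip(0)   # "arrives" after the scan passed index 0
trip(1)
check_signals()                   # runs handler 1, which trips signal 0
assert is_tripped                 # flag re-armed, signal 0 not lost
check_signals()
assert seen == [0]
```

With the original ordering (clearing is_tripped only after the loop), the trip(0) above would be wiped out and handler 0 would run only once some later signal arrived, which matches the behavior of the attached test program.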
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Wed Jan 24 22:48:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 13:48:31 -0800 Subject: [ python-Feature Requests-1635335 ] Add registry functions to windows postinstall Message-ID: Feature Requests item #1635335, was opened at 2007-01-14 21:00 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: None >Status: Closed >Resolution: Wont Fix Priority: 5 Private: No Submitted By: anatoly techtonik (techtonik) Assigned to: Thomas Heller (theller) Summary: Add registry functions to windows postinstall Initial Comment: It would be useful to add regkey_created() or regkey_modified() to windows postinstall scripts along with directory_created() and file_created(). Useful for adding installed package to App Paths. ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-24 22:48 Message: Logged In: YES user_id=21627 Originator: NO Closing this as "won't fix", then. techtonik, if you think this is an important feature, please contribute a patch. ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 21:47 Message: Logged In: YES user_id=11105 Originator: NO General comments: There are some problems with bdist_wininst that I assume will get worse in the future, especially with the postinstall script, because of different versions of the MS C runtime library. 
The installers that bdist_wininst creates are linked against a certain version, which must be the same version that the Python runtime uses. If they do not match, the output of the postinstall script will not be displayed in the GUI or, in the worst case, it could crash. The second problem is that bdist_wininst will not work with 64-bit Pythons. All this *could* probably be fixed, of course, but since bdist_msi does *not* have these problems IMO bdist_msi will supersede bdist_wininst sooner or later. About the concrete problem: Originally, when bdist_wininst was first implemented, Python did not have the _winreg module, so it was not possible to create or remove registry entries in the install script or postinstall script anyway, and these functions would not have made any sense at all. They could probably make sense now, but it is equally possible to modify the registry in the postinstall-script at installation time, and revert these changes in the postinstall-script at uninstallation time. I would prefer not to make these changes, since a workaround is possible. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 19:07 Message: Logged In: YES user_id=21627 Originator: NO Thomas, what do you think? ---------------------------------------------------------------------- Comment By: anatoly techtonik (techtonik) Date: 2007-01-20 15:26 Message: Logged In: YES user_id=669020 Originator: YES The Windows postinstall script is bundled with the installation, launched after installation and just before uninstall. It is described here. http://docs.python.org/dist/postinstallation-script.html#SECTION005310000000000000000 Where should these be defined? I do not know - there are already some functions that are said to be "available as additional built-in functions in the installation script." on the page above. The purpose is to be able to create/delete registry keys during installation. 
This should also be reflected in the installation log file with an appropriate status code so that users could be aware of what's going on. I think the functions needed are already defined in http://docs.python.org/lib/module--winreg.html but the module is very low-level. I'd rather use an AutoIt-like API - http://www.autoitscript.com/autoit3/docs/functions/RegRead.htm http://www.autoitscript.com/autoit3/docs/functions/RegWrite.htm http://www.autoitscript.com/autoit3/docs/functions/RegDelete.htm ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-20 11:55 Message: Logged In: YES user_id=21627 Originator: NO Can you please elaborate? Where should these functions be defined, what should they do, and when should they be invoked (by what code)? Also, what is a "windows postinstall script"? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1635335&group_id=5470 From noreply at sourceforge.net Wed Jan 24 22:54:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 13:54:09 -0800 Subject: [ python-Bugs-1642054 ] Python 2.5 gets curses.h warning on HPUX Message-ID: Bugs item #1642054, was opened at 2007-01-22 19:27 Message generated for change (Comment added) made by roysmith You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Roy Smith (roysmith) Assigned to: Nobody/Anonymous (nobody) Summary: Python 2.5 gets curses.h warning on HPUX Initial Comment: I downloaded http://www.python.org/ftp/python/2.5/Python-2.5.tgz and tried to build it on "HP-UX glade B.11.11 U 9000/800 unknown". When I ran "./configure", I got warnings that "curses.h: present but cannot be compiled". See attached log file. ---------------------------------------------------------------------- >Comment By: Roy Smith (roysmith) Date: 2007-01-24 16:54 Message: Logged In: YES user_id=390499 Originator: YES OK, looking a bit deeper, the actual error in config.log is: configure:4739: result: no configure:4774: checking for conio.h configure:4781: result: no configure:4658: checking curses.h usability configure:4670: gcc -c -g -O2 conftest.c >&5 In file included from conftest.c:54: /opt/gnu/lib/gcc-lib/hppa2.0w-hp-hpux11.11/3.2.3/include/curses.h:755: syntax error before "va_list" /opt/gnu/lib/gcc-lib/hppa2.0w-hp-hpux11.11/3.2.3/include/curses.h:756: syntax error before "va_list" /opt/gnu/lib/gcc-lib/hppa2.0w-hp-hpux11.11/3.2.3/include/curses.h:757: syntax error before "va_list" /opt/gnu/lib/gcc-lib/hppa2.0w-hp-hpux11.11/3.2.3/include/curses.h:758: syntax error before "va_list" Adding "#include <varargs.h>" appears to solve the problem. I'm pretty weak (major understatement) on building configure scripts, but if you create a new one with varargs.h, I'll be happy to test it on my box for you. ---------------------------------------------------------------------- Comment By: A.M. Kuchling (akuchling) Date: 2007-01-24 15:20 Message: Logged In: YES user_id=11375 Originator: NO You'll have to help us some more. This is apparently happening because HP-UX's curses.h file needs some other header file to be included first; not having an HP-UX machine, I have no way to figure out which other header file is needed. 
Could you please try to figure out which file is necessary? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1642054&group_id=5470 From noreply at sourceforge.net Thu Jan 25 00:00:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 15:00:31 -0800 Subject: [ python-Bugs-1643943 ] strptime %U broken Message-ID: Bugs item #1643943, was opened at 2007-01-24 23:00 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Nahas (bnahas) Assigned to: Nobody/Anonymous (nobody) Summary: strptime %U broken Initial Comment: Python 2.4.1 (#1, May 16 2005, 15:19:29) [GCC 4.0.0 20050512 (Red Hat 4.0.0-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from time import strptime >>> strptime('2006-53-0', '%Y-%U-%w') (2006, 12, 31, 0, 0, 0, 6, 365, -1) >>> strptime('2007-00-0', '%Y-%U-%w') (2006, 12, 24, 0, 0, 0, 6, -7, -1) >>> strptime('2007-01-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-02-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-03-0', '%Y-%U-%w') (2007, 1, 14, 0, 0, 0, 6, 14, -1) >>> Note that in the above test, Sunday of week 1 and week 2 for 2007 reported the date as 2007-01-07 and Sunday of week 0 was reported as 2006-12-24, not 2006-12-31. I'm not sure exactly what is correct, but the inconsistencies are bothersome. 
Same results on: Python 2.4.4c1 (#70, Oct 11 2006, 10:59:14) [MSC v.1310 32 bit (Intel)] on win32 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 From noreply at sourceforge.net Thu Jan 25 01:08:34 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 16:08:34 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 18:06 Message generated for change (Comment added) made by oirraza You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] ---------------------------------------------------------------------- >Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-24 21:08 Message: Logged In: YES user_id=876766 Originator: YES Thomas, I downloaded the subversion maintenance branch from http://svn.python.org/projects/python/branches/release25-maint/ but when I run ./configure it fails with the below error message [...] 
mv: cannot rename config.c to Modules/config.c: No such file or directory ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 18:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN. Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 17:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Thu Jan 25 07:20:52 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 24 Jan 2007 22:20:52 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 22:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 07:20 Message: Logged In: YES user_id=21627 Originator: NO oirraza: when you say "I downloaded", what precisely do you mean? The only sensible way of downloading it is through subversion checkout, i.e. "svn co http:...". 
---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 01:08 Message: Logged In: YES user_id=876766 Originator: YES Thomas, I downloaded the subversion maintenance branch from http://svn.python.org/projects/python/branches/release25-maint/ but when I run ./configure, it fails with the below error message [...] mv: cannot rename config.c to Modules/config.c: No such file or directory ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 22:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN. Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 21:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Thu Jan 25 10:51:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 01:51:57 -0800 Subject: [ python-Bugs-1644239 ] Error arrow offset wrong Message-ID: Bugs item #1644239, was opened at 2007-01-25 09:51 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644239&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: IDLE Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Cees Timmerman (ctimmerman) Assigned to: Nobody/Anonymous (nobody) Summary: Error arrow offset wrong Initial Comment: >>> def check_path(f): ... asert not '"' in f File "", line 2 asert not '"' in f ^ SyntaxError: invalid syntax It looks like the tab I used to indent was converted to 4 spaces and then each space back to tabs which each got converted to 4 spaces. 
Python 2.4.4c1 (#2, Oct 11 2006, 21:51:02) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644239&group_id=5470 From noreply at sourceforge.net Thu Jan 25 15:13:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 06:13:26 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 18:06 Message generated for change (Comment added) made by oirraza You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] ---------------------------------------------------------------------- >Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 11:13 Message: Logged In: YES user_id=876766 Originator: YES I downloaded it with Eclipse (Subclipse add-in) and then FTPed it to my AIX machine. Is this ok? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 03:20 Message: Logged In: YES user_id=21627 Originator: NO oirraza: when you say "I downloaded", what precisely do you mean? 
The only sensible way of downloading it is through subversion checkout, i.e. "svn co http:...". ---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-24 21:08 Message: Logged In: YES user_id=876766 Originator: YES Thomas, I downloaded the subversion maintenance branch from http://svn.python.org/projects/python/branches/release25-maint/ but when I run ./configure, it fails with the below error message [...] mv: cannot rename config.c to Modules/config.c: No such file or directory ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 18:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN. Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 17:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644239&group_id=5470 From noreply at sourceforge.net Thu Jan 25 17:30:48 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 08:30:48 -0800 Subject: [ python-Bugs-1644239 ] Error arrow offset wrong Message-ID: Bugs item #1644239, was opened at 2007-01-25 09:51 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644239&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: IDLE Group: Python 2.4 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Cees Timmerman (ctimmerman) Assigned to: Nobody/Anonymous (nobody) Summary: Error arrow offset wrong Initial Comment: >>> def check_path(f): ... asert not '"' in f File "", line 2 asert not '"' in f ^ SyntaxError: invalid syntax It looks like the tab I used to indent was converted to 4 spaces and then each space back to tabs which each got converted to 4 spaces. Python 2.4.4c1 (#2, Oct 11 2006, 21:51:02) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu5)] on linux2 ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-25 16:30 Message: Logged In: YES user_id=849994 Originator: NO This has nothing to do with tabs; the arrow is at the same position when indenting with spaces. An "asert" alone on a line is not invalid syntax. A line starting with "asert not " is not necessarily invalid either, since e.g. "in x" could follow. But as soon as you add "'", it's invalid, so the parser shows the arrow there. 
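gbrandl's explanation can be checked directly with compile(), which raises a SyntaxError carrying the position the parser reports (a sketch added for illustration, not part of the original report; where exactly the caret lands depends on the interpreter's parser version, which is the crux of the confusion here):

```python
# Reproduce the report's snippet and inspect where the parser
# says the text first became unparseable.
src = 'def check_path(f):\n    asert not \'"\' in f\n'

try:
    compile(src, "<test>", "exec")
except SyntaxError as err:
    # err.lineno / err.offset are the coordinates under the "^" caret.
    # Different Python versions place the caret differently; 2.4's
    # parser pointed at the quote, as described in the comment above.
    print(err.lineno, err.offset)
```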
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644239&group_id=5470 From noreply at sourceforge.net Thu Jan 25 17:33:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 08:33:13 -0800 Subject: [ python-Feature Requests-602345 ] option for not writing .py[co] files Message-ID: Feature Requests item #602345, was opened at 2002-08-30 11:13 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=602345&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. >Category: Python Interpreter Core >Group: None Status: Open Resolution: None Priority: 3 Private: No Submitted By: Matthias Klose (doko) Assigned to: Skip Montanaro (montanaro) Summary: option for not writing .py[co] files Initial Comment: [distilled from http://bugs.debian.org/96111] Currently python tries to write the .py[co] files even in situations where it will fail, like on read-only mounted file systems. In other situations I don't want python trying to write the compiled files, i.e. having installed the modules as root as part of a distribution, compiled them correctly, there is no need to write them. Or compiling .py files which are configuration files. Is it reasonable to add an option to python (--dont-write-compiled-files) to the interpreter, which doesn't write them? This would not affect existing code at all. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-25 16:33 Message: Logged In: YES user_id=849994 Originator: NO Turning this into a feature request. 
---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2003-05-22 22:20 Message: Logged In: YES user_id=44345 I have a c.l.py message buried in my python mailbox which raises some Windows-related problems. I have yet to figure that out, but they looked somewhat difficult on first glance. I'll try to dredge that up and attach it to this id. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-05-22 22:03 Message: Logged In: YES user_id=33168 I think Skip now owns this because of his PEP. :-) ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-05-21 05:50 Message: Logged In: YES user_id=357491 PEP 304 now handles this situation. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-01-12 21:39 Message: Logged In: YES user_id=33168 You are correct about the patch being incomplete. I still have to do all the doc. I hadn't thought about an env't variable or variable in sys. Both are certainly reasonable. I will update the patch. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2003-01-12 19:47 Message: Logged In: YES user_id=21627 The patch looks good, but is surely incomplete: there should be patches to the documentation, in particular to the man page. It might be also desirable to parallel this option with an environment variable, and/or to expose it writable through the sys module. With the environment variable, people could run Python scripts that won't create .pyc files (as #! /usr/bin/env python does not allow for further command line options). With the API, certain applications could declare that they never want to write .pyc files as they expect to run in parallel with themselves, and might cause .pyc conflicts. 
---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-09-03 15:39 Message: Logged In: YES user_id=6380 I think it's a good idea, but please use a single upper case letter for the option. Python doesn't support long options and I'm not about to start doing so. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2002-09-01 23:30 Message: Logged In: YES user_id=33168 Guido, do you think this is a good idea? If so, assign back to me and I'll work up a patch. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=602345&group_id=5470 From noreply at sourceforge.net Thu Jan 25 17:45:33 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 08:45:33 -0800 Subject: [ python-Feature Requests-602345 ] option for not writing .py[co] files Message-ID: Feature Requests item #602345, was opened at 2002-08-30 06:13 Message generated for change (Comment added) made by montanaro You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=602345&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: None Status: Open Resolution: None Priority: 3 Private: No Submitted By: Matthias Klose (doko) Assigned to: Skip Montanaro (montanaro) Summary: option for not writing .py[co] files Initial Comment: [distilled from http://bugs.debian.org/96111] Currently python tries to write the .py[co] files even in situations where it will fail, like on read-only mounted file systems. In other situations I don't want python trying to write the compiled files, i.e. 
having installed the modules as root as part of a distribution, compiled them correctly, there is no need to write them. Or compiling .py files which are configuration files. Is it reasonable to add an option to python (--dont-write-compiled-files) to the interpreter, which doesn't write them? This would not affect existing code at all. ---------------------------------------------------------------------- >Comment By: Skip Montanaro (montanaro) Date: 2007-01-25 10:45 Message: Logged In: YES user_id=44345 Originator: NO Took me awhile (nearly four years!) to find it, but I finally found the c.l.py message I referred to regarding Windows problems. It's attached. Skip File Added: kew-msg ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-25 10:33 Message: Logged In: YES user_id=849994 Originator: NO Turning in a feature request. ---------------------------------------------------------------------- Comment By: Skip Montanaro (montanaro) Date: 2003-05-22 17:20 Message: Logged In: YES user_id=44345 I have a c.l.py message buried in my python mailbox which raises some Windows-related problems. I have yet to figure that out, but they looked somewhat difficult on first glance. I'll try to dredge that up and attach it to this id. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-05-22 17:03 Message: Logged In: YES user_id=33168 I think Skip now owns this because of his PEP. :-) ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2003-05-21 00:50 Message: Logged In: YES user_id=357491 PEP 304 now handles this situation. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2003-01-12 15:39 Message: Logged In: YES user_id=33168 You are correct about the patch being incomplete. I still have to do all the doc. 
I hadn't thought about an env't variable or variable in sys. Both are certainly reasonable. I will update the patch. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2003-01-12 13:47 Message: Logged In: YES user_id=21627 The patch looks good, but is surely incomplete: there should be patches to the documentation, in particular to the man page. It might be also desirable to parallel this option with an environment variable, and/or to expose it writable through the sys module. With the environment variable, people could run Python scripts that won't create .pyc files (as #! /usr/bin/env python does not allow for further command line options). With the API, certain applications could declare that they never want to write .pyc files as they expect to run in parallel with themselves, and might cause .pyc conflicts. ---------------------------------------------------------------------- Comment By: Guido van Rossum (gvanrossum) Date: 2002-09-03 10:39 Message: Logged In: YES user_id=6380 I think it's a good idea, but please use a single upper case letter for the option. Python doesn't support long options and I'm not about to start doing so. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2002-09-01 18:30 Message: Logged In: YES user_id=33168 Guido, do you think this is a good idea? If so, assign back to me and I'll work up a patch. 
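Historical note (not part of the archived thread): the environment-variable and sys-module hooks Martin describes are what later shipped in Python 2.6 as PYTHONDONTWRITEBYTECODE, the -B command line option, and sys.dont_write_bytecode. A minimal sketch of the runtime API, assuming a 2.6-or-later interpreter:

```python
import sys

# Setting this flag before further imports suppresses writing of
# compiled bytecode files (.pyc/.pyo) for modules imported afterwards.
sys.dont_write_bytecode = True

import json  # imported without leaving a compiled file behind

print(sys.dont_write_bytecode)
```

The same effect is available externally via `PYTHONDONTWRITEBYTECODE=1 python script.py` or `python -B script.py`, which matters for `#!`-driven scripts that cannot pass extra interpreter options.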
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=355470&aid=602345&group_id=5470 From noreply at sourceforge.net Thu Jan 25 19:50:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 10:50:44 -0800 Subject: [ python-Bugs-1643943 ] strptime %U broken Message-ID: Bugs item #1643943, was opened at 2007-01-24 15:00 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Nahas (bnahas) >Assigned to: Brett Cannon (bcannon) Summary: strptime %U broken Initial Comment: Python 2.4.1 (#1, May 16 2005, 15:19:29) [GCC 4.0.0 20050512 (Red Hat 4.0.0-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> from time import strptime >>> strptime('2006-53-0', '%Y-%U-%w') (2006, 12, 31, 0, 0, 0, 6, 365, -1) >>> strptime('2007-00-0', '%Y-%U-%w') (2006, 12, 24, 0, 0, 0, 6, -7, -1) >>> strptime('2007-01-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-02-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-03-0', '%Y-%U-%w') (2007, 1, 14, 0, 0, 0, 6, 14, -1) >>> Note that in the above test, Sunday of week 1 and week 2 for 2007 reported the date as 2007-01-07 and Sunday of week 0 was reported as 2006-12-24, not 2006-12-31. I'm not sure exactly what is correct, but the inconsistencies are bothersome. 
Same results on: Python 2.4.4c1 (#70, Oct 11 2006, 10:59:14) [MSC v.1310 32 bit (Intel)] on win32 ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-25 10:50 Message: Logged In: YES user_id=357491 Originator: NO I will try to fix this when I can. Just to warn you, Brian, I really doubt I will put the effort into backporting this to 2.4. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 From noreply at sourceforge.net Thu Jan 25 20:39:57 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 11:39:57 -0800 Subject: [ python-Bugs-1643943 ] strptime %U broken Message-ID: Bugs item #1643943, was opened at 2007-01-24 23:00 Message generated for change (Comment added) made by bnahas You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Brian Nahas (bnahas) Assigned to: Brett Cannon (bcannon) Summary: strptime %U broken Initial Comment: Python 2.4.1 (#1, May 16 2005, 15:19:29) [GCC 4.0.0 20050512 (Red Hat 4.0.0-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information. 
>>> from time import strptime
>>> strptime('2006-53-0', '%Y-%U-%w')
(2006, 12, 31, 0, 0, 0, 6, 365, -1)
>>> strptime('2007-00-0', '%Y-%U-%w')
(2006, 12, 24, 0, 0, 0, 6, -7, -1)
>>> strptime('2007-01-0', '%Y-%U-%w')
(2007, 1, 7, 0, 0, 0, 6, 7, -1)
>>> strptime('2007-02-0', '%Y-%U-%w')
(2007, 1, 7, 0, 0, 0, 6, 7, -1)
>>> strptime('2007-03-0', '%Y-%U-%w')
(2007, 1, 14, 0, 0, 0, 6, 14, -1)
>>>
Note that in the above test, Sunday of week 1 and week 2 for 2007 reported the date as 2007-01-07 and Sunday of week 0 was reported as 2006-12-24, not 2006-12-31. I'm not sure exactly what is correct, but the inconsistencies are bothersome. Same results on: Python 2.4.4c1 (#70, Oct 11 2006, 10:59:14) [MSC v.1310 32 bit (Intel)] on win32 ---------------------------------------------------------------------- >Comment By: Brian Nahas (bnahas) Date: 2007-01-25 19:39 Message: Logged In: YES user_id=562121 Originator: YES No worries. Here's what I'm doing as a work-around. I needed to convert the results of a mysql YEARWEEK field to the Sunday at the start of that week:

import datetime

def mysqlWeekToSundayDate(yearweek):
    year = int(yearweek[0:4])
    week = int(yearweek[4:6])
    day = datetime.date(year, 1, 1)
    dayDelta = datetime.timedelta(1)
    weekDelta = datetime.timedelta(7)
    while day.strftime("%w") != "0":
        day = day + dayDelta
    day = day + ((week - 1) * weekDelta)
    return day

I'm relatively new to Python so it is probably not the most efficient method but it does the job. ---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-25 18:50 Message: Logged In: YES user_id=357491 Originator: NO I will try to fix this when I can. Just to warn you, Brian, I really doubt I will put the effort into backporting this to 2.4. 
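Brian's day-by-day loop can also be written with plain weekday() arithmetic; a sketch added for illustration (the helper name is hypothetical, and it assumes the same six-character YEARWEEK string and the same week-numbering behavior as the loop above):

```python
import datetime

def week_to_sunday(yearweek):
    # Hypothetical equivalent of the mysqlWeekToSundayDate work-around:
    # maps a MySQL YEARWEEK() string like "200701" to the Sunday
    # starting that week, without stepping one day at a time.
    year, week = int(yearweek[:4]), int(yearweek[4:6])
    jan1 = datetime.date(year, 1, 1)
    # date.weekday(): Monday == 0 ... Sunday == 6, so this is the
    # number of days from Jan 1 to the first Sunday of the year.
    days_to_sunday = (6 - jan1.weekday()) % 7
    first_sunday = jan1 + datetime.timedelta(days=days_to_sunday)
    return first_sunday + datetime.timedelta(weeks=week - 1)

print(week_to_sunday("200701"))  # first Sunday of 2007
```

For example, 2007 began on a Monday, so week 01 maps to 2007-01-07, matching the loop version.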
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 From noreply at sourceforge.net Thu Jan 25 22:07:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 13:07:13 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 22:06 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? 
"build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. -I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. 
"/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 22:07 Message: Logged In: YES user_id=21627 Originator: NO That should have worked. Can you please debug the build process yourself a bit? It will be very tedious to communicate individual commands, then wait a day, communicate the next command. Start looking at the command immediately before the mv. It should have been a "makesetup" invocation, which should have produced config.c which it then tried to move.
---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 15:13 Message: Logged In: YES user_id=876766 Originator: YES I downloaded with eclipse (subclipse addin) and then ftp to my AIX machine. Is this ok? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 07:20 Message: Logged In: YES user_id=21627 Originator: NO oirraza: when you say "I downloaded", what precisely do you mean? The only sensible way of downloading it is through subversion checkout, i.e. "svn co http:...". ---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 01:08 Message: Logged In: YES user_id=876766 Originator: YES Thomas, I downloaded the subversion maintenance branch from http://svn.python.org/projects/python/branches/release25-maint/ but when I run ./configure it fails with the below error message [...] mv: cannot rename config.c to Modules/config.c: No such file or directory ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 22:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN. Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis)
Date: 2007-01-18 21:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Thu Jan 25 22:33:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 13:33:13 -0800 Subject: [ python-Bugs-1643943 ] strptime %U broken Message-ID: Bugs item #1643943, was opened at 2007-01-24 15:00 Message generated for change (Comment added) made by bcannon You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Brian Nahas (bnahas) Assigned to: Brett Cannon (bcannon) Summary: strptime %U broken Initial Comment: Python 2.4.1 (#1, May 16 2005, 15:19:29) [GCC 4.0.0 20050512 (Red Hat 4.0.0-5)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> from time import strptime >>> strptime('2006-53-0', '%Y-%U-%w') (2006, 12, 31, 0, 0, 0, 6, 365, -1) >>> strptime('2007-00-0', '%Y-%U-%w') (2006, 12, 24, 0, 0, 0, 6, -7, -1) >>> strptime('2007-01-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-02-0', '%Y-%U-%w') (2007, 1, 7, 0, 0, 0, 6, 7, -1) >>> strptime('2007-03-0', '%Y-%U-%w') (2007, 1, 14, 0, 0, 0, 6, 14, -1) >>> Note that in the above test, Sunday of week 1 and week 2 for 2007 reported the date as 2007-01-07 and Sunday of week 0 was reported as 2006-12-24, not 2006-12-31. I'm not sure exactly what is correct, but the inconsistencies are bothersome. Same results on: Python 2.4.4c1 (#70, Oct 11 2006, 10:59:14) [MSC v.1310 32 bit (Intel)] on win32 ---------------------------------------------------------------------- >Comment By: Brett Cannon (bcannon) Date: 2007-01-25 13:33 Message: Logged In: YES user_id=357491 Originator: NO Rev. 53564 (trunk) has the fix and 2.5 will as soon as a commit problem I am having is fixed. I basically rewrote the algorithm to have a generic calculation for the Julian day and just shifted the length of week 0 and the day of the week based on whether %U or %W was specified. Cut out all the other edge cases which were messy and confusing. ---------------------------------------------------------------------- Comment By: Brian Nahas (bnahas) Date: 2007-01-25 11:39 Message: Logged In: YES user_id=562121 Originator: YES No worries. Here's what I'm doing as a work-around. I needed to convert the results of a mysql YEARWEEK field to the sunday at the start of that week: import datetime def mysqlWeekToSundayDate(yearweek): year = int(yearweek[0:4]) week = int(yearweek[4:6]) day = datetime.date(year, 1, 1) dayDelta = datetime.timedelta(1) weekDelta = datetime.timedelta(7) while day.strftime("%w") != "0": day = day + dayDelta day = day + ((week - 1) * weekDelta) return day I'm relatively new to Python so it is probably not the most efficient method but it does the job. 
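Brian's work-around above has had its line breaks flattened by the archive; reflowed into runnable form (the logic is unchanged from his comment; only the final `print` is added to show a result), it reads:

```python
import datetime

def mysqlWeekToSundayDate(yearweek):
    # Map a MySQL YEARWEEK()-style string such as '200701' to the
    # Sunday that starts that week (week 1 begins on the year's
    # first Sunday, matching the %U convention discussed above).
    year = int(yearweek[0:4])
    week = int(yearweek[4:6])
    day = datetime.date(year, 1, 1)
    dayDelta = datetime.timedelta(1)
    weekDelta = datetime.timedelta(7)
    while day.strftime("%w") != "0":      # walk forward to the first Sunday
        day = day + dayDelta
    day = day + ((week - 1) * weekDelta)  # then jump ahead whole weeks
    return day

print(mysqlWeekToSundayDate('200701'))  # 2007-01-07
```

The result for '200701' agrees with the strptime('2007-01-0', '%Y-%U-%w') output quoted in the report.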
---------------------------------------------------------------------- Comment By: Brett Cannon (bcannon) Date: 2007-01-25 10:50 Message: Logged In: YES user_id=357491 Originator: NO I will try to fix this when I can. Just to warn you, Brian, I really doubt I will put the effort into backporting this to 2.4. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643943&group_id=5470 From noreply at sourceforge.net Thu Jan 25 23:29:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 14:29:02 -0800 Subject: [ python-Bugs-1637120 ] Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Message-ID: Bugs item #1637120, was opened at 2007-01-16 18:06 Message generated for change (Comment added) made by oirraza You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Orlando Irrazabal (oirraza) Assigned to: Thomas Heller (theller) Summary: Python 2.5 fails to build on AIX 5.3 (xlc_r compiler) Initial Comment: Initial Comment: Build of Python 2.5 on AIX 5.3 with xlc_r fails with the below error message. The configure line is: ./configure --with-gcc="xlc_r -q64" --with-cxx="xlC_r -q64" --disable-ipv6 AR="ar -X64" [...] building '_ctypes' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/sw_install/python-2.5/Modules/_ctypes/libffi/src -I./Include -I. 
-I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes.o "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.15: 1506-276 (S) Syntax error: possible missing '{'? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 248.3: 1506-273 (E) Missing type in declaration of ffi_raw_closure. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.38: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 252.23: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-033 (S) Function ffi_prep_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 251.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.23: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 254.38: 1506-275 (S) Unexpected text ')' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.43: 1506-275 (S) Unexpected text '*' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 258.28: 1506-276 (S) Syntax error: possible missing identifier? "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-033 (S) Function ffi_prep_java_raw_closure is not valid. Function cannot return a function. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 257.1: 1506-282 (S) The type of the parameters must be specified in a prototype. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.28: 1506-275 (S) Unexpected text 'void' encountered. "build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 260.43: 1506-275 (S) Unexpected text ')' encountered. 
"/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 71.9: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/ctypes.h", line 77.26: 1506-195 (S) Integral constant expression with a value greater than zero is required. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 2804.31: 1506-068 (E) Operation between types "void*" and "int(*)(void)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3357.28: 1506-280 (E) Function argument assignment between types "int(*)(void)" and "void*" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 3417.42: 1506-022 (S) "pcl" is not a member of "struct {...}". "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4749.67: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,const void*,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4750.66: 1506-280 (E) Function argument assignment between types "void*" and "void*(*)(void*,int,unsigned long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4751.69: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const char*,long)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4752.64: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(void*,struct _object*,struct _object*)" is not allowed. "/sw_install/python-2.5/Modules/_ctypes/_ctypes.c", line 4754.70: 1506-280 (E) Function argument assignment between types "void*" and "struct _object*(*)(const unsigned int*,int)" is not allowed. building '_ctypes_test' extension xlc_r -q64 -DNDEBUG -O -I. -I/sw_install/python-2.5/./Include -I./Include -I. 
-I/sw_install/python-2.5/Include -I/sw_install/python-2.5 -c /sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c -o build/temp.aix-5.3-2.5/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.o "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 61.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 68.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 75.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field M must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field N must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field O must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field P must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field Q must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field R must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 320.9: 1506-009 (S) Bit-field S must be of type signed int, unsigned int or int. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 371.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_s. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.1: 1506-046 (S) Syntax error. "/sw_install/python-2.5/Modules/_ctypes/_ctypes_test.c", line 372.31: 1506-045 (S) Undeclared identifier get_last_tf_arg_u. [...] 
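The repeated "Bit-field M must be of type signed int, unsigned int or int" diagnostics above point at the `short` bit-fields in `_ctypes_test.c`: ISO C only requires support for bit-fields of the int family, so gcc's acceptance of `short` is an extension that xlc does not share. The portable form is visible from Python itself via ctypes bit-fields; a small sketch (hypothetical field names, not the ones in `_ctypes_test.c`):

```python
from ctypes import Structure, c_uint

# Bit-fields declared with int-family types -- the form every
# conforming C compiler (including AIX xlc) must accept.
class Flags(Structure):
    _fields_ = [("m", c_uint, 7),   # 7-bit field
                ("n", c_uint, 1)]   # 1-bit flag

f = Flags()
f.m = 100
f.n = 1
print(f.m, f.n)  # 100 1
```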
---------------------------------------------------------------------- >Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 19:29 Message: Logged In: YES user_id=876766 Originator: YES This is the debug output from makesetup command (add "set -x" to begining): + usage= usage: makesetup [-s srcdir] [-l libdir] [-c config.c.in] [-m Makefile.pre] [Setup] ... [-n [Setup] ...] + srcdir=. + libdir= + config= + makepre= + noobjects= + doconfig=yes + : + shift + config=./Modules/config.c.in + shift + : + shift + srcdir=Modules + shift + : + break + + sed s,/[^/]*$,, + echo ./Modules/makesetup libdir=./Modules + makepre=Makefile.pre + NL=\ + uname -s + echo *doconfig* + cat Modules/Setup.config + sed -e s/[ ]*#.*// -e /^[ ]*$/d + rulesf=@rules.843936 + trap rm -f $rulesf 0 1 2 3 + echo # Rules appended by makedepend + 1> @rules.843936 + DEFS= + MODS= + SHAREDMODS= + OBJS= + LIBS= + LOCALLIBS= + BASELIBS= + read line + echo *doconfig* + cat Modules/Setup.local + echo *doconfig* + cat Modules/Setup + grep \\$ + echo *doconfig* + 1> /dev/null + doconfig=yes + continue + read line + grep \\$ + echo thread threadmodule.c + 1> /dev/null + srcs= + cpps= + libs= + mods= + skip= + mods= thread + srcs= threadmodule.c + LIBS= + MODS= thread + objs= + + basename threadmodule.c .c obj=threadmodule.o + cc=$(CC) + obj=Modules/threadmodule.o + objs= Modules/threadmodule.o + src=$(srcdir)/Modules/threadmodule.c + cc=$(CC) $(PY_CFLAGS) + rule=Modules/threadmodule.o: $(srcdir)/Modules/threadmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/threadmodule.c -o Modules/threadmodule.o + echo Modules/threadmodule.o: $(srcdir)/Modules/threadmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/threadmodule.c -o Modules/threadmodule.o + 1>> @rules.843936 + OBJS= Modules/threadmodule.o + base=threadmodule + file=Modules/threadmodule$(SO) + rule=Modules/threadmodule$(SO): Modules/threadmodule.o + rule=Modules/threadmodule$(SO): Modules/threadmodule.o; $(LDSHARED) Modules/threadmodule.o -o 
Modules/threadmodule$(SO) + echo Modules/threadmodule$(SO): Modules/threadmodule.o; $(LDSHARED) Modules/threadmodule.o -o Modules/threadmodule$(SO) + 1>> @rules.843936 + read line + echo signal signalmodule.c + grep \\$ + 1> /dev/null + srcs= + cpps= + libs= + mods= + skip= + mods= signal + srcs= signalmodule.c + LIBS= + MODS= thread signal + objs= + + basename signalmodule.c .c obj=signalmodule.o + cc=$(CC) + obj=Modules/signalmodule.o + objs= Modules/signalmodule.o + src=$(srcdir)/Modules/signalmodule.c + cc=$(CC) $(PY_CFLAGS) + rule=Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/signalmodule.c -o Modules/signalmodule.o + echo Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/signalmodule.c -o Modules/signalmodule.o + 1>> @rules.843936 + OBJS= Modules/threadmodule.o Modules/signalmodule.o + base=signalmodule + file=Modules/signalmodule$(SO) + rule=Modules/signalmodule$(SO): Modules/signalmodule.o + rule=Modules/signalmodule$(SO): Modules/signalmodule.o; $(LDSHARED) Modules/signalmodule.o -o Modules/signalmodule$(SO) + echo Modules/signalmodule$(SO): Modules/signalmodule.o; $(LDSHARED) Modules/signalmodule.o -o Modules/signalmodule$(SO) + 1>> @rules.843936 + read line + grep \\$ + echo *doconfig* + 1> /dev/null + doconfig=yes + continue + read line + grep \\$ + echo *doconfig* + 1> /dev/null + doconfig=yes + continue + read line + grep \\$ + echo + 1> /dev/null + srcs= + cpps= + libs= + mods= + skip= in ho bad word + 1>& 2 in word + exit 1 + rm -f @rules.843936 ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 18:07 Message: Logged In: YES user_id=21627 Originator: NO That should have worked. Can you please debug the build process yourself a bit? It will be very tedious to communicate individual commands, then wait a day, communicate the next command.
Start looking at the command immediately before the mv. It should have been a "makesetup" invocation, which should have produced config.c which it then tried to move. ---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-25 11:13 Message: Logged In: YES user_id=876766 Originator: YES I downloaded with eclipse (subclipse addin) and then ftp to my AIX machine. Is this ok? ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-25 03:20 Message: Logged In: YES user_id=21627 Originator: NO oirraza: when you say "I downloaded", what precisely do you mean? The only sensible way of downloading it is through subversion checkout, i.e. "svn co http:...". ---------------------------------------------------------------------- Comment By: Orlando Irrazabal (oirraza) Date: 2007-01-24 21:08 Message: Logged In: YES user_id=876766 Originator: YES Thomas, I downloaded the subversion maintenance branch from http://svn.python.org/projects/python/branches/release25-maint/ but when I run ./configure it fails with the below error message [...] mv: cannot rename config.c to Modules/config.c: No such file or directory ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-24 18:00 Message: Logged In: YES user_id=11105 Originator: NO There seem to be three separate compilation errors: 1. (build/temp.aix-5.3-2.5/libffi/include/ffi.h", line 221.3: 1506-166 (S) Definition of function ffi_closure requires parentheses) >This looks like the compiler does not understand the __attribute__((...)) syntax. 2. In _ctypes_test.c, lines 61/68/75: The source uses C++ comments instead of C comments. 3. The compiler does not seem to support bit fields in structures with type 'short'. For issue 1: oirraza, can you try the compilation with '__attribute__((...))' removed? Issue 2: is fixed now in SVN.
Issue 3: Hm, I don't actually know how to approach this. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-18 17:02 Message: Logged In: YES user_id=21627 Originator: NO oirraza, can you please try the subversion maintenance branch for Python 2.5 instead and report whether the bug is still there? It is at http://svn.python.org/projects/python/branches/release25-maint/ Thomas, can you please take a look at this? If not, unassign. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1637120&group_id=5470 From noreply at sourceforge.net Fri Jan 26 06:34:09 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Thu, 25 Jan 2007 21:34:09 -0800 Subject: [ python-Bugs-1644987 ] ./configure --prefix=/ breaks, won't build C modules Message-ID: Bugs item #1644987, was opened at 2007-01-25 21:34 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644987&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jim Shankland (jas11c) Assigned to: Nobody/Anonymous (nobody) Summary: ./configure --prefix=/ breaks, won't build C modules Initial Comment: This appears to be a new issue with Python 2.5. Building Python 2.5 on Fedora Core 5: ./configure --prefix=/ --enable-shared make fails to build the C modules, as a "-L." is missing from the gcc command line used to generate the module.so file from the module.o file. Using any other value for --prefix works. Setting the environment variable LDFLAGS to "-L." before running ./configure appears to be a successful workaround.
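The reported work-around amounts to a two-line build recipe (a sketch of the configure invocation quoted in the report, not a tested fix):

```sh
# Pre-seeding LDFLAGS with -L. lets the module link steps find
# libpython2.5 in the build directory even when --prefix=/ is used.
LDFLAGS="-L." ./configure --prefix=/ --enable-shared
make
```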
Here is a representative failure (all the C modules fail). Note the "cannot find -lpython2.5" message; this is because -L. is missing from the gcc command line. building 'crypt' extension gcc -pthread -fPIC -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -I. -I/local/home/jas/Software/Languages/Python/Python-2.5/./Include -I./Include -I. -I/usr/local/include -I/local/home/jas/Software/Languages/Python/Python-2.5/Include -I/local/home/jas/Software/Languages/Python/Python-2.5 -c /local/home/jas/Software/Languages/Python/Python-2.5/Modules/cryptmodule.c -o build/temp.linux-i686-2.5/local/home/jas/Software/Languages/Python/Python-2.5/Modules/cryptmodule.o gcc -pthread -shared build/temp.linux-i686-2.5/local/home/jas/Software/Languages/Python/Python-2.5/Modules/cryptmodule.o -L//lib -L/usr/local/lib -L/lib/python2.5/config -lcrypt -lpython2.5 -o build/lib.linux-i686-2.5/crypt.so /usr/bin/ld: cannot find -lpython2.5 collect2: ld returned 1 exit status ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1644987&group_id=5470 From noreply at sourceforge.net Fri Jan 26 11:04:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 26 Jan 2007 02:04:06 -0800 Subject: [ python-Bugs-1645148 ] MIME renderer: wrong header line break with long subject? Message-ID: Bugs item #1645148, was opened at 2007-01-26 11:04 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645148&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: kxroberto (kxroberto) Assigned to: Nobody/Anonymous (nobody) Summary: MIME renderer: wrong header line break with long subject? Initial Comment: >>> from email.MIMEText import MIMEText >>> o=MIMEText('hello') >>> o['Subject']='1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 ' >>> o.as_string() 'Content-Type: text/plain; charset="us-ascii"\nMIME-Version: 1.0\nContent-Transf er-Encoding: 7bit\nSubject: 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8\n\t9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 \n\ nhello' >>> The '6 7 8\n\t9 1 2 3' clashes together to 6 7 89 1 2 3 without space between 89 in usual mail readers. Is this an error and should be : '6 7 8 \n\t9 1 2 3' ? as there is also the space preserved in '6 7 8 9 \n\nhello' ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645148&group_id=5470 From noreply at sourceforge.net Fri Jan 26 12:02:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 26 Jan 2007 03:02:35 -0800 Subject: [ python-Bugs-1574588 ] ctypes: Pointer-to-pointer unchanged in callback Message-ID: Bugs item #1574588, was opened at 2006-10-10 17:16 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574588&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Albert Strasheim (albertstrasheim) Assigned to: Thomas Heller (theller) Summary: ctypes: Pointer-to-pointer unchanged in callback Initial Comment: This problem is from another post I made to ctypes-users that didn't show up in the ctypes-users archive. C function: extern CALLBACK_API void foo(void(*callback)(void**)) { void* p = 123; printf("foo calling callback\n"); callback(&p); printf("callback returned in foo\n"); printf("p = 0x%p\n", p); } I figured that while I try to find out why returning c_void_p from a callback gives an error, I might as well return the address via a pointer to a pointer. In the Python code I have: import sys print sys.version from ctypes import * x_t = c_int*10 x = x_t() def callback(ptr): print x print ptr ptr.contents = cast(addressof(x), c_void_p) print ptr.contents #lib = cdll['libcallback.so'] lib = cdll['callback.dll'] lib.foo.argtypes = [CFUNCTYPE(None, POINTER(c_void_p))] lib.foo(lib.foo.argtypes[0](callback)) Output when I running this script under Python 2.4.3 with ctypes 1.0.0 (I get identical results with Python 2.5 and ctypes 1.0.1): 2.4.3 (#69, Mar 29 2006, 17:35:34) [MSC v.1310 32 bit (Intel)] foo calling callback <__main__.c_long_Array_10 object at 0x00963E90> c_void_p(10048496) callback returned in foo p = 0x0000007B For some reason, the value I assigned to ptr.contents isn't present when we return to the C code. ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-26 12:02 Message: Logged In: YES user_id=11105 Originator: NO Sorry for the late reply. This is not a bug. To dereference a pointer in ctypes you should index with 0: print ptr[0] ptr[0] = When you replace 'ptr.contents' with 'ptr[0]' in your code then it works as expected. 
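Thomas's distinction between indexing and `contents` can be reproduced without any C library at all; a minimal sketch using plain ctypes objects:

```python
from ctypes import c_int, pointer

x = c_int(10)
y = c_int(99)
p = pointer(x)

p[0] = 42          # like *p = 42 in C: writes through the pointer
print(x.value)     # 42 -- the pointee itself changed

p.contents = y     # re-aims p at a different object; x is untouched
p[0] = 7           # now writes through to y instead
print(x.value, y.value)  # 42 7
```

This is exactly why assigning `ptr.contents` inside the callback had no effect on the C side: it re-pointed the local pointer object rather than storing through the `void**` the caller passed in.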
In C, these two idioms are identical: ptr[0] *ptr The semantics of ptr.contents are different, although somewhat difficult to explain. Changing ptr.contents does not change the value that the pointer points to, instead it changes the location that the pointer points to. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1574588&group_id=5470 From noreply at sourceforge.net Fri Jan 26 16:39:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 26 Jan 2007 07:39:25 -0800 Subject: [ python-Bugs-1645148 ] MIME renderer: wrong header line break with long subject? Message-ID: Bugs item #1645148, was opened at 2007-01-26 10:04 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645148&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: kxroberto (kxroberto) >Assigned to: Barry A. Warsaw (bwarsaw) Summary: MIME renderer: wrong header line break with long subject? Initial Comment: >>> from email.MIMEText import MIMEText >>> o=MIMEText('hello') >>> o['Subject']='1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 ' >>> o.as_string() 'Content-Type: text/plain; charset="us-ascii"\nMIME-Version: 1.0\nContent-Transf er-Encoding: 7bit\nSubject: 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8\n\t9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 \n\ nhello' >>> The '6 7 8\n\t9 1 2 3' clashes together to 6 7 89 1 2 3 without space between 89 in usual mail readers. Is this an error and should be : '6 7 8 \n\t9 1 2 3' ?
as there is also the space preserved in '6 7 8 9 \n\nhello' ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645148&group_id=5470 From noreply at sourceforge.net Fri Jan 26 21:47:11 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Fri, 26 Jan 2007 12:47:11 -0800 Subject: [ python-Bugs-969718 ] BASECFLAGS are not passed to module build line Message-ID: Bugs item #969718, was opened at 2004-06-09 17:56 Message generated for change (Comment added) made by marienz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969718&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jason Beardsley (vaxhacker) Assigned to: Nobody/Anonymous (nobody) Summary: BASECFLAGS are not passed to module build line Initial Comment: The value of BASECFLAGS from /prefix/lib/pythonver/config/Makefile is not present on the compile command for modules being built by distutils ("python setup.py build"). It seems that only the value of OPT is passed along. This is insufficient when BASECFLAGS contains "-fno-strict-aliasing", since recent versions of gcc will emit incorrect (crashing) code if this flag is not provided, when compiling certain modules (the mx products from egenix, for example). I did try to set CFLAGS in my environment, as directed by documentation, but this also had zero effect on the final build command. ---------------------------------------------------------------------- Comment By: Marien Zwart (marienz) Date: 2007-01-26 21:47 Message: Logged In: YES user_id=857292 Originator: NO I'm seeing a variation of this bug in python 2.5.
As far as I can tell in python 2.4.3 on linux it passes BASECFLAGS and OPT, appending CFLAGS from the environment to that if set. In python 2.5 it passes CFLAGS from the Makefile (which is defined as $(BASECFLAGS) $(OPT) $(EXTRA_CFLAGS)), or OPT and the CFLAGS from the environment if CFLAGS is set there (this change was made in revision 45232). That means that if you run setup.py with CFLAGS set they must include -fno-strict-aliasing if using python 2.5. I think it would be preferable to prepend BASECFLAGS instead of OPT if CFLAGS is set in the environment. On my linux machine after building python 2.5 with CFLAGS set to "-O2 -march=athlon-xp" the Makefile has: OPT= -DNDEBUG -g -O3 -Wall -Wstrict-prototypes BASECFLAGS= -fno-strict-aliasing CFLAGS= $(BASECFLAGS) $(OPT) $(EXTRA_CFLAGS) If I run a setup.py with CFLAGS unset it runs: gcc -pthread -fno-strict-aliasing -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -fPIC ... Which is reasonable. If I run it with CFLAGS="-O2 -march=athlon-xp": gcc -pthread -DNDEBUG -g -O3 -Wall -Wstrict-prototypes -O2 -march=athlon-xp -fPIC ... Which misses -fno-strict-aliasing and still includes all the general flags that I'm trying to set through CFLAGS. If it used BASECFLAGS from the Makefile instead of OPT it would be: gcc -pthread -fno-strict-aliasing -O2 -march=athlon-xp -fPIC ... Which is what I think is the desired result here. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-04-12 10:05 Message: Logged In: YES user_id=21627 I don't think I will do anything about this anytime soon, so unassigning myself. ---------------------------------------------------------------------- Comment By: nyogtha (nyogtha) Date: 2006-01-13 22:19 Message: Logged In: YES user_id=1426882 This is still a bug in Python 2.4.2.
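The Makefile variables quoted above can be read back from a built interpreter; a hedged sketch using the stdlib sysconfig module (distutils.sysconfig plays the same role in 2.x), shown only to make the BASECFLAGS/OPT/CFLAGS split inspectable:

```python
# Print the flag variables distutils composes the compile line from.
# On a typical Unix build, CFLAGS expands to BASECFLAGS + OPT +
# EXTRA_CFLAGS, matching the Makefile excerpt in the comment above;
# on other platforms some of these variables may be absent (None).
import sysconfig

for var in ("BASECFLAGS", "OPT", "EXTRA_CFLAGS", "CFLAGS"):
    print(var, "=", sysconfig.get_config_var(var))
```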
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=969718&group_id=5470 From noreply at sourceforge.net Sat Jan 27 15:42:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 27 Jan 2007 06:42:13 -0800 Subject: [ python-Bugs-1645944 ] os.access now returns bool but docstring is not updated Message-ID: Bugs item #1645944, was opened at 2007-01-27 23:42 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645944&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: Seo Sanghyeon (sanxiyn) Assigned to: Nobody/Anonymous (nobody) Summary: os.access now returns bool but docstring is not updated Initial Comment: $ pydoc os.access os.access = access(...) access(path, mode) -> 1 if granted, 0 otherwise os.access now returns True/False, so this docstring is incorrect. 
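The behavior the docstring should document is easy to confirm; a minimal check:

```python
# In current versions os.access returns a real bool, not 1/0,
# which is what the docstring fix tracks.
import os

result = os.access(".", os.F_OK)
assert result is True            # "." always exists
assert isinstance(os.access(".", os.R_OK), bool)
```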
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645944&group_id=5470 From noreply at sourceforge.net Sat Jan 27 19:23:44 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 27 Jan 2007 10:23:44 -0800 Subject: [ python-Bugs-1646068 ] Dict lookups fail if sizeof(Py_ssize_t) < sizeof(long) Message-ID: Bugs item #1646068, was opened at 2007-01-27 18:23 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646068&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: ked-tao (ked-tao) Assigned to: Nobody/Anonymous (nobody) Summary: Dict lookups fail if sizeof(Py_ssize_t) < sizeof(long) Initial Comment: Portation problem. Include/dictobject.h defines PyDictEntry.me_hash as a Py_ssize_t. Everywhere else uses a C 'long' for hashes. On the system I'm porting to, ints and pointers (and ssize_t) are 32-bit, but longs and long longs are 64-bit. Therefore, the assignments to me_hash truncate the hash and subsequent lookups fail. I've changed the definition of me_hash to 'long' and (in Objects/dictobject.c) removed the casting from the various assignments and changed the definition of 'i' in dict_popitem(). This has fixed my immediate problems, but I guess I've just reintroduced whatever problem it got changed for. The comment in the header says: /* Cached hash code of me_key. Note that hash codes are C longs. * We have to use Py_ssize_t instead because dict_popitem() abuses * me_hash to hold a search finger. */ ... 
but that doesn't really explain what it is about dict_popitem() that requires the different type. Thanks. Kev. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646068&group_id=5470 From noreply at sourceforge.net Sat Jan 27 20:39:11 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 27 Jan 2007 11:39:11 -0800 Subject: [ python-Bugs-1645944 ] os.access now returns bool but docstring is not updated Message-ID: Bugs item #1645944, was opened at 2007-01-27 14:42 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645944&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: None >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Seo Sanghyeon (sanxiyn) >Assigned to: Georg Brandl (gbrandl) Summary: os.access now returns bool but docstring is not updated Initial Comment: $ pydoc os.access os.access = access(...) access(path, mode) -> 1 if granted, 0 otherwise os.access now returns True/False, so this docstring is incorrect. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-27 19:39 Message: Logged In: YES user_id=849994 Originator: NO Thanks for the report, fixed in rev. 
53579, 53580 (2.5) ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1645944&group_id=5470 From noreply at sourceforge.net Sat Jan 27 20:40:35 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 27 Jan 2007 11:40:35 -0800 Subject: [ python-Bugs-1646068 ] Dict lookups fail if sizeof(Py_ssize_t) < sizeof(long) Message-ID: Bugs item #1646068, was opened at 2007-01-27 18:23 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646068&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None >Priority: 6 Private: No Submitted By: ked-tao (ked-tao) >Assigned to: Tim Peters (tim_one) Summary: Dict lookups fail if sizeof(Py_ssize_t) < sizeof(long) Initial Comment: Portation problem. Include/dictobject.h defines PyDictEntry.me_hash as a Py_ssize_t. Everywhere else uses a C 'long' for hashes. On the system I'm porting to, ints and pointers (and ssize_t) are 32-bit, but longs and long longs are 64-bit. Therefore, the assignments to me_hash truncate the hash and subsequent lookups fail. I've changed the definition of me_hash to 'long' and (in Objects/dictobject.c) removed the casting from the various assignments and changed the definition of 'i' in dict_popitem(). This has fixed my immediate problems, but I guess I've just reintroduced whatever problem it got changed for. The comment in the header says: /* Cached hash code of me_key. Note that hash codes are C longs. * We have to use Py_ssize_t instead because dict_popitem() abuses * me_hash to hold a search finger. */ ... 
but that doesn't really explain what it is about dict_popitem() that requires the different type. Thanks. Kev. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-27 19:40 Message: Logged In: YES user_id=849994 Originator: NO This is your code, Tim. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646068&group_id=5470 From noreply at sourceforge.net Sun Jan 28 12:57:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 28 Jan 2007 03:57:31 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 11:14 Message generated for change (Comment added) made by rhamphoryncus You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. 
If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds approx. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 04:57 Message: Logged In: YES user_id=12364 Originator: NO Your PyErr_SetInterrupt needs to set is_tripped twice, like so: is_tripped = 1; Handlers[SIGINT].tripped = 1; Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); is_tripped = 1; The reason is that the signal handler may run in a thread while the main thread goes through check ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 14:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. Please, I don't know if the wording is right so feel free to comment on it. I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py works :-).
File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 13:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 13:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 12:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? 
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Sun Jan 28 13:02:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 28 Jan 2007 04:02:06 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 11:14 Message generated for change (Comment added) made by rhamphoryncus You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. 
This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds approx. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 05:02 Message: Logged In: YES user_id=12364 Originator: NO Augh, bloody firefox messed up my focus. Your PyErr_SetInterrupt needs to set the flags after, like so: Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); Handlers[SIGINT].tripped = 1; is_tripped = 1; The reason is that the signal handler may run in a thread while the main thread goes through PyErr_CheckSignals; the main thread may notice the flags, clear the flags, find nothing, then exit. You need the signal handler to supply all the data before setting the flags. Really though, if you fix enough signal problems you'll converge with the patch at http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470 No need for two patches that do the same thing.
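The window described above (a signal landing between the scan and the zeroing of is_tripped) can be modeled without real signals; a simplified sketch in plain Python, with all names standing in for their C counterparts in Modules/signalmodule.c:

```python
# Toy model of signalmodule.c's two-level flags: is_tripped
# summarizes "some signal fired", tripped[i] marks individual signals.
NSIG = 4
tripped = [False] * NSIG
is_tripped = False

def deliver(sig):
    """What the C-level signal handler does."""
    global is_tripped
    tripped[sig] = True
    is_tripped = True

def check_signals(clear_before_scan, late_sig=None):
    """Model of PyErr_CheckSignals; late_sig simulates a signal
    arriving mid-scan, after its own slot was already inspected."""
    global is_tripped
    if not is_tripped:
        return []                      # fast path: nothing pending
    if clear_before_scan:
        is_tripped = False             # proposed fix: zero summary first
    fired = []
    for sig in range(NSIG):
        if tripped[sig]:
            tripped[sig] = False
            fired.append(sig)
        if sig == 2 and late_sig is not None:
            deliver(late_sig)          # signal lands during the scan
            late_sig = None
    if not clear_before_scan:
        is_tripped = False             # buggy order: wipes the late signal
    return fired

# Buggy ordering: the late delivery of signal 0 is lost.
deliver(1)
assert check_signals(False, late_sig=0) == [1]
assert check_signals(False) == []      # summary clear, handler never runs
assert tripped[0]                      # ...even though its flag is set

# Fixed ordering: the late delivery re-raises the summary flag.
tripped[0] = False
deliver(1)
assert check_signals(True, late_sig=0) == [1]
assert check_signals(True) == [0]
```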
---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 04:57 Message: Logged In: YES user_id=12364 Originator: NO Your PyErr_SetInterrupt needs to set is_tripped twice, like so: is_tripped = 1; Handlers[SIGINT].tripped = 1; Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); is_tripped = 1; The reason is that the signal handler run in a thread while the main thread goes through check ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 14:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. Please, I don't know if the wording is right so feel free to comment on it. I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py work :-). File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 13:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 13:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. 
If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 12:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? 
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Sun Jan 28 23:18:20 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 28 Jan 2007 14:18:20 -0800 Subject: [ python-Bugs-1646630 ] ctypes.string_at(buf, 0) is seen as zero-terminated-string Message-ID: Bugs item #1646630, was opened at 2007-01-28 22:18 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Hölzl (johannes) Assigned to: Nobody/Anonymous (nobody) Summary: ctypes.string_at(buf, 0) is seen as zero-terminated-string Initial Comment: ctypes.string_at() interprets size=0 wrong. When the size argument is 0, ctypes.string_at (and probably wstring_at too) tries to read a zero-terminated string instead of an empty string. Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes import * >>> bytes = (c_char*3)("1", "2", "\0") >>> string_at(pointer(bytes)) '12' >>> string_at(pointer(bytes), 0) '12' >>> string_at(pointer(bytes), 1) '1' instead of: >>> string_at(pointer(bytes), 0) '' ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646630&group_id=5470 From noreply at sourceforge.net Mon Jan 29 03:21:31 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 28 Jan 2007 18:21:31 -0800 Subject: [ python-Bugs-1646728 ] datetime.fromtimestamp fails with negative fractional times Message-ID: Bugs item #1646728, was opened at 2007-01-29 10:21 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646728&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: James Henstridge (jhenstridge) Assigned to: Nobody/Anonymous (nobody) Summary: datetime.fromtimestamp fails with negative fractional times Initial Comment: The datetime.fromtimestamp() function works fine with integer timestamps and positive fractional timestamps, but fails if I pass a negative fractional timestamp. For example: >>> import datetime >>> datetime.datetime.fromtimestamp(-1.05) Traceback (most recent call last): File "", line 1, in ValueError: microsecond must be in 0..999999 It should return the same result as datetime.fromtimestamp(-1) - timedelta(seconds=.5). The same bug can be triggered in datetime.utcfromtimestamp(). I have been able to reproduce this bug in Python 2.4.4 and Python 2.5 on Linux. 
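On interpreters where this has since been fixed, the identity the reporter expects can be checked directly; a minimal sketch:

```python
# -1.05 seconds before the epoch is 1969-12-31 23:59:58.950000 UTC,
# i.e. fromtimestamp(-1) minus a 0.05 s timedelta, as the report says.
from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
dt = datetime.fromtimestamp(-1.05, tz=timezone.utc)

assert dt == epoch - timedelta(seconds=1.05)
assert (dt.second, dt.microsecond) == (58, 950000)
```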
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646728&group_id=5470 From noreply at sourceforge.net Mon Jan 29 08:30:15 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sun, 28 Jan 2007 23:30:15 -0800 Subject: [ python-Bugs-1646630 ] ctypes.string_at(buf, 0) is seen as zero-terminated-string Message-ID: Bugs item #1646630, was opened at 2007-01-28 23:18 Message generated for change (Settings changed) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646630&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Hölzl (johannes) >Assigned to: Thomas Heller (theller) Summary: ctypes.string_at(buf, 0) is seen as zero-terminated-string Initial Comment: ctypes.string_at() interprets size=0 wrong. When the size argument is 0, ctypes.string_at (and probably wstring_at too) tries to read a zero-terminated string instead of an empty string. Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Type "help", "copyright", "credits" or "license" for more information.
>>> from ctypes import * >>> bytes = (c_char*3)("1", "2", "\0") >>> string_at(pointer(bytes)) '12' >>> string_at(pointer(bytes), 0) '12' >>> string_at(pointer(bytes), 1) '1' instead of: >>> string_at(pointer(bytes), 0) '' ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646630&group_id=5470 From noreply at sourceforge.net Mon Jan 29 09:07:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 00:07:41 -0800 Subject: [ python-Bugs-1646838 ] os.path, %HOME% set: realpath contradicts expanduser on '~' Message-ID: Bugs item #1646838, was opened at 2007-01-29 09:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646838&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: wrstl prmpft (wrstlprmpft) Assigned to: Nobody/Anonymous (nobody) Summary: os.path, %HOME% set: realpath contradicts expanduser on '~' Initial Comment: This might be intentional, but it is still confusing. On Windows XP (german):: Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] ... In [1]: import os.path as path In [2]: import os; os.environ['HOME'] Out[2]: 'D:\\HOME' In [3]: path.realpath('~') Out[3]: 'C:\\Dokumente und Einstellungen\\wrstl\\~' In [4]: path.expanduser('~') Out[4]: 'D:\\HOME' The cause: realpath uses path._getfullpathname which seems to do the '~' expansion, while path.expanduser has special code to look for HOME* environment variables. I would expect that the HOME setting should always be honored if expansion is done. 
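expanduser's preference for HOME is simple to demonstrate on POSIX (Windows additionally consults USERPROFILE and HOMEDRIVE/HOMEPATH in a platform-dependent order, which is part of what makes the report above confusing); a sketch, with the path value purely illustrative:

```python
# On POSIX, os.path.expanduser resolves '~' from the HOME environment
# variable, which is the behavior the reporter expects realpath('~')
# to agree with.  The directory name below is illustrative only.
import os
import os.path

if os.name == "posix":               # HOME is authoritative only on POSIX
    os.environ["HOME"] = "/tmp/myhome"
    assert os.path.expanduser("~") == "/tmp/myhome"
    assert os.path.expanduser("~/sub") == "/tmp/myhome/sub"
```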
cheers, stefan ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1646838&group_id=5470 From noreply at sourceforge.net Mon Jan 29 09:13:07 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 00:13:07 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 19:14 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? 
I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds approx. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-29 09:13 Message: Logged In: YES user_id=21627 Originator: NO What I dislike about #1564547 is the introduction of the pipe. I don't think this is an appropriate change, and unnecessary to fix the problems discussed here. So if one of the patches is dropped, I'd rather drop #1564547. Also, I don't think it is necessary to set .tripped after Py_AddPendingCall. If there is a CheckSignals invocation already going on, it will invoke the handler just fine. What *is* necessary (IMO) is to set is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals call might clear is_tripped before .tripped gets set, and thus not invoke the signal handler. The subsequent CheckSignals would quit early because is_tripped is not set. So I think "a" right sequence is Handlers[SIGINT].tripped = 1; is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets cleared before .tripped. */ Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 13:02 Message: Logged In: YES user_id=12364 Originator: NO Augh, bloody firefox messed up my focus.
Your PyErr_SetInterrupt needs to set the flags after, like so: Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); Handlers[SIGINT].tripped = 1; is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals: the main thread may notice the flags, clear the flags, find nothing, then exit. You need the signal handler to supply all the data before setting the flags. Really though, if you fix enough signal problems you'll converge with the patch at http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470 No need for two patches that do the same thing. ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 12:57 Message: Logged In: YES user_id=12364 Originator: NO Your PyErr_SetInterrupt needs to set is_tripped twice, like so: is_tripped = 1; Handlers[SIGINT].tripped = 1; Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through check ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 22:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. Please, I don't know if the wording is right so feel free to comment on it. I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py works :-).
File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 21:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 21:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 20:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? 
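The race being discussed can be modeled in pure Python. This is a hypothetical sketch of the pre-patch logic (the real code is C, in the signal module); the names `tripped`, `signal_arrives`, and `check_signals` are illustrative only, and the "signal" is delivered synchronously at a chosen point in the loop to make the race deterministic:

```python
# Hypothetical model of the pre-patch logic: Handlers[i].tripped marks
# individual signals, is_tripped is the global "anything pending" flag,
# and is_tripped is cleared only AFTER the whole loop.
NSIG = 3
tripped = [0] * NSIG   # models Handlers[i].tripped
is_tripped = 0         # models the global is_tripped counter
handled = []           # signals whose Python handler was invoked

def signal_arrives(sig):
    global is_tripped
    tripped[sig] = 1
    is_tripped += 1

def check_signals(arrives_after_slot=None):
    """Models PyErr_CheckSignals(); a signal may arrive mid-loop."""
    global is_tripped
    if not is_tripped:
        return                       # fast path: nothing pending
    for sig in range(NSIG):
        if tripped[sig]:
            tripped[sig] = 0
            handled.append(sig)      # "call" the Python handler
        if sig == arrives_after_slot:
            signal_arrives(2)        # signal 2 arrives after its slot was checked
    is_tripped = 0                   # bug: also wipes the record of signal 2

signal_arrives(0)                    # e.g. SIGIO
check_signals(arrives_after_slot=2)  # signal 2 (e.g. SIGALRM) arrives mid-loop
check_signals()                      # fast path returns: signal 2 is "lost"
assert handled == [0]
assert tripped[2] == 1 and is_tripped == 0   # pending, but invisible
```

With Ulisses' patch, is_tripped is zeroed before the loop, so the late arrival re-raises the flag and the next check delivers the handler.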
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Mon Jan 29 13:31:17 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 04:31:17 -0800 Subject: [ python-Bugs-1647037 ] cookielib.CookieJar does not handle cookies when port in url Message-ID: Bugs item #1647037, was opened at 2007-01-29 12:31 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647037&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: STS (tools-sts) Assigned to: Nobody/Anonymous (nobody) Summary: cookielib.CookieJar does not handle cookies when port in url Initial Comment: In Python 2.5 the cookielib.CookieJar does not handle cookies (i.e., recognise the Set-Cookie: header) when the port is specified in the URL. e.g., import urllib2, cookielib cookiejar = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar)) # add proxy to view results proxy_handler = urllib2.ProxyHandler({'http':'127.0.0.1:8080'}) opener.add_handler(proxy_handler) # Install opener globally so it can be used with urllib2. urllib2.install_opener(opener) # The ':80' will cause the CookieJar to never handle the # cookie set by Google request = urllib2.Request('http://www.google.com.au:80/') response = opener.open(request) response = opener.open(request) # No Cookie: # But this works request = urllib2.Request('http://www.google.com.au/') response = opener.open(request) response = opener.open(request)# Cookie: PREF=ID=d2de0.. 
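For context on the cookie report above: the behaviour the reporter expects is what cookielib's successor does today — before host matching, any explicit :port is stripped from the request host. A quick sketch against Python 3's http.cookiejar (assuming a modern interpreter; in 2.5 the port was kept, which is this bug). No network access is needed, only the host-normalisation helper:

```python
import urllib.request
from http.cookiejar import request_host  # cookielib became http.cookiejar in Python 3

# The ':80' no longer makes the request host differ from the cookie domain:
req_with_port = urllib.request.Request('http://www.google.com.au:80/')
req_plain = urllib.request.Request('http://www.google.com.au/')
assert request_host(req_with_port) == request_host(req_plain) == 'www.google.com.au'
```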
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647037&group_id=5470 From noreply at sourceforge.net Mon Jan 29 21:54:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 12:54:28 -0800 Subject: [ python-Bugs-1227748 ] subprocess: inheritance of std descriptors inconsistent Message-ID: Bugs item #1227748, was opened at 2005-06-26 15:37 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227748&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: André Malo (ndparker) Assigned to: Peter Åstrand (astrand) Summary: subprocess: inheritance of std descriptors inconsistent Initial Comment: The inheritance of std descriptors is inconsistent between Unix and Windows implementations. If one calls Popen with stdin = stdout = stderr = None, the caller's std descriptors are inherited on *x, but not on Windows, because of the following optimization (from subprocess.py r1.20): 655 def _get_handles(self, stdin, stdout, stderr): 656 """Construct and return tupel with IO objects: 657 p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite 658 """ 659 if stdin is None and stdout is None and stderr is None: 660 return (None, None, None, None, None, None) 661 I suggest to just remove those lines 659 and 660. The current workaround is to duplicate the handles by the application and supply its own STARTUPINFO structure.
---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-29 21:54 Message: Logged In: YES user_id=344921 Originator: NO >If one calls Popen with stdin = stdout = stderr = None, >the caller's std descriptors are inherited on *x, but >not on Windows, This is a correct observation. However, the current implementation is not necessarily wrong. This could instead be seen as a consequence of the different environments. The subprocess documentation states that "With None, no redirection will occur". So, it becomes an interpretation of what this really means. Since the "default" behaviour on UNIX is to inherit and the default behaviour on Windows is to attach the standard handles to (an often newly created) console window, one could argue that this fits fairly well with the description "no redirection will occur". If we changed this, so that the parent's handles are always inherited, then how would you specify that you want to attach the standard handles to the new console window? For best flexibility, the API should allow both cases: Both inherit all handles from the parent as well as attaching all standard handles to the new console window. As you point out, the current API allows this. So why change this? One thing that's clearly a bug is the second part of the documentation: "With None, no redirection will occur; the child's file handles will be inherited from the parent" This is currently only true on UNIX. If we should keep the current behaviour, at least the comment needs to be fixed.
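The distinction drawn above — None meaning "no redirection", PIPE meaning explicit capture — can be seen directly with the modern API. A sketch using Python 3's subprocess.run (the thread discusses the 2.4-era module, but the None/PIPE semantics read the same way on the Python side):

```python
import subprocess
import sys

# stdout=None: no redirection; the child inherits/attaches to the parent's
# stdout, and nothing is captured on the Python side.
inherited = subprocess.run([sys.executable, '-c', 'pass'])
assert inherited.stdout is None

# stdout=PIPE: explicit redirection; the parent captures the output.
captured = subprocess.run([sys.executable, '-c', "print('hi')"],
                          stdout=subprocess.PIPE)
assert captured.stdout == b'hi\n'
```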
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1227748&group_id=5470 From noreply at sourceforge.net Mon Jan 29 22:42:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 13:42:51 -0800 Subject: [ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI Message-ID: Bugs item #1124861, was opened at 2005-02-17 17:23 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: davids (davidschein) Assigned to: Nobody/Anonymous (nobody) Summary: subprocess fails on GetStdHandle in interactive GUI Initial Comment: Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error. (All appears fine on Linux, the standard Python command-line, and ipython.) For example: >>> import subprocess >>> p = subprocess.Popen("dir", stdout=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- p = subprocess.Popen("dir", stdout=subprocess.PIPE) File "C:\Python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The error originates in the mswindows implementation of _get_handles.
You need to set one of stdin, stdout, or stderr because the first line in the method is: if stdin == None and stdout == None and stderr == None: ...return (None, None, None, None, None, None) I added "if not handle: return GetCurrentProcess()" to _make_inheritable() as below and it worked. Of course, I really do not know what is going on, so I am letting go now... def _make_inheritable(self, handle): ..."""Return a duplicate of handle, which is inheritable""" ...if not handle: return GetCurrentProcess() ...return DuplicateHandle(GetCurrentProcess(), handle, ....................................GetCurrentProcess(), 0, 1, ....................................DUPLICATE_SAME_ACCESS) ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-29 22:42 Message: Logged In: YES user_id=344921 Originator: NO Some ideas of possible solutions for this bug: 1) As Roger Upole suggests, throw a readable error when GetStdHandle fails. This would not really change much, besides subprocess being a little less confusing. 2) Automatically create PIPEs for those handles that fail. The PIPE could either be left open or closed. A WriteFile in the child would get ERROR_BROKEN_PIPE, if the parent has closed it. Not as good as ERROR_INVALID_HANDLE, but pretty close. (Or should I say pretty closed? :-) 3) Try to attach the handles to a NUL device, as 1238747 suggests. 4) Hope for the best and actually pass invalid handles in startupinfo.hStdInput, startupinfo.hStdOutput, or startupinfo.hStdError. It would be nice if this was possible: If GetStdHandle fails in the current process, it makes sense that GetStdHandle will fail in the child as well. But, as far as I understand, it's not possible or safe to pass invalid handles in the startupinfo structure. Currently, I'm leaning towards solution 2), with closing the parent's PIPE ends.
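Solution 3) above, attaching the missing standard handles to a NUL device, is roughly what later Python versions expose as subprocess.DEVNULL. A sketch under that assumption (DEVNULL is a later addition and did not exist in the 2.4-era module being discussed):

```python
import subprocess
import sys

# Attach all three standard handles explicitly, so the child never needs
# to inherit a (possibly invalid) console handle from a GUI parent.
p = subprocess.run(
    [sys.executable, '-c', "import sys; print(sys.stdin.read() or 'empty')"],
    stdin=subprocess.DEVNULL,    # reads see EOF instead of a dead handle
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL,
)
assert p.stdout == b'empty\n'
```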
---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:36 Message: Logged In: YES user_id=344921 Originator: NO The following bugs have been marked as duplicates of this bug: 1358527 1603907 1126208 1238747 ---------------------------------------------------------------------- Comment By: craig (codecraig) Date: 2006-10-13 17:54 Message: Logged In: YES user_id=1258995 On windows, this seems to work from subprocess import * p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) ....in some cases (depending on what command you are executing, a command prompt window may appear). To not show a window, use this... import win32con p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=win32con.CREATE_NO_WINDOW) ...google for Microsoft Process Creation Flags for more info ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:53 Message: Logged In: YES user_id=945502 This issue was discussed on comp.lang.python[1] and Roger Upole suggested: """ Basically, gui apps like VS don't have a console, so GetStdHandle returns 0. _subprocess.GetStdHandle returns None if the handle is 0, which gives the original error. Pywin32 just returns the 0, so the process gets one step further but still hits the above error. Subprocess.py should probably check the result of GetStdHandle for None (or 0) and throw a readable error that says something like "No standard handle available, you must specify one" """ [1]http://mail.python.org/pipermail/python-list/2005-September/300744.html ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-08-13 22:37 Message: Logged In: YES user_id=945502 I ran into a similar problem in Ellogon (www.ellogon.org) which interfaces with Python through tclpython (http://jfontain.free.fr/tclpython.htm).
My current workaround is to always set all of stdin, stdout, and stderr to subprocess.PIPE. I never use the stderr pipe, but at least this keeps the broken GetStdHandle calls from happening. Looking at the code, I kinda think the fix should be:: if handle is None: return handle return DuplicateHandle(GetCurrentProcess(), ... where if handle is None, it stays None. But I'm also probably in over my head here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 From noreply at sourceforge.net Mon Jan 29 22:45:32 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 13:45:32 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 11:14 Message generated for change (Comment added) made by rhamphoryncus You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. 
Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds approx. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-29 14:45 Message: Logged In: YES user_id=12364 Originator: NO To my knowledge, a pipe is the *only* way to reliably wake up the main thread from a signal handler in another thread. It's not necessary here simply because this bug only names a subset of the signal problems, whereas #1564547 attempts to fix all of them. Dropping it would be silly unless it were officially declared that the signal module and the threading module were incompatible. You're right about the .tripped/Py_AddPendingCall order. I got myself confused as to what Py_AddPendingCall did. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-29 01:13 Message: Logged In: YES user_id=21627 Originator: NO What I dislike about #1564547 is the introduction of the pipe.
I don't think this is an appropriate change, and unnecessary to fix the problems discussed here. So if one of the patches is dropped, I'd rather drop #1564547. Also, I don't think it is necessary to set .tripped after Py_AddPendingCall. If there is a CheckSignals invocation already going on, it will invoke the handler just fine. What *is* necessary (IMO) is to set is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals call might clear is_tripped before .tripped gets set, and thus not invoke the signal handler. The subsequent CheckSignals would quit early because is_tripped is not set. So I think "a" right sequence is Handlers[SIGINT].tripped = 1; is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets cleared before .tripped. */ Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 05:02 Message: Logged In: YES user_id=12364 Originator: NO Augh, bloody firefox messed up my focus. Your PyErr_SetInterrupt needs to set the flags after, like so: Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); Handlers[SIGINT].tripped = 1; is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals: the main thread may notice the flags, clear the flags, find nothing, then exit. You need the signal handler to supply all the data before setting the flags. Really though, if you fix enough signal problems you'll converge with the patch at http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470 No need for two patches that do the same thing.
---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 04:57 Message: Logged In: YES user_id=12364 Originator: NO Your PyErr_SetInterrupt needs to set is_tripped twice, like so: is_tripped = 1; Handlers[SIGINT].tripped = 1; Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through check ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 14:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. Please, I don't know if the wording is right so feel free to comment on it. I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py works :-). File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 13:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 13:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0.
If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 12:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments? File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Mon Jan 29 23:04:04 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 14:04:04 -0800 Subject: [ python-Bugs-1643738 ] Problem with signals in a single-threaded application Message-ID: Bugs item #1643738, was opened at 2007-01-24 19:14 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Ulisses Furquim (ulissesf) Assigned to: Nobody/Anonymous (nobody) Summary: Problem with signals in a single-threaded application Initial Comment: I'm aware of the problems with signals in a multithreaded application, but I was using signals in a single-threaded application and noticed something that seemed wrong. Some signals were apparently being lost, but when another signal came in the python handler for that "lost" signal was being called. The problem seems to be inside the signal module. The global variable is_tripped is incremented every time a signal arrives. Then, inside PyErr_CheckSignals() (the pending call that calls all python handlers for signals that arrived) we can return immediately if is_tripped is zero. If is_tripped is different than zero, we loop through all signals calling the registered python handlers and after that we zero is_tripped. This seems to be ok, but what happens if a signal arrives after we've returned from its handler (or even after we've checked if that signal arrived) and before we zero is_tripped? I guess we can have a situation where is_tripped is zero but some Handlers[i].tripped are not. In fact, I've inserted some debugging output and could see that this actually happens and then I've written the attached test program to reproduce the problem. When we run this program, the handler for the SIGALRM isn't called after we return from the SIGIO handler. We return to our main loop and print 'Loop!' every 3 seconds approx. and the SIGALRM handler is called only when another signal arrives (like when we hit Ctrl-C). ---------------------------------------------------------------------- >Comment By: Martin v. Löwis (loewis) Date: 2007-01-29 23:04 Message: Logged In: YES user_id=21627 Originator: NO rhamphoryncus, see the discussion on #1564547 about that patch.
I believe there are better ways to address the issues it raises, in particular by means of pthread_kill. It's certainly more reliable than a pipe (which wakes up the main thread only if it was polling the pipe). ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-29 22:45 Message: Logged In: YES user_id=12364 Originator: NO To my knowledge, a pipe is the *only* way to reliably wake up the main thread from a signal handler in another thread. It's not necessary here simply because this bug only names a subset of the signal problems, whereas #1564547 attempts to fix all of them. Dropping it would be silly unless it were officially declared that the signal module and the threading module were incompatible. You're right about the .tripped/Py_AddPendingCall order. I got myself confused as to what Py_AddPendingCall did. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2007-01-29 09:13 Message: Logged In: YES user_id=21627 Originator: NO What I dislike about #1564547 is the introduction of the pipe. I don't think this is an appropriate change, and unnecessary to fix the problems discussed here. So if one of the patches is dropped, I'd rather drop #1564547. Also, I don't think it is necessary to set .tripped after Py_AddPendingCall. If there is a CheckSignals invocation already going on, it will invoke the handler just fine. What *is* necessary (IMO) is to set is_tripped after setting .tripped: Otherwise, an in-progress CheckSignals call might clear is_tripped before .tripped gets set, and thus not invoke the signal handler. The subsequent CheckSignals would quit early because is_tripped is not set. So I think "a" right sequence is Handlers[SIGINT].tripped = 1; is_tripped = 1; /* Set is_tripped after setting .tripped, as it gets cleared before .tripped.
*/ Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 13:02 Message: Logged In: YES user_id=12364 Originator: NO Augh, bloody firefox messed up my focus. Your PyErr_SetInterrupt needs to set the flags after, like so: Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); Handlers[SIGINT].tripped = 1; is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through PyErr_CheckSignals: the main thread may notice the flags, clear the flags, find nothing, then exit. You need the signal handler to supply all the data before setting the flags. Really though, if you fix enough signal problems you'll converge with the patch at http://sourceforge.net/tracker/index.php?func=detail&aid=1564547&group_id=5470&atid=305470 No need for two patches that do the same thing. ---------------------------------------------------------------------- Comment By: Adam Olsen (rhamphoryncus) Date: 2007-01-28 12:57 Message: Logged In: YES user_id=12364 Originator: NO Your PyErr_SetInterrupt needs to set is_tripped twice, like so: is_tripped = 1; Handlers[SIGINT].tripped = 1; Py_AddPendingCall((int (*)(void *))PyErr_CheckSignals, NULL); is_tripped = 1; The reason is that the signal handler runs in a thread while the main thread goes through check ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 22:09 Message: Logged In: YES user_id=1578960 Originator: YES Yep, you're right, Tony Nelson. We overlooked this case but we can zero is_tripped after the test for threading as you've already said. The patch was updated and it also includes the code comment Tim Peters suggested. Please, I don't know if the wording is right so feel free to comment on it.
I still plan to write a test case for the problem being solved (as soon as I understand how test_signals.py works :-). File Added: signals-v1.patch ---------------------------------------------------------------------- Comment By: Tony Nelson (tony_nelson) Date: 2007-01-24 21:24 Message: Logged In: YES user_id=1356214 Originator: NO ISTM that is_tripped should be zeroed after the test for threading, so that signals will finally get handled when the proper thread is running. ---------------------------------------------------------------------- Comment By: Tim Peters (tim_one) Date: 2007-01-24 21:19 Message: Logged In: YES user_id=31435 Originator: NO Very nice! I'd add a description of the minor pathology remaining you described here as a code comment, at the point is_tripped is set to 0. If this stuff were screamingly obvious, the bug you fixed wouldn't have persisted for 15 years ;-) ---------------------------------------------------------------------- Comment By: Ulisses Furquim (ulissesf) Date: 2007-01-24 20:46 Message: Logged In: YES user_id=1578960 Originator: YES This patch is very simple. We didn't want to remove the is_tripped variable because PyErr_CheckSignals() is called several times directly so it would be nice if we could return immediately if no signals arrived. We also didn't want to run the registered handlers with any set of signals blocked. Thus, we thought of zeroing is_tripped as soon as we know there are signals to be handled (after we test is_tripped). This way most of the times we can return immediately because is_tripped is zero and we also don't need to block any signals. However, with this approach we can have a situation where is_tripped isn't zero but we have no signals to handle, so we'll loop through all signals and no registered handler will be called. This happens when we receive a signal after we zero is_tripped and before we check Handlers[i].tripped for that signal. Any comments?
File Added: signals-v0.patch ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1643738&group_id=5470 From noreply at sourceforge.net Mon Jan 29 23:35:22 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 14:35:22 -0800 Subject: [ python-Bugs-1647489 ] zero-length match confuses re.finditer() Message-ID: Bugs item #1647489, was opened at 2007-01-29 14:35 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647489&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Regular Expressions Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Jacques Frechet (jfrechet) Assigned to: Gustavo Niemeyer (niemeyer) Summary: zero-length match confuses re.finditer() Initial Comment: Hi! re.finditer() seems to incorrectly increment the current position immediately after matching a zero-length substring. For example: >>> [m.groups() for m in re.finditer(r'(^z*)|(\w+)', 'abc')] [('', None), (None, 'bc')] What happened to the 'a'? I expected this result: [('', None), (None, 'abc')] Perl agrees with me: % perl -le 'print defined($1)?"\"$1\"":"undef",",",defined($2)?"\"$2\"":"undef" while "abc" =~ /(z*)|(\w+)/g' "",undef undef,"abc" "",undef Similarly, if I remove the ^: >>> [m.groups() for m in re.finditer(r'(z*)|(\w+)', 'abc')] [('', None), ('', None), ('', None), ('', None)] Now all of the letters have fallen through the cracks! 
I expected this result: [('', None), (None, 'abc'), ('', None)] Again, perl agrees: % perl -le 'print defined($1)?"\"$1\"":"undef",",",defined($2)?"\"$2\"":"undef" while "abc" =~ /(z*)|(\w+)/g' "",undef undef,"abc" "",undef If this bug has already been reported, I apologize -- I wasn't able to find it here. I haven't looked at the code for the re module, but this seems like the sort of bug that might have been accidentally introduced in order to try to prevent the same zero-length match from being returned forever. Thanks, Jacques ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647489&group_id=5470 From noreply at sourceforge.net Tue Jan 30 01:04:47 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 16:04:47 -0800 Subject: [ python-Bugs-1647541 ] SystemError with re.match(array) Message-ID: Bugs item #1647541, was opened at 2007-01-30 00:04 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
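[Editorial note on the finditer report above] The scanning behaviour complained about was eventually changed: per the re documentation, from Python 3.7 on "non-empty matches can now start just after a previous empty match", which yields exactly the Perl-like result the reporter expected. A quick check on a modern interpreter:

```python
import re

# Zero-length alternative first, then a word: on Python 3.7+ the 'abc'
# no longer falls through the cracks after the empty match at position 0.
result = [m.groups() for m in re.finditer(r'(z*)|(\w+)', 'abc')]
print(result)   # [('', None), (None, 'abc'), ('', None)] on Python 3.7+
```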
Category: Extension Modules Group: None Status: Open Resolution: None Priority: 4 Private: No Submitted By: Armin Rigo (arigo) Assigned to: Nobody/Anonymous (nobody) Summary: SystemError with re.match(array) Initial Comment: A small issue which I guess is to be found in the implementation of the buffer interface for zero-length arrays: >>> a = array.array("c") >>> r = re.compile("bla") >>> r.match(a) SystemError: error return without exception set ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 From noreply at sourceforge.net Tue Jan 30 06:21:11 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 21:21:11 -0800 Subject: [ python-Bugs-1647541 ] SystemError with re.match(array) Message-ID: Bugs item #1647541, was opened at 2007-01-29 16:04 Message generated for change (Comment added) made by nnorwitz You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Extension Modules Group: None Status: Open Resolution: None Priority: 4 Private: No Submitted By: Armin Rigo (arigo) >Assigned to: Armin Rigo (arigo) Summary: SystemError with re.match(array) Initial Comment: A small issue which I guess is to be found in the implementation of the buffer interface for zero-length arrays: >>> a = array.array("c") >>> r = re.compile("bla") >>> r.match(a) SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-29 21:21 Message: Logged In: YES user_id=33168 Originator: NO Armin, what do you think of the attached patch?
File Added: empty-array.diff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 From noreply at sourceforge.net Tue Jan 30 06:48:17 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Mon, 29 Jan 2007 21:48:17 -0800 Subject: [ python-Bugs-1647654 ] No obvious and correct way to get the time zone offset Message-ID: Bugs item #1647654, was opened at 2007-01-30 13:48 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647654&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None Status: Open Resolution: None Priority: 5 Private: No Submitted By: James Henstridge (jhenstridge) Assigned to: Nobody/Anonymous (nobody) Summary: No obvious and correct way to get the time zone offset Initial Comment: It would be nice if the Python time module provided an obvious way to get the local time UTC offset for an arbitrary time stamp. The existing constants included in the module are not sufficient to correctly determine this value. As context, in the Bazaar version control system (written in Python), the local time UTC offset is recorded in each commit. The method used in releases prior to 0.14 made use of the "daylight", "timezone" and "altzone" constants from the time module like this: if time.localtime(t).tm_isdst and time.daylight: return -time.altzone else: return -time.timezone This worked most of the time, but would occasionally give incorrect results. On Linux, the local time system can handle different daylight saving rules for different spans of years. For years where the rules change, these constants can provide incorrect data.
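[Editorial note] The datetime-based implementation Bazaar ended up with, quoted later in this report, can be packaged as a small function. The name local_utc_offset is mine, not from the report:

```python
import time
from datetime import datetime

def local_utc_offset(t):
    # Difference between local wall time and UTC wall time for the *same*
    # timestamp.  Unlike the module constants time.timezone/time.altzone,
    # this consults the full local-zone rules in effect at time t, so
    # historical DST rule changes are honoured.
    offset = datetime.fromtimestamp(t) - datetime.utcfromtimestamp(t)
    return offset.days * 86400 + offset.seconds

print(local_utc_offset(time.time()))
```

The report's "another alternative" later happened as well: struct_time grew a tm_gmtoff field in Python 3.3 on platforms that provide it. (On 3.12+, datetime.utcfromtimestamp() is deprecated; datetime.fromtimestamp(t).astimezone().utcoffset() gives the same value as a timedelta.)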
Furthermore, they may be incorrect for time stamps in the past. I personally ran into this problem last December when Western Australia adopted daylight saving -- time.altzone gave an incorrect value until the start of 2007. Having a function in the standard library to calculate this offset would solve the problem. The implementation we ended up with for Bazaar was: offset = datetime.fromtimestamp(t) - datetime.utcfromtimestamp(t) return offset.days * 86400 + offset.seconds Another alternative would be to expose tm_gmtoff on time tuples (perhaps using the above code to synthesise it on platforms that don't have the field). ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647654&group_id=5470 From noreply at sourceforge.net Tue Jan 30 14:07:56 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 05:07:56 -0800 Subject: [ python-Bugs-1633863 ] AIX: configure ignores $CC; problems with C++ style comments Message-ID: Bugs item #1633863, was opened at 2007-01-12 09:46 Message generated for change (Comment added) made by jabt You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: AIX: configure ignores $CC; problems with C++ style comments Initial Comment: CC=xlc_r ./configure does not work on AIX-5.1, because configure unconditionally sets $CC to "cc_r": case $ac_sys_system in AIX*) CC=cc_r without_gcc=;; It would be better to leave $CC and just add "-qthreaded" to $CFLAGS. Furthermore, much of the C source code of Python uses C++ /C99 comments. 
This is an error with the standard AIX compiler. Please add the compiler flag "-qcpluscmt". An alternative would be to use a default of "xlc_r" for CC on AIX. This calls the compiler in a mode that both accepts C++ comments and generates reentrant code. Regards, Johannes ---------------------------------------------------------------------- >Comment By: Johannes Abt (jabt) Date: 2007-01-30 14:07 Message: Logged In: YES user_id=1563402 Originator: YES Sorry about the C++ comments... all the C++ comments I have found concern Windows, PC or Darwin. I must have confused this with another project I have been compiling. Though there is still the issue with setting $CC. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-19 06:47 Message: Logged In: YES user_id=33168 Originator: NO There shouldn't be any C++ comments in the Python code. If there are, it is a mistake. I did see some get removed recently. Could you let me know where you see the C++ comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 From noreply at sourceforge.net Tue Jan 30 14:37:55 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 05:37:55 -0800 Subject: [ python-Bugs-1647541 ] SystemError with re.match(array) Message-ID: Bugs item #1647541, was opened at 2007-01-30 00:04 Message generated for change (Comment added) made by arigo You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Extension Modules Group: None Status: Open Resolution: None Priority: 4 Private: No Submitted By: Armin Rigo (arigo) Assigned to: Armin Rigo (arigo) Summary: SystemError with re.match(array) Initial Comment: A small issue which I guess is to be found in the implementation of the buffer interface for zero-length arrays: >>> a = array.array("c") >>> r = re.compile("bla") >>> r.match(a) SystemError: error return without exception set ---------------------------------------------------------------------- >Comment By: Armin Rigo (arigo) Date: 2007-01-30 13:37 Message: Logged In: YES user_id=4771 Originator: YES It seems to me that an empty array should be equivalent to an empty string. Accessing it as a buffer should return a buffer of length 0, not raise ValueError. In all cases, the fix in _sre.c is sensible. ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-30 05:21 Message: Logged In: YES user_id=33168 Originator: NO Armin, what do you think of the attached patch? File Added: empty-array.diff ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1647541&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:00:01 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:00:01 -0800 Subject: [ python-Bugs-1648179 ] set update problem with class derived from dict Message-ID: Bugs item #1648179, was opened at 2007-01-30 20:00 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648179&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
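[Editorial note on the re.match(array) item above] The empty-buffer behaviour Armin argues for is what fixed Pythons exhibit. This probe is written for Python 3, where bytes patterns accept any object supporting the buffer protocol and the 2.x "c" typecode is gone, so a "b" array stands in:

```python
import array
import re

a = array.array("b")        # zero-length array: exposes an empty buffer
r = re.compile(b"bla")
print(r.match(a))           # None: simply no match, not a SystemError

# And an empty array behaves like an empty string, as Armin suggests:
print(re.compile(b"").match(a))   # matches, with a zero-length span
```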
Category: None Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: duncan (urubu147) Assigned to: Nobody/Anonymous (nobody) Summary: set update problem with class derived from dict Initial Comment: Class derived from dict with __iter__ method returning itervalues() has keys (rather than values) added to set when using set update method. Works as expected in 2.4. Windows XP (Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32). Unsure of platform for Peter Otten's minimal example in (hopefully) attached file. Duncan Smith ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648179&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:04:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:04:02 -0800 Subject: [ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI Message-ID: Bugs item #1124861, was opened at 2005-02-17 17:23 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: davids (davidschein) Assigned to: Nobody/Anonymous (nobody) Summary: subprocess fails on GetStdHandle in interactive GUI Initial Comment: Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error. (All appears fine on Linux, the standard Python command-line, and ipython.)
For example: >>> import subprocess >>> p = subprocess.Popen("dir", stdout=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- p = subprocess.Popen("dir", stdout=subprocess.PIPE) File "C:\Python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The error originates in the mswindows implementation of _get_handles. You need to set one of stdin, stdout, or stderr because the first line in the method is: if stdin == None and stdout == None and stderr == None: ...return (None, None, None, None, None, None) I added "if not handle: return GetCurrentProcess()" to _make_inheritable() as below and it worked. Of course, I really do not know what is going on, so I am letting go now... def _make_inheritable(self, handle): ..."""Return a duplicate of handle, which is inheritable""" ...if not handle: return GetCurrentProcess() ...return DuplicateHandle(GetCurrentProcess(), handle, ....................................GetCurrentProcess(), 0, 1, ....................................DUPLICATE_SAME_ACCESS) ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-30 21:04 Message: Logged In: YES user_id=344921 Originator: NO File Added: 1124861.3.patch ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-29 22:42 Message: Logged In: YES user_id=344921 Originator: NO Some ideas of possible solutions for this bug: 1) As Roger Upole suggests, throw a readable error when GetStdHandle fails. This would not really change much, besides making subprocess a little less confusing. 2) Automatically create PIPEs for those handles that fail. The PIPE could either be left open or closed.
A WriteFile in the child would get ERROR_BROKEN_PIPE, if the parent has closed it. Not as good as ERROR_INVALID_HANDLE, but pretty close. (Or should I say pretty closed? :-) 3) Try to attach the handles to a NUL device, as 1238747 suggests. 4) Hope for the best and actually pass invalid handles in startupinfo.hStdInput, startupinfo.hStdOutput, or startupinfo.hStdError. It would be nice if this was possible: If GetStdHandle fails in the current process, it makes sense that GetStdHandle will fail in the child as well. But, as far as I understand, it's not possible or safe to pass invalid handles in the startupinfo structure. Currently, I'm leaning towards solution 2), with closing the parent's PIPE ends. ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:36 Message: Logged In: YES user_id=344921 Originator: NO The following bugs have been marked as duplicates of this bug: 1358527 1603907 1126208 1238747 ---------------------------------------------------------------------- Comment By: craig (codecraig) Date: 2006-10-13 17:54 Message: Logged In: YES user_id=1258995 On Windows, this seems to work: from subprocess import * p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) ....in some cases (depending on what command you are executing, a command prompt window may appear). To not show a window, use this... import win32con p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=win32con.CREATE_NO_WINDOW) ...google for Microsoft Process Creation Flags for more info ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:53 Message: Logged In: YES user_id=945502 This issue was discussed on comp.lang.python[1] and Roger Upole suggested: """ Basically, gui apps like VS don't have a console, so GetStdHandle returns 0. _subprocess.GetStdHandle returns None if the handle is 0, which gives the original error.
Pywin32 just returns the 0, so the process gets one step further but still hits the above error. Subprocess.py should probably check the result of GetStdHandle for None (or 0) and throw a readable error that says something like "No standard handle available, you must specify one" """ [1]http://mail.python.org/pipermail/python-list/2005-September/300744.html ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-08-13 22:37 Message: Logged In: YES user_id=945502 I ran into a similar problem in Ellogon (www.ellogon.org) which interfaces with Python through tclpython (http://jfontain.free.fr/tclpython.htm). My current workaround is to always set all of stdin, stdout, and stderr to subprocess.PIPE. I never use the stderr pipe, but at least this keeps the broken GetStdHandle calls from happening. Looking at the code, I kinda think the fix should be:: if handle is None: return handle return DuplicateHandle(GetCurrentProcess(), ... where if handle is None, it stays None. But I'm also probably in over my head here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:05:28 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:05:28 -0800 Subject: [ python-Bugs-1124861 ] subprocess fails on GetStdHandle in interactive GUI Message-ID: Bugs item #1124861, was opened at 2005-02-17 17:23 Message generated for change (Comment added) made by astrand You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Windows Group: Python 2.4 Status: Open Resolution: None Priority: 7 Private: No Submitted By: davids (davidschein) Assigned to: Nobody/Anonymous (nobody) Summary: subprocess fails on GetStdHandle in interactive GUI Initial Comment: Using the subprocess module from within IDLE or PyWindows, it appears that calls to GetStdHandle (STD__HANDLE) return None, which causes an error. (All appears fine on Linux, the standard Python command-line, and ipython.) For example: >>> import subprocess >>> p = subprocess.Popen("dir", stdout=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in -toplevel- p = subprocess.Popen("dir", stdout=subprocess.PIPE) File "C:\Python24\lib\subprocess.py", line 545, in __init__ (p2cread, p2cwrite, File "C:\Python24\lib\subprocess.py", line 605, in _get_handles p2cread = self._make_inheritable(p2cread) File "C:\Python24\lib\subprocess.py", line 646, in _make_inheritable DUPLICATE_SAME_ACCESS) TypeError: an integer is required The error originates in the mswindows implementation of _get_handles. You need to set one of stdin, stdout, or stderr because the first line in the method is: if stdin == None and stdout == None and stderr == None: ...return (None, None, None, None, None, None) I added "if not handle: return GetCurrentProcess()" to _make_inheritable() as below and it worked. Of course, I really do not know what is going on, so I am letting go now... def _make_inheritable(self, handle): ..."""Return a duplicate of handle, which is inheritable""" ...if not handle: return GetCurrentProcess() ...return DuplicateHandle(GetCurrentProcess(), handle, ....................................GetCurrentProcess(), 0, 1, ....................................DUPLICATE_SAME_ACCESS) ---------------------------------------------------------------------- >Comment By: Peter Åstrand (astrand) Date: 2007-01-30 21:05 Message: Logged In: YES user_id=344921 Originator: NO Please review 1124861.3.patch.
---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-30 21:04 Message: Logged In: YES user_id=344921 Originator: NO File Added: 1124861.3.patch ---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-29 22:42 Message: Logged In: YES user_id=344921 Originator: NO Some ideas of possible solutions for this bug: 1) As Roger Upole suggests, throw a readable error when GetStdHandle fails. This would not really change much, besides making subprocess a little less confusing. 2) Automatically create PIPEs for those handles that fail. The PIPE could either be left open or closed. A WriteFile in the child would get ERROR_BROKEN_PIPE, if the parent has closed it. Not as good as ERROR_INVALID_HANDLE, but pretty close. (Or should I say pretty closed? :-) 3) Try to attach the handles to a NUL device, as 1238747 suggests. 4) Hope for the best and actually pass invalid handles in startupinfo.hStdInput, startupinfo.hStdOutput, or startupinfo.hStdError. It would be nice if this was possible: If GetStdHandle fails in the current process, it makes sense that GetStdHandle will fail in the child as well. But, as far as I understand, it's not possible or safe to pass invalid handles in the startupinfo structure. Currently, I'm leaning towards solution 2), with closing the parent's PIPE ends.
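[Editorial note] The workaround that recurs throughout this thread, giving the child explicit handles for all three streams so subprocess never has to duplicate the (possibly missing) console handles of a GUI parent, looks like this. Using sys.executable as the child is my choice here, just to keep the example portable:

```python
import subprocess
import sys

# Supplying PIPE for stdin, stdout and stderr means _get_handles() never
# falls back on GetStdHandle(), which can return no handle in a GUI process.
p = subprocess.Popen(
    [sys.executable, "-c", "print('ok')"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = p.communicate()
print(out)
```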
---------------------------------------------------------------------- Comment By: Peter Åstrand (astrand) Date: 2007-01-22 20:36 Message: Logged In: YES user_id=344921 Originator: NO The following bugs have been marked as duplicates of this bug: 1358527 1603907 1126208 1238747 ---------------------------------------------------------------------- Comment By: craig (codecraig) Date: 2006-10-13 17:54 Message: Logged In: YES user_id=1258995 On Windows, this seems to work: from subprocess import * p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE) ....in some cases (depending on what command you are executing, a command prompt window may appear). To not show a window, use this... import win32con p = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE, creationflags=win32con.CREATE_NO_WINDOW) ...google for Microsoft Process Creation Flags for more info ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-09-26 16:53 Message: Logged In: YES user_id=945502 This issue was discussed on comp.lang.python[1] and Roger Upole suggested: """ Basically, gui apps like VS don't have a console, so GetStdHandle returns 0. _subprocess.GetStdHandle returns None if the handle is 0, which gives the original error. Pywin32 just returns the 0, so the process gets one step further but still hits the above error. Subprocess.py should probably check the result of GetStdHandle for None (or 0) and throw a readable error that says something like "No standard handle available, you must specify one" """ [1]http://mail.python.org/pipermail/python-list/2005-September/300744.html ---------------------------------------------------------------------- Comment By: Steven Bethard (bediviere) Date: 2005-08-13 22:37 Message: Logged In: YES user_id=945502 I ran into a similar problem in Ellogon (www.ellogon.org) which interfaces with Python through tclpython (http://jfontain.free.fr/tclpython.htm).
My current workaround is to always set all of stdin, stdout, and stderr to subprocess.PIPE. I never use the stderr pipe, but at least this keeps the broken GetStdHandle calls from happening. Looking at the code, I kinda think the fix should be:: if handle is None: return handle return DuplicateHandle(GetCurrentProcess(), ... where if handle is None, it stays None. But I'm also probably in over my head here. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1124861&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:18:50 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:18:50 -0800 Subject: [ python-Bugs-1648191 ] Grammatical error Message-ID: Bugs item #1648191, was opened at 2007-01-30 14:18 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648191&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Chris Beelby (mrchris007) Assigned to: Nobody/Anonymous (nobody) Summary: Grammatical error Initial Comment: I was viewing the documentation located at: http://docs.python.org/lib/module-imageop.html which is section 19.2 (imageop -- Manipulate raw image data) of the Python Library Reference. Under the description for the crop() method the first line reads: "Return the selected part of image, which should **by** width by height in size..." (I added the asterisks) The word which I highlighted with the asterisks should be "be", not "by".
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648191&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:22:06 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:22:06 -0800 Subject: [ python-Bugs-1648191 ] Grammatical error Message-ID: Bugs item #1648191, was opened at 2007-01-30 20:18 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648191&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Documentation Group: Python 2.5 >Status: Closed >Resolution: Fixed Priority: 5 Private: No Submitted By: Chris Beelby (mrchris007) Assigned to: Nobody/Anonymous (nobody) Summary: Grammatical error Initial Comment: I was viewing the documentation located at: http://docs.python.org/lib/module-imageop.html Which is section 19.2 (imageop -- Manipulate raw image data) of the Python Library Reference. Under the description for the crop() method the first line reads: "Return the selected part of image, which should **by** width by height in size..." (I added the asterisk's) The word which I highlighted with the asterik's should be "be", not "by". ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-30 20:22 Message: Logged In: YES user_id=849994 Originator: NO Thanks for the report, fixed in rev. 53603, 53604 (2.5). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648191&group_id=5470 From noreply at sourceforge.net Tue Jan 30 21:23:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 12:23:43 -0800 Subject: [ python-Bugs-1648179 ] set update problem with class derived from dict Message-ID: Bugs item #1648179, was opened at 2007-01-30 20:00 Message generated for change (Settings changed) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648179&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: duncan (urubu147) >Assigned to: Raymond Hettinger (rhettinger) Summary: set update problem with class derived from dict Initial Comment: Class derived from dict with __iter__ method returning itervalues() has keys (rather than values) added to set when using set update method. Works as expected in 2.4. Windows XP (Python 2.5 (r25:51908, Sep 19 2006, 09:52:17) [MSC v.1310 32 bit (Intel)] on win32). Unsure of platform for Peter Otten's minimal example in (hopefully) attached file. 
Duncan Smith ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648179&group_id=5470 From noreply at sourceforge.net Tue Jan 30 22:20:13 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 13:20:13 -0800 Subject: [ python-Bugs-1568075 ] GUI scripts always return to an interpreter Message-ID: Bugs item #1568075, was opened at 2006-09-29 16:00 Message generated for change (Comment added) made by jejackson You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568075&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Macintosh Group: Python 2.5 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: jjackson (jejackson) Assigned to: Jack Jansen (jackjansen) Summary: GUI scripts always return to an interpreter Initial Comment: I installed the latest version of 2.5 from the web last night: Python 2.5 (r25:51918, Sep 19 2006, 08:49:13) [GCC 4.0.1 (Apple Computer, Inc. build 5341)] on darwin When I run a wxPython script, using something like pythonw myScript.py from the Terminal, I find myself in an interpreter after I use the quit menu. The menubar becomes a single, hung python menu, and a shell window pops up with an interpreter prompt. Cntrl-D kills the interpreter. It's as if python was stuck in "-i" mode: pythonw -i myScript.py gives the same results. (python and pythonw give the same results. It appears from comments on the web that they are now the same. They appear so from a diff. If so, why not a symlink?) Running the lastest wxPython demo gives this warning in the console, 2006-09-29 15:40:06.681 wxPython Demo[942] WARNING: _wrapRunLoopWithAutoreleasePoolHandler got kCFRunLoopExit, but there are no autorelease pools in the stack. 
which may or may not be related. ---------------------------------------------------------------------- >Comment By: jjackson (jejackson) Date: 2007-01-30 13:20 Message: Logged In: YES user_id=1497873 Originator: YES After a bunch of troubleshooting, I've determined that the problem is caused by the Unsanity Application Enhancer Module (APE) 2.0.2. Deactivating or removing that module makes the problem go away. Maybe they are patching something that causes the python.app to hang on exit. I've set the resolution to invalid as this is not wxpython or python's problem. ---------------------------------------------------------------------- Comment By: jjackson (jejackson) Date: 2006-09-30 14:48 Message: Logged In: YES user_id=1497873 I tried: Olivos:~ jj$ echo $PYTHONINSPECT Olivos:~ jj$ Looks like it isn't set. However, this might be a wxPython bug. I tried a tcl/tk app from the demos: Applications/MacPython 2.5/Extras/Demo/tkinter/guido/solitaire.py. Quitting it worked fine. I'll post something on the wxPython mac list. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-09-30 11:24 Message: Logged In: YES user_id=849994 Could you check if it is set? (using echo $PYTHONINSPECT in a console?) ---------------------------------------------------------------------- Comment By: jjackson (jejackson) Date: 2006-09-30 10:55 Message: Logged In: YES user_id=1497873 No, I didn't set the PYTHONINSPECT env variable. If it was set, it was by something else. ---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2006-09-30 00:47 Message: Logged In: YES user_id=849994 Did you (or someone else) perhaps set the PYTHONINSPECT environment variable? I can't imagine another cause for this problem. 
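The suspicion discussed in this thread is easy to check: setting PYTHONINSPECT to any non-empty value makes CPython enter the interactive prompt after a script finishes, exactly the symptom reported. A safe sketch (the child's stdin is closed, so it just sees EOF instead of a hung prompt):

```python
# Demonstrate what a stray PYTHONINSPECT would do: the child interpreter
# runs the script, then enters inspect mode (same as passing -i).
import os
import subprocess
import sys

env = dict(os.environ, PYTHONINSPECT="1")
proc = subprocess.run(
    [sys.executable, "-c", "print('script done')"],
    env=env,
    stdin=subprocess.DEVNULL,   # EOF, so inspect mode exits immediately
    capture_output=True,
    text=True,
)
print(proc.stdout.strip())  # prints: script done
```

Run interactively on a real terminal, the same environment variable leaves you at a `>>>` prompt after the script exits.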
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1568075&group_id=5470 From noreply at sourceforge.net Tue Jan 30 23:15:43 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Tue, 30 Jan 2007 14:15:43 -0800 Subject: [ python-Bugs-1648268 ] Parameter list mismatches (portation problem) Message-ID: Bugs item #1648268, was opened at 2007-01-30 22:15 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648268&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: ked-tao (ked-tao) Assigned to: Nobody/Anonymous (nobody) Summary: Parameter list mismatches (portation problem) Initial Comment: On the system I'm porting to(*), an application will trap if the caller does not pass the exact parameter list that the callee requires. This is causing problems running Python. One common instance where this appears to be causing problems is where functions are registered as METH_NOARGS methods. For example, in Objects/dictobject.c, dict_popitem() is declared: static PyObject *dict_popitem(dictobject *mp); However, as it is declared in the method array as METH_NOARGS, it will be called by Objects/methodobject.c:PyCFunction_Call() as "(*meth)(self, NULL)" (i.e., an extra NULL parameter is passed for some reason). This will fail on my target system. I've no problem submitting a patch for this (dictobject.c is by no means the only place this is happening - it's just the first one encountered because it's used so much - though some places _do_ correctly declare a second, ignored parameter). 
However, I'd like to get agreement on the correct form it should be changed to before I put the effort in to produce a patch (it's going to be a fairly tedious process to identify and fix all these). In various modules, the functions are called internally as well as being registered as METH_NOARGS methods. Therefore, the change can either be: static PyObject *foo(PyObject *self) { ... } static PyObject *foo_noargs(PyObject *self, void *noargs_null) { return foo(self); } ... where 'foo' is called internally and 'foo_noargs' is registered as a METH_NOARGS method. or: static PyObject *foo(PyObject *self, void *noargs_null) { ... } ... and any internal calls in the module have to pass a second, NULL, argument in each call. The former favours internal module calls over METH_NOARGS calls, the latter penalises them. Which is preferred? Should this be raised on a different forum? Does anyone care? ;) Thanks, Kev. (*) Details on request. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648268&group_id=5470 From noreply at sourceforge.net Wed Jan 31 14:57:41 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 05:57:41 -0800 Subject: [ python-Bugs-1648890 ] HP-UX: ld -Wl,+b... Message-ID: Bugs item #1648890, was opened at 2007-01-31 14:57 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648890&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: ld -Wl,+b... 
Initial Comment: On HP-UX 11.* (here: 11.23), configure chooses "ld -b" for extension modules like unicodedata.so. My $LDFLAGS contains instructions like "-Wl,+b" (run-time search path for shared libs). This is correct, because LDFLAGS should be passed to the compiler. distutils compiles the extension modules with "cc" (I need to use the native compiler), then it links with ld -b $(LDFLAGS) -I.... ... This means that options like -Wl, and -I are passed to the linker! To solve this problem quickly, I propose to modify configure. If LDSHARED="cc -b", Python 2.5 compiles. Though this works very well with current HP-UX compilers, it does not work with ancient HP-UX compiler suites. Maybe there should be a test in configure in order to see if LDSHARED works. If you really want to support old HP-UX compilers, distutils should not - pass $LDFLAGS containing "-Wl," to "ld" nor - call the linker with -I. This is the current state of the linker call: ld -b -L/usr/local/python/2.5/lib/hpux32 -Wl,+b,/usr/local/python/2.5/lib/hpux32:/usr/local/devel/readline/5.1-static/lib/hpux32:/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/readline/5.1-static/lib/hpux32 -Wl,+b,/usr/local/devel/readline/5.1-static/lib/hpux32:/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/ssl/lib -Wl,+b,/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/bzip-1.0.3/lib/hpux32 -Wl,+b,/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/berkeleydb/4.3.29-static/lib -Wl,+b,/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/lib/hpux32 -Wl,+b,/usr/local/lib/hpux32 -I. 
-I/soft/python/python-2.5/Python-2.5/Include -I/usr/local/include -I/usr/local/devel/berkeleydb/4.3.29-static/include -I/usr/local/devel/bzip-1.0.3/include -I/usr/local/ssl/include -I/usr/local/devel/readline/5.1-static/include build/temp.hp-ux-B.11.23-ia64-2.5/soft/python/python-2.5/Python-2.5/Modules/readline.o -L/usr/lib/termcap -L/usr/local/python/2.5/lib -L/usr/local/lib -lreadline -o build/lib.hp-ux-B.11.23-ia64-2.5/readline.so ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648890&group_id=5470 From noreply at sourceforge.net Wed Jan 31 15:33:21 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 06:33:21 -0800 Subject: [ python-Bugs-1648923 ] HP-UX: -lcurses missing for readline.so Message-ID: Bugs item #1648923, was opened at 2007-01-31 15:33 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648923&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: -lcurses missing for readline.so Initial Comment: The readline extension seemed to be built without problems, but afterwards, this line appears: /usr/lib/hpux32/dld.so: Unsatisfied code symbol 'tgetent' in load module 'build/lib.hp-ux-B.11.23-ia64-2.5/readline.so'. I have fixed this by manually rebuilding the above file with -lcurses. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648923&group_id=5470 From noreply at sourceforge.net Wed Jan 31 16:07:02 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 07:07:02 -0800 Subject: [ python-Bugs-1648957 ] HP-UX: _ctypes/libffi/src/ia64/ffi/__attribute__/native cc Message-ID: Bugs item #1648957, was opened at 2007-01-31 16:07 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648957&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: _ctypes/libffi/src/ia64/ffi/__attribute__/native cc Initial Comment: _ctypes/libffi/src/ia64/ffi.c uses __attribute__((...)) twice. Consequently, ffi.c does not compile with the native compiler (cc: HP C/aC++ B3910B A.06.12 [Aug 17 2006]). 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648957&group_id=5470 From noreply at sourceforge.net Wed Jan 31 16:14:08 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 07:14:08 -0800 Subject: [ python-Bugs-1633863 ] AIX: configure ignores $CC Message-ID: Bugs item #1633863, was opened at 2007-01-12 09:46 Message generated for change (Settings changed) made by jabt You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) >Summary: AIX: configure ignores $CC Initial Comment: CC=xlc_r ./configure does not work on AIX-5.1, because configure unconditionally sets $CC to "cc_r": case $ac_sys_system in AIX*) CC=cc_r without_gcc=;; It would be better to leave $CC and just add "-qthreaded" to $CFLAGS. Furthermore, much of the C source code of Python uses C++/C99 comments. This is an error with the standard AIX compiler. Please add the compiler flag "-qcpluscmt". An alternative would be to use a default of "xlc_r" for CC on AIX. This calls the compiler in a mode that both accepts C++ comments and generates reentrant code. Regards, Johannes ---------------------------------------------------------------------- Comment By: Johannes Abt (jabt) Date: 2007-01-30 14:07 Message: Logged In: YES user_id=1563402 Originator: YES Sorry about the C++ comments... all the C++ comments I have found concern Windows, PC or Darwin. I must have confused this with another project I have been compiling. Though there is still the issue of setting $CC. 
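The behaviour requested above would amount to a configure invocation along these lines (a build-configuration sketch only, untested here; the xlc_r flags are taken from the report):

```shell
# Honour the user's compiler on AIX instead of the hard-coded cc_r.
# -qthreaded generates reentrant code; -qcpluscmt accepts // comments.
CC=xlc_r CFLAGS="-qthreaded -qcpluscmt" ./configure
```

With the 2.5-era configure this is exactly what does not work, since the AIX case in configure overwrites $CC unconditionally.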
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2007-01-19 06:47 Message: Logged In: YES user_id=33168 Originator: NO There shouldn't be any C++ comments in the Python code. If there are, it is a mistake. I did see some get removed recently. Could you let me know where you see the C++ comments? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1633863&group_id=5470 From noreply at sourceforge.net Wed Jan 31 16:13:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 07:13:24 -0800 Subject: [ python-Bugs-1648960 ] HP-UX11.23: module zlib missing Message-ID: Bugs item #1648960, was opened at 2007-01-31 16:13 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648960&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX11.23: module zlib missing Initial Comment: The build process does not build module zlib, so zipimport does not work, either. /usr/local/lib/hpux32/libz.so exists. configure tells me: checking for inflateCopy in -lz... 
yes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648960&group_id=5470 From noreply at sourceforge.net Wed Jan 31 17:20:17 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 08:20:17 -0800 Subject: [ python-Bugs-1649011 ] HP-UX: compiler warnings: alignment Message-ID: Bugs item #1649011, was opened at 2007-01-31 17:20 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649011&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.6 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: compiler warnings: alignment Initial Comment: On HP-UX 11.23 with the native compiler (cc: HP C/aC++ B3910B A.06.12 [Aug 17 2006]), I get dozens of these warnings: /soft/python/python-2.5/Python-2.5/Modules/_ctypes/_ctypes.c", line 2885: warning #4232-D: conversion from "PyObject *" to a more strictly aligned type "CDataObject *" may cause misaligned access ob = (CDataObject *)GenericCData_new(type, args, kwds); It does not seem to be very serious. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649011&group_id=5470 From noreply at sourceforge.net Wed Jan 31 19:25:27 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 10:25:27 -0800 Subject: [ python-Bugs-1649098 ] non-standard: array[0] Message-ID: Bugs item #1649098, was opened at 2007-01-31 19:25 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649098&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: non-standard: array[0] Initial Comment: in Modules/_ctypes/ctypes.h: typedef struct { [..] ffi_type *atypes[0]; } ffi_info; AFAIK, arrays must be of size > 0. _Most_ compilers accept this, but not all (especially my HP-UX compiler). Please change *atypes[0] to *atypes[1]! 
Bye, Johannes ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649098&group_id=5470 From noreply at sourceforge.net Wed Jan 31 19:36:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 10:36:12 -0800 Subject: [ python-Bugs-1649100 ] Arithmetics behaving strange Message-ID: Bugs item #1649100, was opened at 2007-01-31 19:36 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sascha Peilicke (lastmohican) Assigned to: Nobody/Anonymous (nobody) Summary: Arithmetics behaving strange Initial Comment: Hello, i just found some strange things going around, could you please tell me if this is desired: >>> 3 + 4 7 >>> 3 +- 4 -1 >>> 3 +-+ 4 -1 >>> 3 +-+- 4 7 >>> 3 +-+-+ 4 7 >>> 3 +-+-+- 4 -1 >>> 3 +-+-+-+ 4 -1 >>> 3 +-+-+-+- 4 7 This was found in Python 2.4.4c1. And also another one: >>> _ Traceback (most recent call last): File "", line 1, in ? NameError: name '_' is not defined >>> 3 == 3 True >>> _ True >>> 3 3 >>> _ 3 So what the hell is '_' something very strange indeed. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 From noreply at sourceforge.net Wed Jan 31 19:39:18 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 10:39:18 -0800 Subject: [ python-Bugs-1649100 ] Arithmetics behaving strange Message-ID: Bugs item #1649100, was opened at 2007-01-31 19:36 Message generated for change (Comment added) made by lastmohican You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sascha Peilicke (lastmohican) Assigned to: Nobody/Anonymous (nobody) Summary: Arithmetics behaving strange Initial Comment: Hello, i just found some strange things going around, could you please tell me if this is desired: >>> 3 + 4 7 >>> 3 +- 4 -1 >>> 3 +-+ 4 -1 >>> 3 +-+- 4 7 >>> 3 +-+-+ 4 7 >>> 3 +-+-+- 4 -1 >>> 3 +-+-+-+ 4 -1 >>> 3 +-+-+-+- 4 7 This was found in Python 2.4.4c1. And also another one: >>> _ Traceback (most recent call last): File "", line 1, in ? NameError: name '_' is not defined >>> 3 == 3 True >>> _ True >>> 3 3 >>> _ 3 So what the hell is '_' something very strange indeed. ---------------------------------------------------------------------- >Comment By: Sascha Peilicke (lastmohican) Date: 2007-01-31 19:39 Message: Logged In: YES user_id=1465593 Originator: YES I also found these working on the following: Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Seems to be a common 'problem' ? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 From noreply at sourceforge.net Wed Jan 31 19:47:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 10:47:29 -0800 Subject: [ python-Bugs-1649100 ] Arithmetics behaving strange and magic underscore Message-ID: Bugs item #1649100, was opened at 2007-01-31 19:36 Message generated for change (Settings changed) made by lastmohican You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Sascha Peilicke (lastmohican) Assigned to: Nobody/Anonymous (nobody) >Summary: Arithmetics behaving strange and magic underscore Initial Comment: Hello, i just found some strange things going around, could you please tell me if this is desired: >>> 3 + 4 7 >>> 3 +- 4 -1 >>> 3 +-+ 4 -1 >>> 3 +-+- 4 7 >>> 3 +-+-+ 4 7 >>> 3 +-+-+- 4 -1 >>> 3 +-+-+-+ 4 -1 >>> 3 +-+-+-+- 4 7 This was found in Python 2.4.4c1. And also another one: >>> _ Traceback (most recent call last): File "", line 1, in ? NameError: name '_' is not defined >>> 3 == 3 True >>> _ True >>> 3 3 >>> _ 3 So what the hell is '_' something very strange indeed. ---------------------------------------------------------------------- Comment By: Sascha Peilicke (lastmohican) Date: 2007-01-31 19:39 Message: Logged In: YES user_id=1465593 Originator: YES I also found these working on the following: Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Seems to be a common 'problem' ? 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 From noreply at sourceforge.net Wed Jan 31 20:43:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 11:43:37 -0800 Subject: [ python-Bugs-1648957 ] HP-UX: _ctypes/libffi/src/ia64/ffi/__attribute__/native cc Message-ID: Bugs item #1648957, was opened at 2007-01-31 16:07 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648957&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Build Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: _ctypes/libffi/src/ia64/ffi/__attribute__/native cc Initial Comment: _ctypes/libffi/src/ia64/ffi.c uses __attribute__((...)) twice. Consequently, ffi.c does not compile with the native compiler (cc: HP C/aC++ B3910B A.06.12 [Aug 17 2006]). ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-31 20:43 Message: Logged In: YES user_id=11105 Originator: NO I tried that on a HP testdrive machine. While the _ctypes extension builds fine with GCC (*), it does indeed not compile with the native C compiler. But cc not understanding __attribute__ is only the first problem; if it is removed there are lots of other compilation problems. Unless someone can provide a patch, I'll close this as 'won't fix'. (*) _ctypes_test.so is also built but fails to load as shared library, because the symbol __divsf3 is not defined. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648957&group_id=5470 From noreply at sourceforge.net Wed Jan 31 21:15:25 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 12:15:25 -0800 Subject: [ python-Bugs-1582742 ] Python is dumping core after the test test_ctypes Message-ID: Bugs item #1582742, was opened at 2006-10-23 11:42 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: shashi (shashikala) Assigned to: Thomas Heller (theller) Summary: Python is dumping core after the test test_ctypes Initial Comment: Hi, I am building Python-2.5 on HPUX Itanium. The compilation is done without any error, but while testing the same using gmake test it is dumping core telling "Segmentation Fault" after the test test_ctypes. Please help me in resolving the above issue. I am attaching the output of gmake test. Thanks in advance, ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-31 21:15 Message: Logged In: YES user_id=11105 Originator: NO I finally found time (and energy) to try out the td176 HPUX host on HP testdrive. I downloaded the python25.tar.bz2 snapshot from svn.python.org, and built it with the installed gcc 3.4.3. First, I got errors in the ctypes tests because the _ctypes_test extension/shared library could not be loaded because of a missing symbol __divsf3. 
Googling around I found http://gcc.gnu.org/onlinedocs/gccint/Libgcc.html which mentions a GCC runtime library libgcc.a (see the link 'soft float library routines' on this page). When this library is specified when building _ctypes_test.so, all ctypes unittests pass. Without any crash. It is strange: to link against the libgcc.a library it seems necessary to specify the location of the library '/usr/local/lib/gcc/ia64-hp-hpux11.23/3.4.3/' - no idea why. Can some HPUX guru provide some insight? The attached patch to setup.py is what was needed, but it is a hack of course. File Added: setup.py.patch ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2006-11-02 22:02 Message: Logged In: YES user_id=11105 Neal, I see no connection between the code that you show and the stack dump. For the failure when importing ctypes.test.test_cfuncs it seems that a library (?) is missing that _ctypes_test.so requires. Any idea? (I know that HP offers shell access to HPUX boxes, but I hesitate to try that out...). 
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-29 03:05 Message: Logged In: YES user_id=33168 This is the code that crashes: from ctypes import * print cast(c_void_p(0), POINTER(c_int)) *** #0 ffi_call_unix+0x20 () at trunk/Modules/_ctypes/libffi/src/ia64/unix.S:63 #1 0x2000000079194d30:0 in ffi_call (cif=0x7fffe020, fn=0x7913a860, rvalue=0x7fffe090, avalue=0x7fffe070) at trunk/Modules/_ctypes/libffi/src/ia64/ffi.c:372 #2 0x20000000791762f0:0 in _call_function_pointer (flags=4101, pProc=0x7913a860, avalues=0x7fffe070, atypes=0x7fffe050, restype=0x40081de8, resmem=0x7fffe090, argcount=3) at trunk/Modules/_ctypes/callproc.c:665 #3 0x20000000791781d0:0 in _CallProc (pProc=0x7913a860, argtuple=0x401cdd78, flags=4101, argtypes=0x401ef7b8, restype=0x400eacd8, checker=0x0) at trunk/Modules/_ctypes/callproc.c:1001 #4 0x2000000079165350:0 in CFuncPtr_call (self=0x4007abe8, inargs=0x401cdd78, kwds=0x0) at trunk/Modules/_ctypes/_ctypes.c:3364 *** Also note there are a bunch of errors like this: Warning: could not import ctypes.test.test_cfuncs: Unsatisfied code symbol '__divsf3' in load module 'trunk/build/lib.hp-ux-B.11.23-ia64-2.6/_ctypes_test.so'. ---------------------------------------------------------------------- Comment By: Martin v. Löwis (loewis) Date: 2006-10-25 10:41 Message: Logged In: YES user_id=21627 You will need to run Python in a debugger and find out where it crashes. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 From noreply at sourceforge.net Wed Jan 31 21:22:24 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 12:22:24 -0800 Subject: [ python-Bugs-1649100 ] Arithmetics behaving strange and magic underscore Message-ID: Bugs item #1649100, was opened at 2007-01-31 18:36 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.4 >Status: Closed >Resolution: Invalid Priority: 5 Private: No Submitted By: Sascha Peilicke (lastmohican) Assigned to: Nobody/Anonymous (nobody) Summary: Arithmetics behaving strange and magic underscore Initial Comment: Hello, i just found some strange things going around, could you please tell me if this is desired: >>> 3 + 4 7 >>> 3 +- 4 -1 >>> 3 +-+ 4 -1 >>> 3 +-+- 4 7 >>> 3 +-+-+ 4 7 >>> 3 +-+-+- 4 -1 >>> 3 +-+-+-+ 4 -1 >>> 3 +-+-+-+- 4 7 This was found in Python 2.4.4c1. And also another one: >>> _ Traceback (most recent call last): File "", line 1, in ? NameError: name '_' is not defined >>> 3 == 3 True >>> _ True >>> 3 3 >>> _ 3 So what the hell is '_' something very strange indeed. ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-31 20:22 Message: Logged In: YES user_id=849994 Originator: NO In your first example, all + and - except the first + are seen as unary operators and modify the 4. In your second example: "_" is a convenience variable in the interactive interpreter and always bound to the latest expression result. 
At startup, there is no such result. ---------------------------------------------------------------------- Comment By: Sascha Peilicke (lastmohican) Date: 2007-01-31 18:39 Message: Logged In: YES user_id=1465593 Originator: YES I also found these working on the following: Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Seems to be a common 'problem' ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 From noreply at sourceforge.net Wed Jan 31 22:00:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 13:00:26 -0800 Subject: [ python-Bugs-1582742 ] Python is dumping core after the test test_ctypes Message-ID: Bugs item #1582742, was opened at 2006-10-23 11:42 Message generated for change (Comment added) made by theller You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: shashi (shashikala) Assigned to: Thomas Heller (theller) Summary: Python is dumping core after the test test_ctypes Initial Comment: Hi, I am building Python-2.5 on HPUX Itanium. The compilation is done without any error, but while testing the same using gmake test it is dumping core telling "Segmentation Fault" after the test test_ctypes. Please help me in resolving the above issue. I am attaching the output of gmake test. 
Thanks in advance, ---------------------------------------------------------------------- >Comment By: Thomas Heller (theller) Date: 2007-01-31 22:00 Message: Logged In: YES user_id=11105 Originator: NO I did also try the Python 2.5 release tarball and could not reproduce the bug. Machine info: bash-3.00$ uname -a HP-UX td176 B.11.23 U ia64 1928826293 unlimited-user license bash-3.00$ gcc --version gcc (GCC) 3.4.3 Copyright (C) 2004 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. bash-3.00$ ./python Python 2.5 (r25:51908, Jan 31 2007, 15:56:22) [GCC 3.4.3] on hp-ux11 Type "help", "copyright", "credits" or "license" for more information. >>> bash-3.00$ ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-31 21:15 Message: Logged In: YES user_id=11105 Originator: NO I finally found time (and energy) to try out the td176 HPUX host on HP testdrive. I downloaded the python25.tar.bz2 snapshot from svn.python.org, and built it with the installed gcc 3.4.3. First, I got errors in the ctypes tests because the _ctypes_test extension/shared library could not be loaded because of a missing symbol __divsf3. Googling around I found http://gcc.gnu.org/onlinedocs/gccint/Libgcc.html which mentions a GCC runtime library libgcc.a (see the link 'soft float library routines' on ths page). When this library is specified when building _ctypes_test.so, all ctypes unittests pass. Without any crash. It is strange, to link against the libgcc.a library it seems needed to specify the location of the library '/usr/local/lib/gcc/ia64-hp-hpux11.23/3.4.3/' - no idea why. Can some HPUX guru provide some insight? The attached patch to setup.py is what was needed, but it is a hack of course. 
File Added: setup.py.patch ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2006-11-02 22:02 Message: Logged In: YES user_id=11105 Neal, I see no connection between the code that you show and the stack dump. For the failure when importing ctypes.test.test_cfuncs it seems that a library (?) is missing that _ctypes_test.so requires. Any idea? (I know that HP offers shell access to HPUX boxes, but I hesitate to try that out...). ---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-29 03:05 Message: Logged In: YES user_id=33168 This is the code that crashes: from ctypes import * print cast(c_void_p(0), POINTER(c_int)) *** #0 ffi_call_unix+0x20 () at trunk/Modules/_ctypes/libffi/src/ia64/unix.S:63 #1 0x2000000079194d30:0 in ffi_call (cif=0x7fffe020, fn=0x7913a860, rvalue=0x7fffe090, avalue=0x7fffe070) at trunk/Modules/_ctypes/libffi/src/ia64/ffi.c:372 #2 0x20000000791762f0:0 in _call_function_pointer (flags=4101, pProc=0x7913a860, avalues=0x7fffe070, atypes=0x7fffe050, restype=0x40081de8, resmem=0x7fffe090, argcount=3) at trunk/Modules/_ctypes/callproc.c:665 #3 0x20000000791781d0:0 in _CallProc (pProc=0x7913a860, argtuple=0x401cdd78, flags=4101, argtypes=0x401ef7b8, restype=0x400eacd8, checker=0x0) at trunk/Modules/_ctypes/callproc.c:1001 #4 0x2000000079165350:0 in CFuncPtr_call (self=0x4007abe8, inargs=0x401cdd78, kwds=0x0) at trunk/Modules/_ctypes/_ctypes.c:3364 *** Also note there are a bunch of errors like this: Warning: could not import ctypes.test.test_cfuncs: Unsatisfied code symbol '__divsf3' in load module 'trunk/build/lib.hp-ux-B.11.23-ia64-2.6/_ctypes_test.so'. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-10-25 10:41 Message: Logged In: YES user_id=21627 You will need to run Python in a debugger and find out where it crashes. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 From noreply at sourceforge.net Wed Jan 31 22:00:16 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 13:00:16 -0800 Subject: [ python-Bugs-933670 ] pty.fork() leaves slave fd's open on Solaris Message-ID: Bugs item #933670, was opened at 2004-04-12 10:21 Message generated for change (Comment added) made by dmeranda You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=933670&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Scott Lowrey (slowrey) Assigned to: Nobody/Anonymous (nobody) Summary: pty.fork() leaves slave fd's open on Solaris Initial Comment: On a Solaris 2.8 system, slave file descriptors are left open after the child process is gone and the master has been closed. The pty.fork() function attempts to use os.forkpty() first. When that fails (apparently the os module does not provide forkpty() on Solaris?), it uses openpty() and os.fork(). openpty() returns master and slave file descriptors. Since pty.fork() only returns the master_fd, it is not clear to me how the slave would ever be closed since the caller doesn't have access to it. Perhaps pty.fork is supposed to take care of this? I am using pexpect to control my pty's, so I don't have much expertise in this area other than what I've gleaned from the code. At any rate, on a long running process used to test other programs, the open file descriptors pile up until the ulimit is reached. I've worked around this by modifying pexpect.close() to use os.close(self.child_fd + 1). A hack, I'm sure... 
:) ---------------------------------------------------------------------- Comment By: Deron Meranda (dmeranda) Date: 2007-01-31 16:00 Message: Logged In: YES user_id=847188 Originator: NO I am seeing the exact same problem under HP-UX 11.0 (python 2.5). Slave descriptors are leaked. This is a problem with Python's pty.fork(), not with pexpect. ---------------------------------------------------------------------- Comment By: HyunKook Kim (k5r2a) Date: 2004-08-27 04:07 Message: Logged In: YES user_id=604333 Thank you very much for your comments. My case is the same as yours. -platform : Solaris 5.8 -Python : 2.3.3 -pyexpect: 0.99 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=933670&group_id=5470 From noreply at sourceforge.net Wed Jan 31 22:58:30 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 13:58:30 -0800 Subject: [ python-Bugs-1582742 ] Python is dumping core after the test test_ctypes Message-ID: Bugs item #1582742, was opened at 2006-10-23 11:42 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: shashi (shashikala) Assigned to: Thomas Heller (theller) Summary: Python is dumping core after the test test_ctypes Initial Comment: Hi, I am building Python-2.5 on HPUX Itanium. The compilation is done without any error, but while testing the same using gmake test it is dumping core telling "Segmentation Fault" after the test test_ctypes. Please help me in resolving the above issue. I am attaching the output of gmake test. 
Thanks in advance, ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-31 22:58 Message: Logged In: YES user_id=21627 Originator: NO Thomas, the libgcc problem might be a gcc installation problem. Just specifying -lgcc should be enough to get libgcc linked in. Furthermore, depending on how the linking is done (gcc -shared?), it shouldn't be necessary *at all* to provide -lgcc. This isn't so much a HPUX question but more a gcc question: if you link with gcc, it *ought* to work (if you link with ld(1), you are on your own). ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-31 22:00 Message: Logged In: YES user_id=11105 Originator: NO I did also try the Python 2.5 release tarball and could not reproduce the bug. Machine info: bash-3.00$ uname -a HP-UX td176 B.11.23 U ia64 1928826293 unlimited-user license bash-3.00$ gcc --version gcc (GCC) 3.4.3 Copyright (C) 2004 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. bash-3.00$ ./python Python 2.5 (r25:51908, Jan 31 2007, 15:56:22) [GCC 3.4.3] on hp-ux11 Type "help", "copyright", "credits" or "license" for more information. >>> bash-3.00$ ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2007-01-31 21:15 Message: Logged In: YES user_id=11105 Originator: NO I finally found time (and energy) to try out the td176 HPUX host on HP testdrive. I downloaded the python25.tar.bz2 snapshot from svn.python.org, and built it with the installed gcc 3.4.3. First, I got errors in the ctypes tests because the _ctypes_test extension/shared library could not be loaded because of a missing symbol __divsf3. 
Googling around I found http://gcc.gnu.org/onlinedocs/gccint/Libgcc.html which mentions a GCC runtime library libgcc.a (see the link 'soft float library routines' on this page). When this library is specified when building _ctypes_test.so, all ctypes unittests pass, without any crash. It is strange that, to link against the libgcc.a library, it seems necessary to specify the location of the library '/usr/local/lib/gcc/ia64-hp-hpux11.23/3.4.3/' - no idea why. Can some HPUX guru provide some insight? The attached patch to setup.py is what was needed, but it is a hack of course. File Added: setup.py.patch ---------------------------------------------------------------------- Comment By: Thomas Heller (theller) Date: 2006-11-02 22:02 Message: Logged In: YES user_id=11105 Neal, I see no connection between the code that you show and the stack dump. For the failure when importing ctypes.test.test_cfuncs it seems that a library (?) is missing that _ctypes_test.so requires. Any idea? (I know that HP offers shell access to HPUX boxes, but I hesitate to try that out...). 
---------------------------------------------------------------------- Comment By: Neal Norwitz (nnorwitz) Date: 2006-10-29 03:05 Message: Logged In: YES user_id=33168 This is the code that crashes: from ctypes import * print cast(c_void_p(0), POINTER(c_int)) *** #0 ffi_call_unix+0x20 () at trunk/Modules/_ctypes/libffi/src/ia64/unix.S:63 #1 0x2000000079194d30:0 in ffi_call (cif=0x7fffe020, fn=0x7913a860, rvalue=0x7fffe090, avalue=0x7fffe070) at trunk/Modules/_ctypes/libffi/src/ia64/ffi.c:372 #2 0x20000000791762f0:0 in _call_function_pointer (flags=4101, pProc=0x7913a860, avalues=0x7fffe070, atypes=0x7fffe050, restype=0x40081de8, resmem=0x7fffe090, argcount=3) at trunk/Modules/_ctypes/callproc.c:665 #3 0x20000000791781d0:0 in _CallProc (pProc=0x7913a860, argtuple=0x401cdd78, flags=4101, argtypes=0x401ef7b8, restype=0x400eacd8, checker=0x0) at trunk/Modules/_ctypes/callproc.c:1001 #4 0x2000000079165350:0 in CFuncPtr_call (self=0x4007abe8, inargs=0x401cdd78, kwds=0x0) at trunk/Modules/_ctypes/_ctypes.c:3364 *** Also note there are a bunch of errors like this: Warning: could not import ctypes.test.test_cfuncs: Unsatisfied code symbol '__divsf3' in load module 'trunk/build/lib.hp-ux-B.11.23-ia64-2.6/_ctypes_test.so'. ---------------------------------------------------------------------- Comment By: Martin v. L?wis (loewis) Date: 2006-10-25 10:41 Message: Logged In: YES user_id=21627 You will need to run Python in a debugger and find out where it crashes. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1582742&group_id=5470 From noreply at sourceforge.net Wed Jan 31 23:00:51 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 14:00:51 -0800 Subject: [ python-Bugs-1648890 ] HP-UX: ld -Wl,+b... 
Message-ID: Bugs item #1648890, was opened at 2007-01-31 14:57 Message generated for change (Comment added) made by loewis You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648890&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Distutils Group: Python 2.5 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Johannes Abt (jabt) Assigned to: Nobody/Anonymous (nobody) Summary: HP-UX: ld -Wl,+b... Initial Comment: On HP-UX 11.* (here: 11.23), configure chooses "ld -b" for extension modules like unicodedata.so. My $LDFLAGS contains instructions like "-Wl,+b" (run-time search path for shared libs). This is correct, because LDFLAGS should be passed to the compiler. distutils compiles the extension modules with "cc" (I need to use the native compiler), then it links with ld -b $(LDFLAGS) -I.... ... These means that options like -Wl, and -I are passed to the linker! To solve this problem quickly, I propose to modify configure. If LDSHARED="cc -b", Python 2.5 compiles. Though this works very godd with with current HP-UX compilers, it does not work with ancient HP-UX compiler suites. Maybe there should be a test in configure in order to see if LDSHARED works. If you really want to support old HP-UX compilers, distutils should not - pass $LDFLAGS containing "-Wl," to "ld" nor - call the linker with -I. 
This is the current state of the linker call: ld -b -L/usr/local/python/2.5/lib/hpux32 -Wl,+b,/usr/local/python/2.5/lib/hpux32:/usr/local/devel/readline/5.1-static/lib/hpux32:/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/readline/5.1-static/lib/hpux32 -Wl,+b,/usr/local/devel/readline/5.1-static/lib/hpux32:/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/ssl/lib -Wl,+b,/usr/local/ssl/lib:/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/bzip-1.0.3/lib/hpux32 -Wl,+b,/usr/local/devel/bzip-1.0.3/lib/hpux32:/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/devel/berkeleydb/4.3.29-static/lib -Wl,+b,/usr/local/devel/berkeleydb/4.3.29-static/lib:/usr/local/lib/hpux32 -L/usr/local/lib/hpux32 -Wl,+b,/usr/local/lib/hpux32 -I. -I/soft/python/python-2.5/Python-2.5/Include -I/usr/local/include -I/usr/local/devel/berkeleydb/4.3.29-static/include -I/usr/local/devel/bzip-1.0.3/include -I/usr/local/ssl/include -I/usr/local/devel/readline/5.1-static/include build/temp.hp-ux-B.11.23-ia64-2.5/soft/python/python-2.5/Python-2.5/Modules/readline.o -L/usr/lib/termcap -L/usr/local/python/2.5/lib -L/usr/local/lib -lreadline -o build/lib.hp-ux-B.11.23-ia64-2.5/readline.so ---------------------------------------------------------------------- >Comment By: Martin v. L?wis (loewis) Date: 2007-01-31 23:00 Message: Logged In: YES user_id=21627 Originator: NO I personally wouldn't have any issues with breaking old HPUX installations if that helps current ones. 
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1648890&group_id=5470 From noreply at sourceforge.net Wed Jan 31 23:07:17 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 14:07:17 -0800 Subject: [ python-Bugs-1649100 ] Arithmetics behaving strange and magic underscore Message-ID: Bugs item #1649100, was opened at 2007-01-31 19:36 Message generated for change (Comment added) made by lastmohican You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Parser/Compiler Group: Python 2.4 Status: Closed Resolution: Invalid Priority: 5 Private: No Submitted By: Sascha Peilicke (lastmohican) Assigned to: Nobody/Anonymous (nobody) Summary: Arithmetics behaving strange and magic underscore Initial Comment: Hello, i just found some strange things going around, could you please tell me if this is desired: >>> 3 + 4 7 >>> 3 +- 4 -1 >>> 3 +-+ 4 -1 >>> 3 +-+- 4 7 >>> 3 +-+-+ 4 7 >>> 3 +-+-+- 4 -1 >>> 3 +-+-+-+ 4 -1 >>> 3 +-+-+-+- 4 7 This was found in Python 2.4.4c1. And also another one: >>> _ Traceback (most recent call last): File "", line 1, in ? NameError: name '_' is not defined >>> 3 == 3 True >>> _ True >>> 3 3 >>> _ 3 So what the hell is '_' something very strange indeed. ---------------------------------------------------------------------- >Comment By: Sascha Peilicke (lastmohican) Date: 2007-01-31 23:07 Message: Logged In: YES user_id=1465593 Originator: YES Ok, that clarifies something, but I don't think this should be valid syntax. Maybe it is not a real bug, but definitely a gotcha. 
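[Archive editor's note: the parsing behavior Georg describes can be checked mechanically — after the first binary `+`, every further `+`/`-` is a unary prefix applied to the `4`, so only the parity of the `-` signs matters. A small sketch using plain `eval` (note that `_` itself is only bound by the interactive interpreter's displayhook, which is why it is undefined in a script or at startup):]

```python
# 3 +-+- 4 parses as 3 + (-(+(-4))): an even number of '-' signs
# leaves the 4 positive (result 7), an odd number negates it (result -1).
for expr, expected in [
    ("3 +- 4", -1),       # one unary minus
    ("3 +-+- 4", 7),      # two unary minuses
    ("3 +-+-+- 4", -1),   # three unary minuses
]:
    assert eval(expr) == expected, expr
print("all parities check out")
```

This is why the tracker item was closed as invalid: the expressions are ordinary, if unusual-looking, applications of the unary operator grammar.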
---------------------------------------------------------------------- Comment By: Georg Brandl (gbrandl) Date: 2007-01-31 21:22 Message: Logged In: YES user_id=849994 Originator: NO In your first example, all + and - except the first + are seen as unary operators and modify the 4. In your second example: "_" is a convenience variable in the interactive interpreter and always bound to the latest expression result. At startup, there is no such result. ---------------------------------------------------------------------- Comment By: Sascha Peilicke (lastmohican) Date: 2007-01-31 19:39 Message: Logged In: YES user_id=1465593 Originator: YES I also found these working on the following: Python 2.5 (r25:51908, Oct 6 2006, 15:22:41) [GCC 4.1.2 20060928 (prerelease) (Ubuntu 4.1.1-13ubuntu4)] on linux2 Seems to be a common 'problem' ? ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649100&group_id=5470 From noreply at sourceforge.net Wed Jan 31 23:16:26 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 14:16:26 -0800 Subject: [ python-Bugs-1649238 ] potential class with C++ in ceval.h Message-ID: Bugs item #1649238, was opened at 2007-01-31 16:16 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649238&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: thechao (jaroslov) Assigned to: Nobody/Anonymous (nobody) Summary: potential class with C++ in ceval.h Initial Comment: There is a potential clash with future revisions of C++ in the file "ceval.h". 
On lines 52, 54, and 57 the word "where" is used. Future versions of C++ will have a "where" keyword (for concepts). I have a diff file (attached) that changes the word "where" to "location". I'm not sure if this is an appropriate name, but it certainly compiles. ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649238&group_id=5470 From noreply at sourceforge.net Wed Jan 31 23:17:29 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 14:17:29 -0800 Subject: [ python-Bugs-1649238 ] potential clash with C++ in ceval.h Message-ID: Bugs item #1649238, was opened at 2007-01-31 16:16 Message generated for change (Settings changed) made by jaroslov You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649238&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Interpreter Core Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: thechao (jaroslov) Assigned to: Nobody/Anonymous (nobody) >Summary: potential clash with C++ in ceval.h Initial Comment: There is a potential clash with future revisions of C++ in the file "ceval.h". On lines 52, 54, and 57 the word "where" is used. Future versions of C++ will have a "where" keyword (for concepts). I have a diff file (attached) that changes the word "where" to "location". I'm not sure if this is an appropriate name, but it certainly compiles. 
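[Archive editor's note: a Python analogue of the hazard this report describes — an ordinary identifier colliding with a keyword added in a later language revision. (As it turned out, C++ concepts eventually adopted `requires` rather than `where`, but the defensive rename is the same idea.) The snippet below is illustrative only, not the actual ceval.h code:]

```python
import keyword

# "async" was an ordinary identifier until it became a keyword in
# Python 3.7; code that used it as a name stopped parsing, and the fix
# was exactly the rename proposed here (where -> location).
assert keyword.iskeyword("async")       # reserved in modern Python
assert not keyword.iskeyword("where")   # 'where' never became a keyword

old_code = "async = True"               # legal pre-3.7, SyntaxError now
try:
    compile(old_code, "<old>", "exec")
except SyntaxError:
    print("identifier collided with a new keyword")
```

Renaming ahead of time, as the attached diff does, costs nothing and removes the risk for embedders compiling the header as C++.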
---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1649238&group_id=5470 From noreply at sourceforge.net Wed Jan 31 23:53:37 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Wed, 31 Jan 2007 14:53:37 -0800 Subject: [ python-Bugs-933670 ] pty.fork() leaves slave fd's open on Solaris Message-ID: Bugs item #933670, was opened at 2004-04-12 14:21 Message generated for change (Comment added) made by gbrandl You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=933670&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: Python Library Group: Python 2.3 >Status: Closed >Resolution: Out of Date Priority: 5 Private: No Submitted By: Scott Lowrey (slowrey) Assigned to: Nobody/Anonymous (nobody) Summary: pty.fork() leaves slave fd's open on Solaris Initial Comment: On a Solaris 2.8 system, slave file descriptors are left open after the child process is gone and the master has been closed. The pty.fork() function attempts to use os.forkpty() first. When that fails (apparently the os module does not provide forkpty() on Solaris?), it uses openpty() and os.fork(). openpty() returns master and slave file descriptors. Since pty.fork() only returns the master_fd, it is not clear to me how the slave would ever be closed since the caller doesn't have access to it. Perhaps pty.fork is supposed to take care of this? I am using pexpect to control my pty's, so I don't have much expertise in this area other than what I've gleaned from the code. At any rate, on a long running process used to test other programs, the open file descriptors pile up until the ulimit is reached. I've worked around this by modifying pexpect.close() to use os.close(self.child_fd + 1). A hack, I'm sure... 
:) ---------------------------------------------------------------------- >Comment By: Georg Brandl (gbrandl) Date: 2007-01-31 22:53 Message: Logged In: YES user_id=849994 Originator: NO This is already fixed in Subversion, see patch #783050. ---------------------------------------------------------------------- Comment By: Deron Meranda (dmeranda) Date: 2007-01-31 21:00 Message: Logged In: YES user_id=847188 Originator: NO I am seeing the exact same problem under HP-UX 11.0 (python 2.5). Slave descriptors are leaked. This is a problem with Python's pty.fork(), not with pexpect. ---------------------------------------------------------------------- Comment By: HyunKook Kim (k5r2a) Date: 2004-08-27 08:07 Message: Logged In: YES user_id=604333 Thank you very much for your comments My Case is same to yours. -platform : Solaris 5.8 -Python : 2.3.3 -pyexpect: 0.99 ---------------------------------------------------------------------- You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=933670&group_id=5470 From noreply at sourceforge.net Sat Jan 6 01:38:12 2007 From: noreply at sourceforge.net (SourceForge.net) Date: Sat, 06 Jan 2007 00:38:12 -0000 Subject: [ python-Bugs-1629158 ] Lots of errors reported by valgrind in 2.4.4 and 2.5 Message-ID: Bugs item #1629158, was opened at 2007-01-05 16:38 Message generated for change (Tracker Item Submitted) made by Item Submitter You can respond by visiting: https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629158&group_id=5470 Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. 
Category: Python Library Group: Python 2.4 Status: Open Resolution: None Priority: 5 Private: No Submitted By: Anton Tropashko (atropashko) Assigned to: Nobody/Anonymous (nobody) Summary: Lots of errors reported by valgrind in 2.4.4 and 2.5 Initial Comment: 2.3.6 is clean valgrind, wise but 2.4.4 and 2.5 report a ton of problems (just as the interpreter starts) ==3805== Memcheck, a memory error detector. ==3805== Copyright (C) 2002-2005, and GNU GPL'd, by Julian Seward et al. ==3805== Using LibVEX rev 1367, a library for dynamic binary translation. ==3805== Copyright (C) 2004-2005, and GNU GPL'd, by OpenWorks LLP. ==3805== Using valgrind-3.0.1, a dynamic binary instrumentation framework. ==3805== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward et al. ==3805== For more details, rerun with: -v ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337) ==3805== by 0x1B953EBB: PyString_InternFromString (stringobject.c:4364) ==3805== by 0x1B95DFED: add_operators (typeobject.c:5323) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D27D: _Py_ReadyTypes (object.c:1820) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337) ==3805== by 0x1B953EBB: PyString_InternFromString (stringobject.c:4364) ==3805== by 0x1B95DFED: add_operators (typeobject.c:5323) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 
0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D27D: _Py_ReadyTypes (object.c:1820) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B964F7B: pmerge (typeobject.c:1201) ==3805== by 0x1B95EEA7: mro_implementation (typeobject.c:1272) ==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B93E0C3: list_dealloc (listobject.c:269) ==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276) ==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== 
by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276) ==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 0x1B95C214: 
PyType_Ready (typeobject.c:3156)
==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312)
==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805== Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805== by 0x1B96001B: add_methods (typeobject.c:2826)
==3805== by 0x1B95C1A3: PyType_Ready (typeobject.c:3191)
==3805== by 0x1B94D2C8: _Py_ReadyTypes (object.c:1829)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805== Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805== by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805== by 0x1B94D2E1: _Py_ReadyTypes (object.c:1832)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805== Address 0x1BC88010 is 256 bytes inside a block of size 384 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805== by 0x1B96001B: add_methods (typeobject.c:2826)
==3805== by 0x1B95C1A3: PyType_Ready (typeobject.c:3191)
==3805== by 0x1B94D2C8: _Py_ReadyTypes (object.c:1829)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B97A980: _PyExc_Init (exceptions.c:1804)
==3805== by 0x1B9AA450: Py_InitializeEx (pythonrun.c:207)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805== Address 0x1BC8E010 is 0 bytes inside a block of size 29 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B97A980: _PyExc_Init (exceptions.c:1804)
==3805== by 0x1B9AA450: Py_InitializeEx (pythonrun.c:207)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337)
==3805== by 0x1B94897F: PyDict_SetItemString (dictobject.c:2025)
==3805== by 0x1B9A8C6F: Py_InitModule4 (modsupport.c:82)
==3805== by 0x1B9B4C41: initsignal (signalmodule.c:319)
==3805== by 0x1B9B5885: PyOS_InitInterrupts (signalmodule.c:643)
==3805== by 0x1B9AC6FF: initsigs (pythonrun.c:1610)
==3805== by 0x1B9AA6E8: Py_InitializeEx (pythonrun.c:216)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805== Address 0x1BC84010 is 816 bytes inside a block of size 2744 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1BB33D76: qsort (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1B95DE6F: add_operators (typeobject.c:5327)
==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805== by 0x1B94D27D: _Py_ReadyTypes (object.c:1820)
==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805== by 0x1B9B3257: Py_Main (main.c:427)
==3805== by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B9A84B8: PyMarshal_ReadObjectFromString (marshal.c:825)
==3805== by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== Address 0x1BC9C010 is 336 bytes inside a block of size 592 free'd
==3805== at 0x1B8FFDF1: realloc (vg_replace_malloc.c:306)
==3805== by 0x1B93D4EB: PyList_Append (listobject.c:53)
==3805== by 0x1B9A7C07: r_object (marshal.c:549)
==3805== by 0x1B9A6A93: r_object (marshal.c:598)
==3805== by 0x1B9A71B0: r_object (marshal.c:670)
==3805== by 0x1B9A6A93: r_object (marshal.c:598)
==3805== by 0x1B9A71A1: r_object (marshal.c:669)
==3805== by 0x1B9A848F: PyMarshal_ReadObjectFromString (marshal.c:822)
==3805== by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Use of uninitialised value of size 4
==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805== by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194)
==3805== by 0x1B9B5F28: initposix (posixmodule.c:7223)
==3805== by 0x1B9A08AB: init_builtin (import.c:1773)
==3805== by 0x1B9A07BB: load_module (import.c:1702)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== Address 0x1BCAB010 is 16 bytes before a block of size 1536 alloc'd
==3805== at 0x1B8FEA39: malloc (vg_replace_malloc.c:149)
==3805== by 0x1B948AA7: dictresize (dictobject.c:500)
==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805== by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194)
==3805== by 0x1B9B5EFD: initposix (posixmodule.c:7216)
==3805== by 0x1B9A08AB: init_builtin (import.c:1773)
==3805== by 0x1B9A07BB: load_module (import.c:1702)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805== by 0x1B95C170: PyType_Ready (typeobject.c:2845)
==3805== by 0x1B9584F2: PyStructSequence_InitType (structseq.c:388)
==3805== by 0x1B9B5E65: initposix (posixmodule.c:7983)
==3805== by 0x1B9A08AB: init_builtin (import.c:1773)
==3805== by 0x1B9A07BB: load_module (import.c:1702)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== Address 0x1BCAE010 is 8 bytes before a block of size 384 alloc'd
==3805== at 0x1B8FEA39: malloc (vg_replace_malloc.c:149)
==3805== by 0x1B948AA7: dictresize (dictobject.c:500)
==3805== by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805== by 0x1B9584F2: PyStructSequence_InitType (structseq.c:388)
==3805== by 0x1B9B5E65: initposix (posixmodule.c:7983)
==3805== by 0x1B9A08AB: init_builtin (import.c:1773)
==3805== by 0x1B9A07BB: load_module (import.c:1702)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1B9A01AF: find_module (import.c:1324)
==3805== by 0x1B9A1243: import_submodule (import.c:2266)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B954108: string_dealloc (stringobject.c:512)
==3805== by 0x1B98AB04: code_dealloc (compile.c:230)
==3805== by 0x1B9A1F69: load_source_module (import.c:919)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1B9A01AF: find_module (import.c:1324)
==3805== by 0x1B9A1243: import_submodule (import.c:2266)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B9B3E4B: PyObject_GC_Del (gcmodule.c:1311)
==3805== by 0x1B9594C9: tupledealloc (tupleobject.c:182)
==3805== by 0x1B98AAEC: code_dealloc (compile.c:230)
==3805== by 0x1B9A1F69: load_source_module (import.c:919)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== Address 0x1BCB4010 is 24 bytes inside a block of size 352 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1B9A01AF: find_module (import.c:1324)
==3805== by 0x1B9A1243: import_submodule (import.c:2266)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B980D9C: call_function (ceval.c:3603)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== Address 0x1BCB9010 is 48 bytes inside a block of size 100 free'd
==3805== at 0x1B8FFDF1: realloc (vg_replace_malloc.c:306)
==3805== by 0x1B93D4EB: PyList_Append (listobject.c:53)
==3805== by 0x1B97D4F5: PyEval_EvalFrame (ceval.c:1229)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805== by 0x1B9B3DE0: _PyObject_GC_Resize (gcmodule.c:1294)
==3805== by 0x1B9386B7: PyFrame_New (frameobject.c:598)
==3805== by 0x1B97F5AC: PyEval_EvalCodeEx (ceval.c:2533)
==3805== by 0x1B98119A: fast_function (ceval.c:3661)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805== by 0x1B9A01AF: find_module (import.c:1324)
==3805== by 0x1B9A1243: import_submodule (import.c:2266)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B91B043: fixstate (acceler.c:124)
==3805== by 0x1B91AF23: fixdfa (acceler.c:60)
==3805== by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805== by 0x1B91B704: PyParser_New (parser.c:77)
==3805== by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== Address 0x1BCBD010 is 88 bytes inside a block of size 640 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B91B043: fixstate (acceler.c:124)
==3805== by 0x1B91AF23: fixdfa (acceler.c:60)
==3805== by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805== by 0x1B91B704: PyParser_New (parser.c:77)
==3805== by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD04010 is 296 bytes inside a block of size 640 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B91B043: fixstate (acceler.c:124)
==3805== by 0x1B91AF23: fixdfa (acceler.c:60)
==3805== by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805== by 0x1B91B704: PyParser_New (parser.c:77)
==3805== by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805== by 0x1B952711: _PyString_Resize (stringobject.c:3521)
==3805== by 0x1B98845F: jcompile (compile.c:1217)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== Address 0x1BD05010 is 2328 bytes inside a block of size 6012 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805== by 0x1B952711: _PyString_Resize (stringobject.c:3521)
==3805== by 0x1B988443: jcompile (compile.c:1219)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805== by 0x1B98B07E: optimize_code (compile.c:755)
==3805== by 0x1B9881F9: jcompile (compile.c:5018)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805== by 0x1B98B087: optimize_code (compile.c:756)
==3805== by 0x1B9881F9: jcompile (compile.c:5018)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805== by 0x1B98B090: optimize_code (compile.c:757)
==3805== by 0x1B9881F9: jcompile (compile.c:5018)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B954108: string_dealloc (stringobject.c:512)
==3805== by 0x1B982D67: com_free (compile.c:1187)
==3805== by 0x1B9882FC: jcompile (compile.c:5057)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD05010 is 2328 bytes inside a block of size 6012 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B982D20: com_free (compile.c:1187)
==3805== by 0x1B9882FC: jcompile (compile.c:5057)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B982C68: com_free (compile.c:1187)
==3805== by 0x1B9882FC: jcompile (compile.c:5057)
==3805== by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805== by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B954108: string_dealloc (stringobject.c:512)
==3805== by 0x1B98AA50: code_dealloc (compile.c:230)
==3805== by 0x1B9ABEE9: run_node (pythonrun.c:1288)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678)
==3805== by 0x1B91BD8F: parsetok (parsetok.c:213)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805== Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B954108: string_dealloc (stringobject.c:512)
==3805== by 0x1B98AA50: code_dealloc (compile.c:230)
==3805== by 0x1B9394E7: frame_dealloc (frameobject.c:418)
==3805== by 0x1B9AF500: tb_dealloc (traceback.c:37)
==3805== by 0x1B9AF511: tb_dealloc (traceback.c:37)
==3805== by 0x1B947DCD: PyDict_DelItem (dictobject.c:642)
==3805== by 0x1B9489EE: PyDict_DelItemString (dictobject.c:2039)
==3805== by 0x1B9AD8B0: PySys_SetObject (sysmodule.c:79)
==3805== by 0x1B97FF81: reset_exc_info (ceval.c:2889)
==3805== by 0x1B97D04D: PyEval_EvalFrame (ceval.c:2500)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== Address 0x1BD08010 is 5128 bytes inside a block of size 6012 free'd
==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805==
==3805== Use of uninitialised value of size 4
==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805== by 0x1B91BD70: parsetok (parsetok.c:182)
==3805== by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805== by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805== by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805== by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805== by 0x1B981060: call_function (ceval.c:3568)
==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805== by 0x1B98121B: fast_function (ceval.c:3651)
==3805== by 0x1B980DB4: call_function (ceval.c:3589)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B9A84B8: PyMarshal_ReadObjectFromString (marshal.c:825)
==3805== by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Use of uninitialised value of size 4
==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B9A84B8: PyMarshal_ReadObjectFromString (marshal.c:825)
==3805== by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805== by 0x1B9A1EC3: load_source_module (import.c:728)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==
==3805== Use of uninitialised value of size 4
==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B948BBC: dictresize (dictobject.c:533)
==3805== by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805== by 0x1B9A1F45: load_source_module (import.c:915)
==3805== by 0x1B9A05E2: load_module (import.c:1680)
==3805== by 0x1B9A1281: import_submodule (import.c:2276)
==3805== by 0x1B9A0E0E: load_next (import.c:2096)
==3805== by 0x1B9A2649: import_module_ex (import.c:1931)
==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B93E0C3: list_dealloc (listobject.c:269)
==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276)
==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296)
==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235)
==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088)
==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==
==3805== Use of uninitialised value of size 4
==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B93E0C3: list_dealloc (listobject.c:269)
==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276)
==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296)
==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235)
==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088)
==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957)
==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805== by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312)
==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235)
==3805== by 0x1B94C4D8:
PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235) ==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B9B3E4B: PyObject_GC_Del (gcmodule.c:1311) ==3805== by 0x1B9594C9: tupledealloc (tupleobject.c:182) ==3805== by 0x1B98AAD4: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== Address 0x1BD1B010 is 384 bytes inside a block of size 614 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: 
PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B98AB04: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818) ==3805== by 0x1B9B3DE0: _PyObject_GC_Resize (gcmodule.c:1294) ==3805== by 0x1B9386B7: PyFrame_New (frameobject.c:598) ==3805== by 0x1B9811D8: fast_function (ceval.c:3640) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B98119A: fast_function (ceval.c:3661) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== Address 0x1BD1B010 is 384 bytes inside a block of size 614 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B98AB04: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ 
(bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B97D341: PyEval_EvalFrame (ceval.c:923) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 
0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B940665: listiter_next (listobject.c:2772) ==3805== by 0x1B97DDB0: PyEval_EvalFrame (ceval.c:2121) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== Address 0x1BD85010 is 12 bytes after a block of size 20 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 
0x1B95B691: type_new (typeobject.c:1959) ==3805== by 0x1B95EA8D: type_call (typeobject.c:421) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B924566: PyObject_CallFunction (abstract.c:1837) ==3805== by 0x1B981D71: build_class (ceval.c:4113) ==3805== by 0x1B97E693: PyEval_EvalFrame (ceval.c:1688) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B99C042: vgetargs1 (getargs.c:108) ==3805== by 0x1B99BDA8: PyArg_ParseTuple (getargs.c:54) ==3805== by 0x1B9B6DFE: posix_stat (posixmodule.c:1049) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B981060: call_function (ceval.c:3568) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== Address 0x1BD86010 is 16 bytes before a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B98119A: fast_function (ceval.c:3661) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B94D42E: PyMem_Free (object.c:1973) ==3805== by 0x1B9B6E58: posix_stat 
(posixmodule.c:1096) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B981060: call_function (ceval.c:3568) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== More than 50 errors detected. Subsequent errors ==3805== will still be recorded, but in less detail than before. 
==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786) ==3805== by 0x1B9A1EC3: load_source_module (import.c:728) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E79: load_next (import.c:2100) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435) ==3805== Address 0x1BD25010 is 144 bytes inside a block of size 352 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1B9A01AF: find_module (import.c:1324) ==3805== by 0x1B9A23C7: load_package (import.c:961) ==3805== by 0x1B9A0708: load_module (import.c:1694) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B99A503: _PyCodecRegistry_Init (codecs.c:834) ==3805== by 0x1B9990E9: _PyCodec_Lookup (codecs.c:106)

Python 2.4.4 (#1, Jan 2 2007, 15:51:25)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E528: PyTokenizer_Free (tokenizer.c:677) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E528: PyTokenizer_Free (tokenizer.c:677) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 
0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B947C71: PyDict_SetItem (dictobject.c:606) ==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026) ==3805== by 0x1B99F28D: PyImport_Cleanup (import.c:384) ==3805== by 0x1B9AA92D: Py_Finalize (pythonrun.c:356) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BD21010 is 224 bytes inside a block of size 352 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1B9A01AF: find_module (import.c:1324) ==3805== by 0x1B9A1243: import_submodule (import.c:2266) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B947F34: PyDict_Clear (dictobject.c:710) ==3805== by 0x1B99F646: PyImport_Cleanup (import.c:476) ==3805== by 0x1B9AA92D: Py_Finalize (pythonrun.c:356) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BDA6010 is 288 bytes inside a block of size 352 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1B9A01AF: find_module (import.c:1324) 
==3805== by 0x1B9A1243: import_submodule (import.c:2266) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B99F0F1: _PyImport_Fini (import.c:209) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BC8C010 is 2112 bytes inside a block of size 3410 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 
0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93B913: PyInt_Fini (intobject.c:1135) ==3805== by 0x1B9AA977: Py_Finalize (pythonrun.c:426) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BCAC010 is 944 bytes inside a block of size 1536 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026) ==3805== by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194) ==3805== by 0x1B9B5F28: initposix (posixmodule.c:7223) ==3805== by 0x1B9A08AB: init_builtin (import.c:1773) ==3805== by 0x1B9A07BB: load_module (import.c:1702) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91AEF4: PyGrammar_RemoveAccelerators (acceler.c:47) ==3805== by 0x1B9AA98D: Py_Finalize (pythonrun.c:440) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BCBC010 is 16 bytes inside a block of size 384 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B93A6A4: func_dealloc (funcobject.c:454) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B927970: class_dealloc (classobject.c:193) ==3805== by 0x1B959529: tupledealloc 
(tupleobject.c:178) ==3805== by 0x1B927988: class_dealloc (classobject.c:193) ==3805== by 0x1B92855C: instance_dealloc (classobject.c:703) ==3805== by 0x1B947C71: PyDict_SetItem (dictobject.c:606) ==3805== by 0x1B94B362: _PyModule_Clear (moduleobject.c:136) ==3805== by 0x1B99F53B: PyImport_Cleanup (import.c:454)
==3805==
==3805== ERROR SUMMARY: 720 errors from 62 contexts (suppressed: 50 from 1)
==3805== malloc/free: in use at exit: 713145 bytes in 231 blocks.
==3805== malloc/free: 1648 allocs, 1417 frees, 1435954 bytes allocated.
==3805== For counts of detected errors, rerun with: -v
==3805== searching for pointers to 231 not-freed blocks.
==3805== checked 1192292 bytes.
==3805==
==3805== LEAK SUMMARY:
==3805==    definitely lost: 40 bytes in 2 blocks.
==3805==    possibly lost: 0 bytes in 0 blocks.
==3805==    still reachable: 713105 bytes in 229 blocks.
==3805==         suppressed: 0 bytes in 0 blocks.
==3805== Use --leak-check=full to see details of leaked memory.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629158&group_id=5470

From noreply at sourceforge.net  Sat Jan 6 01:43:41 2007
From: noreply at sourceforge.net (SourceForge.net)
Date: Sat, 06 Jan 2007 00:43:41 -0000
Subject: [ python-Bugs-1629158 ] Lots of errors reported by valgrind in 2.4.4 and 2.5
Message-ID:

Bugs item #1629158, was opened at 2007-01-06 01:38
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629158&group_id=5470

Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update.
Category: Python Library
Group: Python 2.4
>Status: Closed
>Resolution: Works For Me
Priority: 5
Private: No
Submitted By: Anton Tropashko (atropashko)
Assigned to: Nobody/Anonymous (nobody)
Summary: Lots of errors reported by valgrind in 2.4.4 and 2.5

Initial Comment:
2.3.6 is clean valgrind-wise, but 2.4.4 and 2.5 report a ton of problems (just as the interpreter starts):

==3805== Memcheck, a memory error detector.
==3805== Copyright (C) 2002-2005, and GNU GPL'd, by Julian Seward et al.
==3805== Using LibVEX rev 1367, a library for dynamic binary translation.
==3805== Copyright (C) 2004-2005, and GNU GPL'd, by OpenWorks LLP.
==3805== Using valgrind-3.0.1, a dynamic binary instrumentation framework.
==3805== Copyright (C) 2000-2005, and GNU GPL'd, by Julian Seward et al.
==3805== For more details, rerun with: -v
==3805==
==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337) ==3805== by 0x1B953EBB: PyString_InternFromString (stringobject.c:4364) ==3805== by 0x1B95DFED: add_operators (typeobject.c:5323) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188) ==3805== by 0x1B95C214: PyType_Ready (typeobject.c:3156) ==3805== by 0x1B94D27D: _Py_ReadyTypes (object.c:1820) ==3805== by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167) ==3805== by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287) ==3805== by 0x1B9B3257: Py_Main (main.c:427) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337) ==3805== by 0x1B953EBB: PyString_InternFromString (stringobject.c:4364) ==3805== by 0x1B95DFED: add_operators (typeobject.c:5323) ==3805== by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D27D: _Py_ReadyTypes (object.c:1820)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B964F7B: pmerge (typeobject.c:1201)
==3805==    by 0x1B95EEA7: mro_implementation (typeobject.c:1272)
==3805==    by 0x1B95A8F8: mro_internal (typeobject.c:1296)
==3805==    by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B93E0C3: list_dealloc (listobject.c:269)
==3805==    by 0x1B95EEF0: mro_implementation (typeobject.c:1276)
==3805==    by 0x1B95A8F8: mro_internal (typeobject.c:1296)
==3805==    by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B95EEF0: mro_implementation (typeobject.c:1276)
==3805==    by 0x1B95A8F8: mro_internal (typeobject.c:1296)
==3805==    by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B95A8A1: mro_internal (typeobject.c:1312)
==3805==    by 0x1B95BE76: PyType_Ready (typeobject.c:3204)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805==    by 0x1B96001B: add_methods (typeobject.c:2826)
==3805==    by 0x1B95C1A3: PyType_Ready (typeobject.c:3191)
==3805==    by 0x1B94D2C8: _Py_ReadyTypes (object.c:1829)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC87010 is 272 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D2AF: _Py_ReadyTypes (object.c:1826)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B94D2E1: _Py_ReadyTypes (object.c:1832)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC88010 is 256 bytes inside a block of size 384 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805==    by 0x1B96001B: add_methods (typeobject.c:2826)
==3805==    by 0x1B95C1A3: PyType_Ready (typeobject.c:3191)
==3805==    by 0x1B94D2C8: _Py_ReadyTypes (object.c:1829)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B97A980: _PyExc_Init (exceptions.c:1804)
==3805==    by 0x1B9AA450: Py_InitializeEx (pythonrun.c:207)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC8E010 is 0 bytes inside a block of size 29 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B97A980: _PyExc_Init (exceptions.c:1804)
==3805==    by 0x1B9AA450: Py_InitializeEx (pythonrun.c:207)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B953E18: PyString_InternInPlace (stringobject.c:4337)
==3805==    by 0x1B94897F: PyDict_SetItemString (dictobject.c:2025)
==3805==    by 0x1B9A8C6F: Py_InitModule4 (modsupport.c:82)
==3805==    by 0x1B9B4C41: initsignal (signalmodule.c:319)
==3805==    by 0x1B9B5885: PyOS_InitInterrupts (signalmodule.c:643)
==3805==    by 0x1B9AC6FF: initsigs (pythonrun.c:1610)
==3805==    by 0x1B9AA6E8: Py_InitializeEx (pythonrun.c:216)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==  Address 0x1BC84010 is 816 bytes inside a block of size 2744 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB33D76: qsort (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B95DE6F: add_operators (typeobject.c:5327)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B95C214: PyType_Ready (typeobject.c:3156)
==3805==    by 0x1B94D27D: _Py_ReadyTypes (object.c:1820)
==3805==    by 0x1B9AA38C: Py_InitializeEx (pythonrun.c:167)
==3805==    by 0x1B9AA8B9: Py_Initialize (pythonrun.c:287)
==3805==    by 0x1B9B3257: Py_Main (main.c:427)
==3805==    by 0x80486A9: main (python.c:23)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B9A84B8: PyMarshal_ReadObjectFromString (marshal.c:825)
==3805==    by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==  Address 0x1BC9C010 is 336 bytes inside a block of size 592 free'd
==3805==    at 0x1B8FFDF1: realloc (vg_replace_malloc.c:306)
==3805==    by 0x1B93D4EB: PyList_Append (listobject.c:53)
==3805==    by 0x1B9A7C07: r_object (marshal.c:549)
==3805==    by 0x1B9A6A93: r_object (marshal.c:598)
==3805==    by 0x1B9A71B0: r_object (marshal.c:670)
==3805==    by 0x1B9A6A93: r_object (marshal.c:598)
==3805==    by 0x1B9A71A1: r_object (marshal.c:669)
==3805==    by 0x1B9A848F: PyMarshal_ReadObjectFromString (marshal.c:822)
==3805==    by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805==    at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Use of uninitialised value of size 4
==3805==    at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805==    by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194)
==3805==    by 0x1B9B5F28: initposix (posixmodule.c:7223)
==3805==    by 0x1B9A08AB: init_builtin (import.c:1773)
==3805==    by 0x1B9A07BB: load_module (import.c:1702)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==  Address 0x1BCAB010 is 16 bytes before a block of size 1536 alloc'd
==3805==    at 0x1B8FEA39: malloc (vg_replace_malloc.c:149)
==3805==    by 0x1B948AA7: dictresize (dictobject.c:500)
==3805==    by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805==    by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194)
==3805==    by 0x1B9B5EFD: initposix (posixmodule.c:7216)
==3805==    by 0x1B9A08AB: init_builtin (import.c:1773)
==3805==    by 0x1B9A07BB: load_module (import.c:1702)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B948990: PyDict_SetItemString (dictobject.c:2026)
==3805==    by 0x1B95C170: PyType_Ready (typeobject.c:2845)
==3805==    by 0x1B9584F2: PyStructSequence_InitType (structseq.c:388)
==3805==    by 0x1B9B5E65: initposix (posixmodule.c:7983)
==3805==    by 0x1B9A08AB: init_builtin (import.c:1773)
==3805==    by 0x1B9A07BB: load_module (import.c:1702)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==  Address 0x1BCAE010 is 8 bytes before a block of size 384 alloc'd
==3805==    at 0x1B8FEA39: malloc (vg_replace_malloc.c:149)
==3805==    by 0x1B948AA7: dictresize (dictobject.c:500)
==3805==    by 0x1B95DFB3: add_operators (typeobject.c:5482)
==3805==    by 0x1B95BE00: PyType_Ready (typeobject.c:3188)
==3805==    by 0x1B9584F2: PyStructSequence_InitType (structseq.c:388)
==3805==    by 0x1B9B5E65: initposix (posixmodule.c:7983)
==3805==    by 0x1B9A08AB: init_builtin (import.c:1773)
==3805==    by 0x1B9A07BB: load_module (import.c:1702)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B948BBC: dictresize (dictobject.c:533)
==3805==    by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700)
==3805==    by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805==    by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805==    by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==    by 0x1B9A1F45: load_source_module (import.c:915)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==  Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B9A01AF: find_module (import.c:1324)
==3805==    by 0x1B9A1243: import_submodule (import.c:2266)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B954108: string_dealloc (stringobject.c:512)
==3805==    by 0x1B98AB04: code_dealloc (compile.c:230)
==3805==    by 0x1B9A1F69: load_source_module (import.c:919)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==  Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B9A01AF: find_module (import.c:1324)
==3805==    by 0x1B9A1243: import_submodule (import.c:2266)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B9B3E4B: PyObject_GC_Del (gcmodule.c:1311)
==3805==    by 0x1B9594C9: tupledealloc (tupleobject.c:182)
==3805==    by 0x1B98AAEC: code_dealloc (compile.c:230)
==3805==    by 0x1B9A1F69: load_source_module (import.c:919)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==  Address 0x1BCB4010 is 24 bytes inside a block of size 352 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B9A01AF: find_module (import.c:1324)
==3805==    by 0x1B9A1243: import_submodule (import.c:2266)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B980D9C: call_function (ceval.c:3603)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805==    by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805==    by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==    by 0x1B9A1F45: load_source_module (import.c:915)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==  Address 0x1BCB9010 is 48 bytes inside a block of size 100 free'd
==3805==    at 0x1B8FFDF1: realloc (vg_replace_malloc.c:306)
==3805==    by 0x1B93D4EB: PyList_Append (listobject.c:53)
==3805==    by 0x1B97D4F5: PyEval_EvalFrame (ceval.c:1229)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805==    by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805==    by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==    by 0x1B9A1F45: load_source_module (import.c:915)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805==    by 0x1B9B3DE0: _PyObject_GC_Resize (gcmodule.c:1294)
==3805==    by 0x1B9386B7: PyFrame_New (frameobject.c:598)
==3805==    by 0x1B97F5AC: PyEval_EvalCodeEx (ceval.c:2533)
==3805==    by 0x1B98119A: fast_function (ceval.c:3661)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805==    by 0x1B97CC96: PyEval_EvalCode (ceval.c:484)
==3805==    by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636)
==3805==    by 0x1B9A1F45: load_source_module (import.c:915)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==  Address 0x1BCB7010 is 104 bytes inside a block of size 352 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B9A01AF: find_module (import.c:1324)
==3805==    by 0x1B9A1243: import_submodule (import.c:2266)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B91B043: fixstate (acceler.c:124)
==3805==    by 0x1B91AF23: fixdfa (acceler.c:60)
==3805==    by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805==    by 0x1B91B704: PyParser_New (parser.c:77)
==3805==    by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==  Address 0x1BCBD010 is 88 bytes inside a block of size 640 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B91B043: fixstate (acceler.c:124)
==3805==    by 0x1B91AF23: fixdfa (acceler.c:60)
==3805==    by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805==    by 0x1B91B704: PyParser_New (parser.c:77)
==3805==    by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD04010 is 296 bytes inside a block of size 640 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B91B043: fixstate (acceler.c:124)
==3805==    by 0x1B91AF23: fixdfa (acceler.c:60)
==3805==    by 0x1B91AE63: PyGrammar_AddAccelerators (acceler.c:30)
==3805==    by 0x1B91B704: PyParser_New (parser.c:77)
==3805==    by 0x1B91BC6E: parsetok (parsetok.c:109)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805==    by 0x1B952711: _PyString_Resize (stringobject.c:3521)
==3805==    by 0x1B98845F: jcompile (compile.c:1217)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==  Address 0x1BD05010 is 2328 bytes inside a block of size 6012 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818)
==3805==    by 0x1B952711: _PyString_Resize (stringobject.c:3521)
==3805==    by 0x1B988443: jcompile (compile.c:1219)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805==    by 0x1B98B07E: optimize_code (compile.c:755)
==3805==    by 0x1B9881F9: jcompile (compile.c:5018)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805==    by 0x1B98B087: optimize_code (compile.c:756)
==3805==    by 0x1B9881F9: jcompile (compile.c:5018)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B94D42E: PyMem_Free (object.c:1973)
==3805==    by 0x1B98B090: optimize_code (compile.c:757)
==3805==    by 0x1B9881F9: jcompile (compile.c:5018)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B954108: string_dealloc (stringobject.c:512)
==3805==    by 0x1B982D67: com_free (compile.c:1187)
==3805==    by 0x1B9882FC: jcompile (compile.c:5057)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD05010 is 2328 bytes inside a block of size 6012 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B982D20: com_free (compile.c:1187)
==3805==    by 0x1B9882FC: jcompile (compile.c:5057)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B982C68: com_free (compile.c:1187)
==3805==    by 0x1B9882FC: jcompile (compile.c:5057)
==3805==    by 0x1B987EC1: PyNode_CompileFlags (compile.c:4919)
==3805==    by 0x1B9ABEA7: run_node (pythonrun.c:1281)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B954108: string_dealloc (stringobject.c:512)
==3805==    by 0x1B98AA50: code_dealloc (compile.c:230)
==3805==    by 0x1B9ABEE9: run_node (pythonrun.c:1288)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678)
==3805==    by 0x1B91BD8F: parsetok (parsetok.c:213)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==  Address 0x1BD06010 is not stack'd, malloc'd or (recently) free'd
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B954108: string_dealloc (stringobject.c:512)
==3805==    by 0x1B98AA50: code_dealloc (compile.c:230)
==3805==    by 0x1B9394E7: frame_dealloc (frameobject.c:418)
==3805==    by 0x1B9AF500: tb_dealloc (traceback.c:37)
==3805==    by 0x1B9AF511: tb_dealloc (traceback.c:37)
==3805==    by 0x1B947DCD: PyDict_DelItem (dictobject.c:642)
==3805==    by 0x1B9489EE: PyDict_DelItemString (dictobject.c:2039)
==3805==    by 0x1B9AD8B0: PySys_SetObject (sysmodule.c:79)
==3805==    by 0x1B97FF81: reset_exc_info (ceval.c:2889)
==3805==    by 0x1B97D04D: PyEval_EvalFrame (ceval.c:2500)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==  Address 0x1BD08010 is 5128 bytes inside a block of size 6012 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1B94DF60: PyObject_Free (obmalloc.c:798)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805==    at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==
==3805== Use of uninitialised value of size 4
==3805==    at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B91B739: PyParser_Delete (parser.c:101)
==3805==    by 0x1B91BD70: parsetok (parsetok.c:182)
==3805==    by 0x1B91BA70: PyParser_ParseStringFlags (parsetok.c:31)
==3805==    by 0x1B9AC164: PyParser_SimpleParseStringFlags (pythonrun.c:1385)
==3805==    by 0x1B9ABD86: PyRun_StringFlags (pythonrun.c:1242)
==3805==    by 0x1B977203: builtin_eval (bltinmodule.c:527)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B981060: call_function (ceval.c:3568)
==3805==    by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167)
==3805==    by 0x1B98121B: fast_function (ceval.c:3651)
==3805==    by 0x1B980DB4: call_function (ceval.c:3589)
==3805==
==3805== Conditional jump or move depends on uninitialised value(s)
==3805==    at 0x1B94DEEB: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B9A84B8: PyMarshal_ReadObjectFromString (marshal.c:825)
==3805==    by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==
==3805== Use of uninitialised value of size 4
==3805==    at 0x1B94DEF5: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B93E04A: list_dealloc (listobject.c:266)
==3805==    by 0x1B9A84B8:
PyMarshal_ReadObjectFromString (marshal.c:825) ==3805== by 0x1B9A8352: PyMarshal_ReadLastObjectFromFile (marshal.c:784) ==3805== by 0x1B9A1EC3: load_source_module (import.c:728) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B97E73C: PyEval_EvalFrame (ceval.c:1700) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex 
(import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B93E0C3: list_dealloc (listobject.c:269) ==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276) ==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235) ==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B93E0C3: list_dealloc (listobject.c:269) ==3805== by 0x1B95EEF0: mro_implementation (typeobject.c:1276) ==3805== by 0x1B95A8F8: mro_internal (typeobject.c:1296) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235) ==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235) ==3805== by 0x1B94C4D8: 
PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 0x1B94C99C: PyObject_GenericGetAttr (object.c:1235) ==3805== by 0x1B94C4D8: PyObject_GetAttr (object.c:1088) ==3805== by 0x1B97EAE8: PyEval_EvalFrame (ceval.c:1957) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== by 0x1B99FA4C: PyImport_ExecCodeModuleEx (import.c:636) ==3805== by 0x1B9A1F45: load_source_module (import.c:915) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B9B3E4B: PyObject_GC_Del (gcmodule.c:1311) ==3805== by 0x1B9594C9: tupledealloc (tupleobject.c:182) ==3805== by 0x1B98AAD4: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== Address 0x1BD1B010 is 384 bytes inside a block of size 614 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: 
PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B98AB04: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DF90: PyObject_Realloc (obmalloc.c:818) ==3805== by 0x1B9B3DE0: _PyObject_GC_Resize (gcmodule.c:1294) ==3805== by 0x1B9386B7: PyFrame_New (frameobject.c:598) ==3805== by 0x1B9811D8: fast_function (ceval.c:3640) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B98119A: fast_function (ceval.c:3661) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== Address 0x1BD1B010 is 384 bytes inside a block of size 614 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B98AB04: code_dealloc (compile.c:230) ==3805== by 0x1B9A1F69: load_source_module (import.c:919) ==3805== by 0x1B9A05E2: load_module (import.c:1680) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ 
(bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B97D341: PyEval_EvalFrame (ceval.c:923) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 
0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B940665: listiter_next (listobject.c:2772) ==3805== by 0x1B97DDB0: PyEval_EvalFrame (ceval.c:2121) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B97CC96: PyEval_EvalCode (ceval.c:484) ==3805== Address 0x1BD85010 is 12 bytes after a block of size 20 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B95A8A1: mro_internal (typeobject.c:1312) ==3805== by 0x1B95BE76: PyType_Ready (typeobject.c:3204) ==3805== by 
0x1B95B691: type_new (typeobject.c:1959) ==3805== by 0x1B95EA8D: type_call (typeobject.c:421) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B924566: PyObject_CallFunction (abstract.c:1837) ==3805== by 0x1B981D71: build_class (ceval.c:4113) ==3805== by 0x1B97E693: PyEval_EvalFrame (ceval.c:1688) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B99C042: vgetargs1 (getargs.c:108) ==3805== by 0x1B99BDA8: PyArg_ParseTuple (getargs.c:54) ==3805== by 0x1B9B6DFE: posix_stat (posixmodule.c:1049) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B981060: call_function (ceval.c:3568) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== Address 0x1BD86010 is 16 bytes before a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B97F980: PyEval_EvalCodeEx (ceval.c:2741) ==3805== by 0x1B98119A: fast_function (ceval.c:3661) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B94D42E: PyMem_Free (object.c:1973) ==3805== by 0x1B9B6E58: posix_stat 
(posixmodule.c:1096) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B981060: call_function (ceval.c:3568) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== Address 0x1BD1F010 is 0 bytes inside a block of size 32 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B93E04A: list_dealloc (listobject.c:266) ==3805== by 0x1B939523: frame_dealloc (frameobject.c:418) ==3805== by 0x1B981246: fast_function (ceval.c:3655) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== by 0x1B97EFA2: PyEval_EvalFrame (ceval.c:2167) ==3805== by 0x1B98121B: fast_function (ceval.c:3651) ==3805== by 0x1B980DB4: call_function (ceval.c:3589) ==3805== ==3805== More than 50 errors detected. Subsequent errors ==3805== will still be recorded, but in less detail than before. 
==3805==
==3805== Invalid read of size 4
==3805==    at 0x1B94DEE0: PyObject_Free (obmalloc.c:735)
==3805==    by 0x1B9A836D: PyMarshal_ReadLastObjectFromFile (marshal.c:786)
==3805==    by 0x1B9A1EC3: load_source_module (import.c:728)
==3805==    by 0x1B9A05E2: load_module (import.c:1680)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E79: load_next (import.c:2100)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B976668: builtin___import__ (bltinmodule.c:45)
==3805==    by 0x1B94A955: PyCFunction_Call (methodobject.c:108)
==3805==    by 0x1B9244A3: PyObject_Call (abstract.c:1795)
==3805==    by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435)
==3805== Address 0x1BD25010 is 144 bytes inside a block of size 352 free'd
==3805==    at 0x1B8FF54C: free (vg_replace_malloc.c:235)
==3805==    by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so)
==3805==    by 0x1B9A01AF: find_module (import.c:1324)
==3805==    by 0x1B9A23C7: load_package (import.c:961)
==3805==    by 0x1B9A0708: load_module (import.c:1694)
==3805==    by 0x1B9A1281: import_submodule (import.c:2276)
==3805==    by 0x1B9A0E0E: load_next (import.c:2096)
==3805==    by 0x1B9A2649: import_module_ex (import.c:1931)
==3805==    by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972)
==3805==    by 0x1B99A503: _PyCodecRegistry_Init (codecs.c:834)
==3805==    by 0x1B9990E9: _PyCodec_Lookup (codecs.c:106)
Python 2.4.4 (#1, Jan 2 2007, 15:51:25)
[GCC 3.2.3 20030502 (Red Hat Linux 3.2.3-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E528: PyTokenizer_Free (tokenizer.c:677) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E528: PyTokenizer_Free (tokenizer.c:677) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91E515: PyTokenizer_Free (tokenizer.c:678) ==3805== by 0x1B91BD8F: parsetok (parsetok.c:213) ==3805== by 0x1B9AB128: PyRun_InteractiveOneFlags (pythonrun.c:752) ==3805== by 0x1B9AAF48: PyRun_InteractiveLoopFlags (pythonrun.c:704) ==3805== by 0x1B9AAE7E: PyRun_AnyFileExFlags (pythonrun.c:667) ==3805== by 
0x1B9B350A: Py_Main (main.c:493) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B947C71: PyDict_SetItem (dictobject.c:606) ==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026) ==3805== by 0x1B99F28D: PyImport_Cleanup (import.c:384) ==3805== by 0x1B9AA92D: Py_Finalize (pythonrun.c:356) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BD21010 is 224 bytes inside a block of size 352 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1B9A01AF: find_module (import.c:1324) ==3805== by 0x1B9A1243: import_submodule (import.c:2266) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B947F34: PyDict_Clear (dictobject.c:710) ==3805== by 0x1B99F646: PyImport_Cleanup (import.c:476) ==3805== by 0x1B9AA92D: Py_Finalize (pythonrun.c:356) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BDA6010 is 288 bytes inside a block of size 352 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1BB69ED6: __fopen_internal (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1BB6C5BD: fopen64 (in /lib/tls/libc-2.3.2.so) ==3805== by 0x1B9A01AF: find_module (import.c:1324) 
==3805== by 0x1B9A1243: import_submodule (import.c:2266) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== by 0x1B976668: builtin___import__ (bltinmodule.c:45) ==3805== by 0x1B94A955: PyCFunction_Call (methodobject.c:108) ==3805== by 0x1B9244A3: PyObject_Call (abstract.c:1795) ==3805== by 0x1B980B5B: PyEval_CallObjectWithKeywords (ceval.c:3435) ==3805== ==3805== Conditional jump or move depends on uninitialised value(s) ==3805== at 0x1B94DEEB: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Use of uninitialised value of size 4 ==3805== at 0x1B94DEF5: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B99F0F1: _PyImport_Fini (import.c:209) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BC8C010 is 2112 bytes inside a block of size 3410 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B954108: string_dealloc (stringobject.c:512) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 
0x1B99F10F: _PyImport_Fini (import.c:211) ==3805== by 0x1B9AA932: Py_Finalize (pythonrun.c:378) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B93B913: PyInt_Fini (intobject.c:1135) ==3805== by 0x1B9AA977: Py_Finalize (pythonrun.c:426) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BCAC010 is 944 bytes inside a block of size 1536 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948BBC: dictresize (dictobject.c:533) ==3805== by 0x1B948990: PyDict_SetItemString (dictobject.c:2026) ==3805== by 0x1B9BA0B2: setup_confname_table (posixmodule.c:7194) ==3805== by 0x1B9B5F28: initposix (posixmodule.c:7223) ==3805== by 0x1B9A08AB: init_builtin (import.c:1773) ==3805== by 0x1B9A07BB: load_module (import.c:1702) ==3805== by 0x1B9A1281: import_submodule (import.c:2276) ==3805== by 0x1B9A0E0E: load_next (import.c:2096) ==3805== by 0x1B9A2649: import_module_ex (import.c:1931) ==3805== by 0x1B9A0B91: PyImport_ImportModuleEx (import.c:1972) ==3805== ==3805== Invalid read of size 4 ==3805== at 0x1B94DEE0: PyObject_Free (obmalloc.c:735) ==3805== by 0x1B91AEF4: PyGrammar_RemoveAccelerators (acceler.c:47) ==3805== by 0x1B9AA98D: Py_Finalize (pythonrun.c:440) ==3805== by 0x1B9B3405: Py_Main (main.c:513) ==3805== by 0x80486A9: main (python.c:23) ==3805== Address 0x1BCBC010 is 16 bytes inside a block of size 384 free'd ==3805== at 0x1B8FF54C: free (vg_replace_malloc.c:235) ==3805== by 0x1B94DF60: PyObject_Free (obmalloc.c:798) ==3805== by 0x1B948CA8: dict_dealloc (dictobject.c:770) ==3805== by 0x1B93A6A4: func_dealloc (funcobject.c:454) ==3805== by 0x1B948D22: dict_dealloc (dictobject.c:772) ==3805== by 0x1B927970: class_dealloc (classobject.c:193) ==3805== by 0x1B959529: tupledealloc 
(tupleobject.c:178)
==3805==    by 0x1B927988: class_dealloc (classobject.c:193)
==3805==    by 0x1B92855C: instance_dealloc (classobject.c:703)
==3805==    by 0x1B947C71: PyDict_SetItem (dictobject.c:606)
==3805==    by 0x1B94B362: _PyModule_Clear (moduleobject.c:136)
==3805==    by 0x1B99F53B: PyImport_Cleanup (import.c:454)
==3805==
==3805== ERROR SUMMARY: 720 errors from 62 contexts (suppressed: 50 from 1)
==3805== malloc/free: in use at exit: 713145 bytes in 231 blocks.
==3805== malloc/free: 1648 allocs, 1417 frees, 1435954 bytes allocated.
==3805== For counts of detected errors, rerun with: -v
==3805== searching for pointers to 231 not-freed blocks.
==3805== checked 1192292 bytes.
==3805==
==3805== LEAK SUMMARY:
==3805==    definitely lost: 40 bytes in 2 blocks.
==3805==    possibly lost: 0 bytes in 0 blocks.
==3805==    still reachable: 713105 bytes in 229 blocks.
==3805==         suppressed: 0 bytes in 0 blocks.
==3805== Use --leak-check=full to see details of leaked memory.

----------------------------------------------------------------------

>Comment By: Martin v. Löwis (loewis)
Date: 2007-01-06 01:43

Message:
Logged In: YES
user_id=21627
Originator: NO

This is not a bug, see Misc/README.valgrind.

----------------------------------------------------------------------

You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1629158&group_id=5470
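[Editorial note: the comment's reference to Misc/README.valgrind concerns CPython's obmalloc allocator, whose PyObject_Free/PyObject_Realloc deliberately read memory they may not own when probing whether an address belongs to one of their pools; Valgrind flags these probes as the "Invalid read" / "uninitialised value" errors seen above, but they are benign. CPython's source tree ships a maintained suppression file (Misc/valgrind-python.supp) for exactly this. As an illustration only, a suppression entry for these reports would take roughly this shape in Valgrind's suppression-file syntax; the entry names are made up here, and the maintained file should be preferred:]

```
# Hypothetical sketch of Valgrind suppressions for obmalloc's address
# probes. The frame name matches the traces above; consult
# Misc/valgrind-python.supp in the CPython tree for the real entries.
{
   obmalloc address probe (invalid 4-byte read)
   Memcheck:Addr4
   fun:PyObject_Free
}
{
   obmalloc address probe (conditional jump on uninitialised value)
   Memcheck:Cond
   fun:PyObject_Free
}
{
   obmalloc address probe (use of uninitialised 4-byte value)
   Memcheck:Value4
   fun:PyObject_Free
}
```

[Such a file would be passed on the command line, e.g. `valgrind --suppressions=Misc/valgrind-python.supp python ...`, which is why the log above already shows "suppressed: 50 from 1" for the stock suppressions in effect.]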