From cournape at gmail.com Thu Jan 1 07:11:11 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 1 Jan 2009 15:11:11 +0900 Subject: [Python-Dev] floatformat vs float_format Message-ID: <5b8d13220812312211s799f6860xe9f761c199eb2b19@mail.gmail.com> Hi, In python 2.6, there has been some effort to make float formatting more consistent between platforms, which is nice. Unfortunately, there is still one corner case, for example on windows: print a -> print 'inf' print '%f' % a -> print '1.#INF' The difference being that in the second case, the formatting is done in floatformat (in stringobject.c), whereas in the first case, it is done in format_float (in floatobject.c). Shouldn't both functions be calling the same underlying implementation, to avoid those inconsistencies? thanks, David From eric at trueblade.com Thu Jan 1 10:43:36 2009 From: eric at trueblade.com (Eric Smith) Date: Thu, 01 Jan 2009 04:43:36 -0500 Subject: [Python-Dev] floatformat vs float_format In-Reply-To: <5b8d13220812312211s799f6860xe9f761c199eb2b19@mail.gmail.com> References: <5b8d13220812312211s799f6860xe9f761c199eb2b19@mail.gmail.com> Message-ID: <495C9048.80201@trueblade.com> David Cournapeau wrote: > Hi, > > In python 2.6, there has been some effort to make float formatting > more consistent between platforms, which is nice. Unfortunately, there > is still one corner case, for example on windows: > > print a -> print 'inf' > print '%f' % a -> print '1.#INF' > > The difference being that in the second case, the formatting is done > in floatformat (in stringobject.c), whereas in the first case, it is > done in format_float (in floatobject.c). Shouldn't both functions be > calling the same underlying implementation, to avoid those > inconsistencies? Yes, float formatting definitely needs some rationalization. 
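[Editor's note: the platform drift David describes can at least be papered over at the Python level until the two C paths share one implementation. The helper below is a hypothetical sketch — the function name and mapping table are mine, not part of any proposed patch — translating the MSVC-specific spellings back to the portable ones.]

```python
def normalize_float_repr(s):
    """Map MSVC-style special-value spellings ('1.#INF', '1.#QNAN',
    '-1.#IND') to the portable 'inf'/'nan' forms; leave anything else
    untouched."""
    table = {
        "1.#INF": "inf",
        "-1.#INF": "-inf",
        "1.#QNAN": "nan",
        "-1.#QNAN": "nan",
        "-1.#IND": "nan",
    }
    return table.get(s.strip(), s)

print(normalize_float_repr("1.#INF"))  # -> inf
print(normalize_float_repr("3.14"))    # -> 3.14
```

On platforms where '%f' already produces 'inf', the helper is a no-op.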
While this isn't the exact issue discussed in http://bugs.python.org/issue3382, it is related, and Windows is the reason I had to back my fix out right before the freeze for 2.6 and 3.0. It's on my list of things to fix. http://bugs.python.org/issue4482 might also be related, and I'll fix that, too. If you could either add a comment to 3382 (with this test case) or open another bug and assign it to me (eric.smith), I'd appreciate it. Happy New Year, all! Eric. From yinon.me at gmail.com Thu Jan 1 13:24:02 2009 From: yinon.me at gmail.com (Yinon Ehrlich) Date: Thu, 01 Jan 2009 14:24:02 +0200 Subject: [Python-Dev] patch suggestion for webbrowser Message-ID: <495CB5E2.5050309@gmail.com> Hi, enclosed a patch for webbrowser which will find applications/batch files ending with .com or .cmd too. Yinon

Index: Lib/webbrowser.py
===================================================================
--- Lib/webbrowser.py	(revision 68118)
+++ Lib/webbrowser.py	(working copy)
@@ -103,10 +103,11 @@
 if sys.platform[:3] == "win":
     def _isexecutable(cmd):
+        win_exts = (".exe", ".com", ".bat", ".cmd")
         cmd = cmd.lower()
-        if os.path.isfile(cmd) and cmd.endswith((".exe", ".bat")):
+        if os.path.isfile(cmd) and cmd.endswith(win_exts):
             return True
-        for ext in ".exe", ".bat":
+        for ext in win_exts:
             if os.path.isfile(cmd + ext):
                 return True
         return False

From phd at phd.pp.ru Thu Jan 1 13:38:31 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Thu, 1 Jan 2009 15:38:31 +0300 Subject: [Python-Dev] patch suggestion for webbrowser In-Reply-To: <495CB5E2.5050309@gmail.com> References: <495CB5E2.5050309@gmail.com> Message-ID: <20090101123831.GA6464@phd.pp.ru> On Thu, Jan 01, 2009 at 02:24:02PM +0200, Yinon Ehrlich wrote: > enclosed a patch for webbrowser which will find applications/batch files > ending with .com or .cmd too. Please submit the patch to the issue tracker: http://bugs.python.org/ Oleg. 
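[Editor's note: Yinon's patched helper is easy to exercise outside webbrowser. The sketch below reproduces the patched logic; the module-level constant and the scratch-directory scaffolding are mine.]

```python
import os
import tempfile

WIN_EXTS = (".exe", ".com", ".bat", ".cmd")

def _isexecutable(cmd):
    # Mirrors the patched webbrowser helper: accept an explicit path with a
    # known extension, or probe each known extension in turn.
    cmd = cmd.lower()
    if os.path.isfile(cmd) and cmd.endswith(WIN_EXTS):
        return True
    for ext in WIN_EXTS:
        if os.path.isfile(cmd + ext):
            return True
    return False

# Tiny self-check.  Work with lowercase relative names inside a scratch
# directory so that cmd.lower() stays a valid path on case-sensitive
# filesystems too (the real helper only runs on Windows).
scratch = os.path.join(tempfile.gettempdir(), "wb_isexec_demo")
os.makedirs(scratch, exist_ok=True)
os.chdir(scratch)
open("tool.cmd", "w").close()

found_direct = _isexecutable("tool.cmd")   # explicit extension
found_probed = _isexecutable("tool")       # found via the .cmd probe
found_missing = _isexecutable("missing")   # no such file
print(found_direct, found_probed, found_missing)  # True True False
```

Note that the original two-extension version would have rejected "tool.cmd" in both cases; the patch only widens the tuple of extensions.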
-- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From g.brandl at gmx.net Thu Jan 1 15:20:30 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 01 Jan 2009 15:20:30 +0100 Subject: [Python-Dev] test_subprocess and sparc buildbots In-Reply-To: References: <495A9099.1030907@gmail.com> <2d75d7660812301941r3c133eaw7094609bd6bc51ce@mail.gmail.com> Message-ID: Alexandre Vassalotti schrieb:
> On Tue, Dec 30, 2008 at 10:41 PM, Daniel (ajax) Diniz wrote:
>> A reliable way to get that in a --with-pydebug build seems to be:
>>
>> ~/py3k$ ./python -c "import locale; locale.format_string(1,1)"
>> * ob
>> object  :
>> type    : tuple
>> refcount: 0
>> address : 0x825c76c
>> * op->_ob_prev->_ob_next
>> NULL
>> * op->_ob_next->_ob_prev
>> object  :
>> type    : tuple
>> refcount: 0
>> address : 0x825c76c
>> Fatal Python error: UNREF invalid object
>> TypeError: expected string or buffer
>> Aborted
>>
>
> Nice catch! I reduced your example to: "import _sre; _sre.compile(0,
> 0, [])". And, it doesn't seem to be an input validation problem with
> _sre. From what I saw, it's actually a bug in Py_TRACE_REFS's code.
> Now, it's getting interesting!
>
> It seems something is breaking the refchain. However, I don't know
> what is causing the problem exactly.

This only occurs --with-pydebug, I assume? It is the same basic problem as in http://bugs.python.org/issue3299, which I analysed some time ago. Simply speaking, it is caused by the object allocation and deallocation scheme that _sre chooses: if _compile's argument processing raises an error, PyObject_DEL is called which doesn't remove the object from the refchain. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. 
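[Editor's note: independent of the refchain bug it exposes, the crashing one-liner also misuses the API — locale.format_string() takes a format string as its first argument, not an int. A minimal correct call, pinning the C locale so the result is predictable across machines, looks like this.]

```python
import locale

# Pin the C locale so the formatted result does not depend on the
# machine's environment.
locale.setlocale(locale.LC_ALL, "C")

s = locale.format_string("%.2f", 1234.567)
print(s)  # -> 1234.57
```

With a real locale set (e.g. en_US.UTF-8) and grouping=True, the same call would insert thousands separators; passing a non-string format, as in the crashing example, is simply invalid input that a release build rejects with a TypeError.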
From doomster at knuut.de Thu Jan 1 16:30:37 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Thu, 1 Jan 2009 16:30:37 +0100 Subject: [Python-Dev] #ifdef __cplusplus? Message-ID: <200901011630.38196.doomster@knuut.de> Hi! There are lots of files that are framed with an extern "C" stanza when compiled under C++. Now, I appreciate that header files are made suitable for use with C++ with that, but WTF are those doing in .c files??? puzzled greetings Uli From ajaksu at gmail.com Thu Jan 1 19:17:06 2009 From: ajaksu at gmail.com (Daniel (ajax) Diniz) Date: Thu, 1 Jan 2009 16:17:06 -0200 Subject: [Python-Dev] test_subprocess and sparc buildbots In-Reply-To: References: <495A9099.1030907@gmail.com> <2d75d7660812301941r3c133eaw7094609bd6bc51ce@mail.gmail.com> Message-ID: <2d75d7660901011017x3c2357u3279d24828994565@mail.gmail.com> Georg Brandl wrote: > > This only occurs --with-pydebug, I assume? For me, on 32 bits Linux, yes, only --with-pydebug*. > It is the same basic problem as in http://bugs.python.org/issue3299, > which I analysed some time ago. Yes, I guess my 'catch' is exactly that. But it might be a red herring (sorry if that's the case): is the correlation with sparc and/or rev.67888 real? Regards, Daniel From ncoghlan at gmail.com Fri Jan 2 00:27:07 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 02 Jan 2009 09:27:07 +1000 Subject: [Python-Dev] test_subprocess and sparc buildbots In-Reply-To: <2d75d7660901011017x3c2357u3279d24828994565@mail.gmail.com> References: <495A9099.1030907@gmail.com> <2d75d7660812301941r3c133eaw7094609bd6bc51ce@mail.gmail.com> <2d75d7660901011017x3c2357u3279d24828994565@mail.gmail.com> Message-ID: <495D514B.3070703@gmail.com> Daniel (ajax) Diniz wrote: > Georg Brandl wrote: >> This only occurs --with-pydebug, I assume? > > For me, on 32 bits Linux, yes, only --with-pydebug*. > >> It is the same basic problem as in http://bugs.python.org/issue3299, >> which I analysed some time ago. 
> > Yes, I guess my 'catch' is exactly that. But it might be a red herring > (sorry if that's the case): is the correlation with sparc and/or > rev.67888 real? The correlation with sparc probably isn't real (that was just a subjective impression on my part based on the buildbot failure emails). When --with-pydebug is enabled, I can reproduce the fault (as posted by Alexandre) on 32-bit x86 Linux. There may be a specific issue with the klose buildbots, but the crash in the object deallocation is obscuring the original problem. I'll put further comment on the issue Georg linked. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Fri Jan 2 00:54:38 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 02 Jan 2009 09:54:38 +1000 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <200901011630.38196.doomster@knuut.de> References: <200901011630.38196.doomster@knuut.de> Message-ID: <495D57BE.6020904@gmail.com> Ulrich Eckhardt wrote: > Hi! > > There are lots of files that are framed with an extern "C" stanza when > compiled under C++. Now, I appreciate that header files are made suitable for > use with C++ with that, but WTF are those doing in .c files??? I believe it is to allow building the Python source as an embedded part of an external application that is built with a C++ compiler, even when that compiler isn't clever enough to realise that the 'extern "C"' should be implied by the '.c' file extension. I didn't add those lines though - I suggest doing an SVN annotate on some of the affected source files, and looking at the associated checkin comments. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From alexander.belopolsky at gmail.com Fri Jan 2 03:18:53 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 1 Jan 2009 21:18:53 -0500 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <495D57BE.6020904@gmail.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: The relevant revision is r45330: . """ Author: anthony.baxter Date: Thu Apr 13 02:06:09 2006 UTC (2 years, 8 months ago) Log Message: spread the extern "C" { } magic pixie dust around. Python itself builds now using a C++ compiler. Still lots and lots of errors in the modules built by setup.py, and a bunch of warnings from g++ in the core. """ Wrapping code inside .c files in extern "C" { } strikes me as a lazy solution. It is likely that g++ warnings that were silenced by that change were indicative of either functions not declared in headers missing "static" keyword or .c files not including relevant headers. If OP has energy to investigate this issue further, it would be interesting to revert r45330 and recompile python with CC=g++. On Thu, Jan 1, 2009 at 6:54 PM, Nick Coghlan wrote: > Ulrich Eckhardt wrote: >> Hi! >> >> There are lots of files that are framed with an extern "C" stanza when >> compiled under C++. Now, I appreciate that header files are made suitable for >> use with C++ with that, but WTF are those doing in .c files??? > > I believe it is to allow building the Python source as an embedded part > of an external application that is built with a C++ compiler, even when > that compiler isn't clever enough to realise that the 'extern "C"' > should be implied by the '.c' file extension. > > I didn't add those lines though - I suggest doing an SVN annotate on > some of the affected source files, and looking at the associated checkin > comments. > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > --------------------------------------------------------------- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexander.belopolsky%40gmail.com > From lists at cheimes.de Fri Jan 2 05:05:11 2009 From: lists at cheimes.de (Christian Heimes) Date: Fri, 02 Jan 2009 05:05:11 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: Alexander Belopolsky schrieb: > The relevant revision is r45330: > . > > """ > Author: anthony.baxter > Date: Thu Apr 13 02:06:09 2006 UTC (2 years, 8 months ago) > Log Message: > spread the extern "C" { } magic pixie dust around. Python itself builds now > using a C++ compiler. Still lots and lots of errors in the modules built by > setup.py, and a bunch of warnings from g++ in the core. > """ > > Wrapping code inside .c files in extern "C" { } strikes me as a lazy > solution. It is likely that g++ warnings that were silenced by that > change were indicative of either functions not declared in headers > missing "static" keyword or .c files not including relevant headers. > > If OP has energy to investigate this issue further, it would be > interesting to revert r45330 and recompile python with CC=g++. You might be interested in the bug report http://bugs.python.org/issue4665. Skip pointed out that Python 2.6 no longer compiles with a C++ compiler due to missing casts. C++ is more strict when it comes to implicit casts from (amongst others) void pointers. Martin is against the necessary changes. I don't really care about it. If somebody wants to tackle the issue I'm fine with sprinkling some type casts over the code. 
This topic is slightly related to a small feature request in http://bugs.python.org/issue4558. It adds a configure option --with-stdc89 and adds some small fixes to various modules. The --with-stdc89 option is intended for our build bots. In the past, non-ANSI-C89-conformant pieces of code like 'inline' or '// C++' comments were committed. The --with-stdc89 option adds a canonical way to detect such errors at compile time. Christian From alexander.belopolsky at gmail.com Fri Jan 2 06:17:24 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 2 Jan 2009 00:17:24 -0500 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: On Thu, Jan 1, 2009 at 11:05 PM, Christian Heimes wrote: .. > You might be interested in the bug report > http://bugs.python.org/issue4665. Skip pointed out that Python 2.6 no > longer compiles with a C++ compiler due to missing casts. C++ is more > strict when it comes to implicit casts from (amongst others) void > pointers. > Since that issue is closed, I have created http://bugs.python.org/issue4805 with a patch that restores C++ compilability of the core and a few standard modules. > Martin is against the necessary changes. I don't really care about it. > If somebody wants to tackle the issue I'm fine with sprinkling some type > casts over the code. > I've listed the following arguments in support of maintaining C++ compilability on the bug tracker: """ 1. It is hard to verify that header files are compilable if source code is not. With compilable source code, CC=g++ ./configure; make will supply an adequate test that does not require anything beyond a standard distribution. 2. Arguably, C++ compliant code is more consistent and less error prone. For example, "new" is a really bad choice for a variable name regardless of being a C++ keyword, especially when used instead of prevailing "res" for a result of a function producing a PyObject. 
Even clearly redundant explicit casts of malloc return values arguably improve readability by reminding the reader of the type of the object that is being allocated. 3. Compiling with C++ may reveal actual coding errors that otherwise go unnoticed. For example, use of undefined PyLong_BASE_TWODIGITS_TYPE in Objects/longobject.c. 4. Stricter type checking may promote use of specific types instead of void* which in turn may help optimizing compilers. 5. Once achieved, C++ compilability is not that hard to maintain, but restoring it with patches like this one is hard because it requires review of changes to many unrelated files. """ Note that this discussion has deviated from the OP's original question. While I argue that C++ compilability of source code is a good thing, I agree with the OP that wrapping non-header file code in extern "C" {} is bad practice. For example, the only reason Objects/fileobject.c does not compile without extern "C" {} is because fclose is declared inside PyFile_FromString as follows:

PyObject *
PyFile_FromString(char *name, char *mode)
{
    extern int fclose(FILE *);
    ..

I would rather #include <stdio.h> at the top of the file instead. From rhamph at gmail.com Fri Jan 2 06:58:18 2009 From: rhamph at gmail.com (Adam Olsen) Date: Thu, 1 Jan 2009 22:58:18 -0700 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: On Thu, Jan 1, 2009 at 10:17 PM, Alexander Belopolsky wrote: > On Thu, Jan 1, 2009 at 11:05 PM, Christian Heimes wrote: > .. >> You might be interested in the bug report >> http://bugs.python.org/issue4665. Skip pointed out that Python 2.6 no >> longer compiles with a C++ compiler due to missing casts. C++ is more >> strict when it comes to implicit casts from (amongst others) void >> pointers. >> > Since that issue is closed, I have created > http://bugs.python.org/issue4805 with a patch that restores C++ > compilability of the core and a few standard modules. 
As C++ has more specific ways of allocating memory, they impose this restriction to annoy you into using them. We won't be using them, and the extra casts are nothing but noise. Figure out a way to turn off the warnings instead. http://www.research.att.com/~bs/bs_faq2.html#void-ptr -- Adam Olsen, aka Rhamphoryncus From alexander.belopolsky at gmail.com Fri Jan 2 07:24:25 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 2 Jan 2009 01:24:25 -0500 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: On Fri, Jan 2, 2009 at 12:58 AM, Adam Olsen wrote: .. > > As C++ has more specific ways of allocating memory, they impose this > restriction to annoy you into using them. And so does the Python API: see PyMem_NEW and PyMem_RESIZE macros. > We won't be using them, and the extra casts are nothing but noise. A quick grep through the sources shows that these casts are not just noise:

Objects/stringobject.c: op = (PyStringObject *)PyObject_MALLOC(..
Objects/typeobject.c: remain = (int *)PyMem_MALLOC(..
Objects/unicodeobject.c: unicode->str = (Py_UNICODE*) PyObject_MALLOC(..

in many cases the type of object being allocated is not obvious from the l.h.s. name. Redundant cast improves readability in these cases. > Figure out a way to turn off the warnings instead. > These are not warnings: these are compile errors in C++. A compiler which allows to suppress them would not be a standard compliant C++ compiler. 
From cournape at gmail.com Fri Jan 2 07:31:37 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 2 Jan 2009 15:31:37 +0900 Subject: [Python-Dev] floatformat vs float_format In-Reply-To: <495C9048.80201@trueblade.com> References: <5b8d13220812312211s799f6860xe9f761c199eb2b19@mail.gmail.com> <495C9048.80201@trueblade.com> Message-ID: <5b8d13220901012231k2796a34p334fa25d7354f8cd@mail.gmail.com> On Thu, Jan 1, 2009 at 6:43 PM, Eric Smith wrote: > David Cournapeau wrote: >> >> Hi, >> >> In python 2.6, there have been some effort to make float formatting >> more consistent between platforms, which is nice. Unfortunately, there >> is still one corner case, for example on windows: >> >> print a -> print 'inf' >> print '%f' % a -> print '1.#INF' >> >> The difference being that in the second case, the formatting is done >> in floatformat.c (in stringobject.c), whereas in the first case, it is >> done in format_float (in floatobject.c). Shouldn't both functions be >> calling the same underlying implementation, to avoid those >> inconsistencies ? > > Yes, float formatting definitely needs some rationalization. > > While this isn't the exact issue discussed in > http://bugs.python.org/issue3382, it is related, and Windows is the reason I > had to back my fix out right before the freeze for 2.6 and 3.0. It's on my > list of things to fix. > > http://bugs.python.org/issue4482 might also be related, and I'll fix that, > too. > > If you could either add a comment to 3382 (with this test case) or open > another bug and assign it to me (eric.smith), I'd appreciate it. I did open a new bug, with a first not-so-good patch at http://bugs.python.org/issue4799 I am not so familiar with python core code organization: where should a function used by several core objects (here floatobjects.c and stringobject.c, and potentially complexobject.c as well later) go ? 
Since the float_format is private, is it OK to change its API (to return a potential error instead of no return value)? thanks, David From rhamph at gmail.com Fri Jan 2 08:26:25 2009 From: rhamph at gmail.com (Adam Olsen) Date: Fri, 2 Jan 2009 00:26:25 -0700 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: On Thu, Jan 1, 2009 at 11:24 PM, Alexander Belopolsky wrote: > On Fri, Jan 2, 2009 at 12:58 AM, Adam Olsen wrote: > .. >> >> As C++ has more specific ways of allocating memory, they impose this >> restriction to annoy you into using them. > > And so does Python API: see PyMem_NEW and PyMem_RESIZE macros. An optional second API provides convenience, not annoyance. Besides, they're not used much anymore. I am curious what their history is though. >> We won't be using them, and the extra casts are nothing but noise. > > A quick grep through the sources shows that these casts are not just noise: > > Objects/stringobject.c: op = (PyStringObject *)PyObject_MALLOC(.. > Objects/typeobject.c: remain = (int *)PyMem_MALLOC(.. > Objects/unicodeobject.c: unicode->str = (Py_UNICODE*) PyObject_MALLOC(.. > > in many cases the type of object being allocated is not obvious from > the l.h.s. name. Redundant cast improves readability in these cases. Python's malloc wrappers are pretty messy. Of your examples, only unicode->str isn't obvious what the result is, as the rest are local to that function. Even that is obvious when you glance at the line above, where the size is calculated using sizeof(Py_UNICODE). If you're concerned about correctness then you'd do better eliminating the redundant malloc wrappers and giving them names that directly match what they can be used for. If the size calculation bothers you, you could include the semantics of the PyMem_New() API, which includes the cast you want. 
I've no opposition to including casts in a single place like that (and it would catch errors even with C compilation). >> Figure out a way to turn off the warnings instead. >> > These are not warnings: these are compile errors in C++. A compiler > which allows to suppress them would not be a standard compliant C++ > compiler. So long as the major compilers allow it I don't particularly care. Compiling as C++ is too obscure of a feature to warrant uglifying the code. -- Adam Olsen, aka Rhamphoryncus From alexander.belopolsky at gmail.com Fri Jan 2 08:51:13 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 2 Jan 2009 02:51:13 -0500 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: On Fri, Jan 2, 2009 at 2:26 AM, Adam Olsen wrote: .. > Compiling as C++ is too obscure of a feature to warrant uglifying the > code. Malloc casts may be hard to defend, but most of python code base already has them, there is little to be gained from having these casts in some places and not others. There are other design flaws that a C++ compiler may help to catch. Two examples that come to mind are use of void* where a typed pointer is a better match and declaring external functions in .c file instead of including an appropriate header. In most cases keeping in mind compliance with C++ leads to better design, not to uglier code. With respect to the OP's issue, I have added another patch to http://bugs.python.org/issue4805 . From cournape at gmail.com Fri Jan 2 10:41:37 2009 From: cournape at gmail.com (David Cournapeau) Date: Fri, 2 Jan 2009 18:41:37 +0900 Subject: [Python-Dev] #ifdef __cplusplus? 
In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> On Fri, Jan 2, 2009 at 4:51 PM, Alexander Belopolsky wrote: > On Fri, Jan 2, 2009 at 2:26 AM, Adam Olsen wrote: > .. >> Compiling as C++ is too obscure of a feature to warrant uglifying the >> code. > > Malloc casts may be hard to defend, but most of python code base > already has them, there is little to be gained from having these casts > in some places and not others. There are other design flaws that a > C++ compiler may help to catch. Two examples that come to mind are > use of void* where a typed pointer is a better match and declaring > external functions in .c file instead of including an appropriate > header. In most cases keeping in mind compliance with C++ leads to > better design, not to uglier code. Can't those errors be found simply using appropriate warning flags in the C compiler ? C has stopped being a subset of C++ a long time ago David From doomster at knuut.de Fri Jan 2 10:49:57 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Fri, 2 Jan 2009 10:49:57 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> Message-ID: <200901021049.57493.doomster@knuut.de> On Friday 02 January 2009 06:17:24 Alexander Belopolsky wrote: > Since that issue is closed, I have created > http://bugs.python.org/issue4805 with a patch that restores C++ > compilability of the core and a few standard modules. Looking at the patch, I see three main changes there: 1. Remove the extern "C" stanza from the .c files. 2. Add explicit casts where necessary for C++. 3. Include headers instead of locally declaring functions and adding some declarations to headers. In particular the third part of above, I personally would definitely vote a +1. Since the first one is only necessary because things are declared sloppily, I'm also +1 on that one, i.e. 
#3 makes #1 possible. As far as the second part is concerned, I personally wouldn't bother getting Python to compile with a C++ compiler. I'd also like to point out that a missing declaration of e.g. malloc(), and the ensuing implicit declaration as "int malloc(int)" will be hidden when there's an explicit cast, which is why e.g. the comp.lang.c FAQ is against it. > Note that this discussion has deviated from OP's original question. > While I argue that C++ compilability of source code is good thing, I > agree with OP that wrapping non-header file code in extern "C" {} is > bad practice. For example, the only reason Objects/fileobject.c does > not compile without extern "C" {} is because fclose is declared inside > PyFile_FromString as follows: > > PyObject * > PyFile_FromString(char *name, char *mode) > { > extern int fclose(FILE *); > .. > > I would rather #include at the top of the file instead. This is change #3 above, and I'm all for it. I'm actually surprised that this code has actually escaped the peer review here. Uli From doomster at knuut.de Fri Jan 2 12:32:30 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Fri, 2 Jan 2009 12:32:30 +0100 Subject: [Python-Dev] ParseTuple question Message-ID: <200901021232.30572.doomster@knuut.de> Hi! I'm looking at NullImporter_init in import.c and especially at the call to PyArg_ParseTuple there. What I'm wondering is what that call will do when I call the function with a Unicode object. Will it convert the Unicode to a char string first, will it return the Unicode object in a certain (default) encoding, will it fail? I'm working on the MS Windows CE port, and I don't have stat() there. Also, I don't have GetFileAttributesA(char const*) there, so I need a wchar_t (UTF-16) string anyway. What would be the best way to get one? Thanks! 
Uli From nicolas at qlayer.com Fri Jan 2 09:25:34 2009 From: nicolas at qlayer.com (Nicolas Trangez) Date: Fri, 02 Jan 2009 09:25:34 +0100 Subject: [Python-Dev] opcode dispatch optimization In-Reply-To: References: <495BAF7B.5090405@cheimes.de> Message-ID: <1230884734.5833.39.camel@lambda.qlayer.com> On Wed, 2008-12-31 at 12:51 -0600, Jason Orendorff wrote: > On Wed, Dec 31, 2008 at 11:44 AM, Christian Heimes wrote: > > The patch makes use of a GCC feature where labels can be used as values: > > http://gcc.gnu.org/onlinedocs/gcc/Labels-as-Values.html . I didn't know > > about the feature and got confused by the unary && operator. > > Right. SpiderMonkey (Mozilla's JavaScript interpreter) does this, and > it was good for a similar win on platforms that use GCC. (It took me > a while to figure out why it was so much faster, so I think this patch > would be better with a few very specific comments!) > > SpiderMonkey calls this optimization "threaded code" too, but this > isn't the standard meaning of that term. See: > http://en.wikipedia.org/wiki/Threaded_code FWIW, it's also explained pretty well in the first pages of [1]. WebKit's SquirrelFish is direct-threaded as well [2]. Nicolas [1] http://citeseer.ist.psu.edu/cache/papers/cs/32018/http:zSzzSzwww.jilp.orgzSzvol5zSzv5paper12.pdf/ertl03structure.pdf [2] http://webkit.org/blog/189/announcing-squirrelfish/ From mal at egenix.com Fri Jan 2 16:43:57 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 02 Jan 2009 16:43:57 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <495D57BE.6020904@gmail.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: <495E363D.9050303@egenix.com> On 2009-01-02 00:54, Nick Coghlan wrote: > Ulrich Eckhardt wrote: >> Hi! >> >> There are lots of files that are framed with an extern "C" stanza when >> compiled under C++. 
Now, I appreciate that header files are made suitable for >> use with C++ with that, but WTF are those doing in .c files??? > > I believe it is to allow building the Python source as an embedded part > of an external application that is built with a C++ compiler, That's the reason, yes. Mixing .c and .cpp files in a compiler call will not always cause an implicit extern "C" to be used for the .c files. This causes problems for cases where you rely on the naming of the exported functions, e.g. for the module init function. C++ mangles all exported symbols. extern "C" disables this. AFAIR, early versions of MS VC++ used to compile everything as C++ file, regardless of the extension. > even when > that compiler isn't clever enough to realise that the 'extern "C"' > should be implied by the '.c' file extension. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 02 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From mal at egenix.com Fri Jan 2 17:05:19 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Fri, 02 Jan 2009 17:05:19 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> Message-ID: <495E3B3F.7090603@egenix.com> On 2009-01-02 08:26, Adam Olsen wrote: > On Thu, Jan 1, 2009 at 11:24 PM, Alexander Belopolsky > wrote: >> On Fri, Jan 2, 2009 at 12:58 AM, Adam Olsen wrote: >> .. 
>>> As C++ has more specific ways of allocating memory, they impose this >>> restriction to annoy you into using them. >> And so does Python API: see PyMem_NEW and PyMem_RESIZE macros. > > An optional second API provides convenience, not annoyance. Besides, > they're not used much anymore. I am curious what their history is > though. See Include/pymem.h and objimpl.h for details. PyMem_MALLOC() et al. provide an abstraction layer on top of the system's malloc() implementation. PyObject_MALLOC() et al. use the Python memory allocator instead. >>> We won't be using them, and the extra casts and nothing but noise. >> A quick grep through the sources shows that these casts are not just nose: >> >> Objects/stringobject.c: op = (PyStringObject *)PyObject_MALLOC(.. >> Objects/typeobject.c: remain = (int *)PyMem_MALLOC(.. >> Objects/unicodeobject.c: unicode->str = (Py_UNICODE*) PyObject_MALLOC(.. >> >> in many cases the type of object being allocated is not obvious from >> the l.h.s. name. Redundant cast improves readability in these cases. > > Python's malloc wrappers are pretty messy. Of your examples, only > unicode->str isn't obvious what the result is, as the rest are local > to that function. Even that is obvious when you glance at the line > above, where the size is calculated using sizeof(Py_UNICODE). > > If you're concerned about correctness then you'd do better eliminating > the redundant malloc wrappers and giving them names that directly > match what they can be used for. ??? Please read the comments in pymem.h and objimpl.h. > If the size calculation bothers you you could include the semantics of > the PyMem_New() API, which includes the cast you want. I've no > opposition to including casts in a single place like that (and it > would catch errors even with C compilation). You should always use PyMem_NEW() (capital letters), if you ever intend to benefit from the memory allocation debug facilities in the Python memory allocation interfaces. 
The difference between using the _NEW() macros and the _MALLOC() macros is that the former apply overflow checking for you. However, the added overhead only makes sense if these overflow checks haven't already been performed elsewhere. >>> Figure out a way to turn off the warnings instead. >>> >> These are not warnings: these are compile errors in C++. A compiler >> which allows suppressing them would not be a standard compliant C++ >> compiler. > > So long as the major compilers allow it I don't particularly care. > Compiling as C++ is too obscure a feature to warrant uglifying the > code. > > -- Marc-Andre Lemburg, eGenix.com From lists at cheimes.de Fri Jan 2 17:35:51 2009 From: lists at cheimes.de (Christian Heimes) Date: Fri, 02 Jan 2009 17:35:51 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> Message-ID: <495E4267.30307@cheimes.de> David Cournapeau schrieb: > Can't those errors be found simply using appropriate warning flags in > the C compiler ? C has stopped being a subset of C++ a long time ago Python's C code still follows the ANSI C89 standard. That fact puts 'long time ago' in a different perspective.
:) From curt at hagenlocher.org Fri Jan 2 17:42:51 2009 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Fri, 2 Jan 2009 08:42:51 -0800 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <495E4267.30307@cheimes.de> References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> Message-ID: On Fri, Jan 2, 2009 at 8:35 AM, Christian Heimes wrote: > > David Cournapeau schrieb: >> Can't those errors be found simply using appropriate warning flags in >> the C compiler ? C has stopped being a subset of C++ a long time ago > > Python's C code still follow the ANSI C89 standard. The fact puts 'long > time ago' in a different perspective. :) ...and many of us can still remember when Python's source was "K&R C" :) -- Curt Hagenlocher curt at hagenlocher.org From status at bugs.python.org Fri Jan 2 18:06:59 2009 From: status at bugs.python.org (Python tracker) Date: Fri, 2 Jan 2009 18:06:59 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20090102170659.7ACE07836B@psf.upfronthosting.co.za> ACTIVITY SUMMARY (12/26/08 - 01/02/09) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 2261 open (+32) / 14372 closed (+27) / 16633 total (+59) Open issues with patches: 771 Average duration of open issues: 708 days. Median duration of open issues: 2759 days. 
Open Issues Breakdown open 2237 (+32) pending 24 ( +0) Issues Created Or Reopened (60) _______________________________ [distutils] - error when processing the "--formats=tar" option 12/27/08 http://bugs.python.org/issue1885 reopened tarek patch Error in SocketServer UDP documentation 12/26/08 CLOSED http://bugs.python.org/issue4752 created shazow Faster opcode dispatch on gcc 12/26/08 http://bugs.python.org/issue4753 created pitrou patch winsound documentation (about stoping sounds) 12/26/08 CLOSED http://bugs.python.org/issue4754 created ocean-city patch Common path prefix 12/27/08 http://bugs.python.org/issue4755 created skip.montanaro patch, patch, needs review zipfile.is_zipfile: added support for file-like objects 12/27/08 CLOSED http://bugs.python.org/issue4756 created gagenellina patch reject unicode in zlib 12/27/08 http://bugs.python.org/issue4757 created haypo patch Python 3.0 internet documentation needs work 12/27/08 http://bugs.python.org/issue4758 created beazley bytearray.translate() should support None first argument 12/27/08 CLOSED http://bugs.python.org/issue4759 created georg.brandl patch cmp gone---What's new in 3.1 12/28/08 http://bugs.python.org/issue4760 created LambertDW create Python wrappers for openat() and others 12/28/08 http://bugs.python.org/issue4761 created pitrou PyFile_FromFd() doesn't set the file name 12/28/08 http://bugs.python.org/issue4762 created haypo PyErr_GivenExceptionMatches documentation out of date 12/28/08 CLOSED http://bugs.python.org/issue4763 created garcia open('existing_dir') -> IOError instance's attr filename is None 12/29/08 CLOSED http://bugs.python.org/issue4764 created zuo IDLE fails to "Delete Custom Key Set" properly 12/29/08 http://bugs.python.org/issue4765 created alex_fainshtein email documentation needs to be precise about strings/bytes 12/29/08 http://bugs.python.org/issue4766 created beazley email.mime incorrectly documented (or implemented) 12/29/08 CLOSED http://bugs.python.org/issue4767 created 
beazley email.generator.Generator object bytes/str crash - b64encode() b 12/29/08 http://bugs.python.org/issue4768 created beazley patch b64decode should accept strings or bytes 12/29/08 http://bugs.python.org/issue4769 created beazley binascii module, crazy error messages, unexpected behavior 12/29/08 http://bugs.python.org/issue4770 created beazley Bad examples in hashlib documentation 12/29/08 CLOSED http://bugs.python.org/issue4771 created beazley undesired switch fall-through in socketmodule.c 12/29/08 http://bugs.python.org/issue4772 created dontbugme patch, needs review HTTPMessage not documented and has inconsistent API across 2.6/3 12/29/08 http://bugs.python.org/issue4773 created beazley threding, bsddb and double free or corruption (fasttop) 12/29/08 http://bugs.python.org/issue4774 created aspineux Incorrect documentation - UTC time 12/30/08 http://bugs.python.org/issue4775 created luckmor distutils documentation 12/30/08 CLOSED http://bugs.python.org/issue4776 created steve21 nntplib - python 2.5 12/30/08 CLOSED http://bugs.python.org/issue4777 created morrowc patch Small typo in multiprocessing documentation 12/30/08 CLOSED http://bugs.python.org/issue4778 created hdima patch Can't import Tkinter 12/30/08 CLOSED http://bugs.python.org/issue4779 created pierre.lhoste Makefile.pre.in patch to run regen on OSX (framework build) 12/30/08 CLOSED http://bugs.python.org/issue4780 created ronaldoussoren patch, patch, needs review The function, Threading.Timer.run(), may be Inappropriate 12/30/08 http://bugs.python.org/issue4781 created gestapo21th json documentation missing load(), loads() 12/30/08 CLOSED http://bugs.python.org/issue4782 created beazley json documentation needs a BAWM (Big A** Warning Message) 12/30/08 http://bugs.python.org/issue4783 created beazley Mismatch in documentation for module "webbrowser" 12/30/08 CLOSED http://bugs.python.org/issue4784 created improper_smile json.JSONDecoder() strict argument undocumented and potentially 12/30/08 
http://bugs.python.org/issue4785 created beazley xml.etree.ElementTree module name in Python 3 12/30/08 CLOSED http://bugs.python.org/issue4786 created beazley Curses Unicode Support 12/30/08 http://bugs.python.org/issue4787 created atagar1 two bare "except" clauses are used in the ssl module 12/31/08 CLOSED http://bugs.python.org/issue4788 created giampaolo.rodola patch Documentation changes break existing URIs 12/31/08 CLOSED http://bugs.python.org/issue4789 created msapiro Optimization to heapq module 12/31/08 CLOSED http://bugs.python.org/issue4790 created nilton patch retrlines('LIST') and dir hang at end of listing in ftplib (pyth 12/31/08 http://bugs.python.org/issue4791 created chris.mahan PythonCmd in Modules/_tkinter.c should use the given "interp" pa 12/31/08 http://bugs.python.org/issue4792 created gpolo patch Glossary incorrectly describes a decorator as "merely syntactic 12/31/08 CLOSED http://bugs.python.org/issue4793 created kermode garbage collector blocks and takes worst-case linear time wrt nu 12/31/08 CLOSED http://bugs.python.org/issue4794 created darrenr inspect.isgeneratorfunction inconsistent with other inspect func 12/31/08 CLOSED http://bugs.python.org/issue4795 created stevenjd Decimal to receive from_float method 12/31/08 http://bugs.python.org/issue4796 created stevenjd test_fileio error (windows) 01/01/09 CLOSED http://bugs.python.org/issue4797 created ocean-city patch Update deprecation of 'new' module in PEP 4. 
01/01/09 CLOSED http://bugs.python.org/issue4798 created vshenoy patch handling inf/nan in '%f' 01/01/09 http://bugs.python.org/issue4799 created cdavid patch little inaccuracy in Py_ssize_t explanation 01/01/09 CLOSED http://bugs.python.org/issue4800 created exe _collections module fail to build on cygwin 01/01/09 CLOSED http://bugs.python.org/issue4801 created rpetrov detect_tkinter for cygwin 01/01/09 http://bugs.python.org/issue4802 created rpetrov patch Manas Thapliyal sent you a Friend Request on Yaari 01/02/09 CLOSED http://bugs.python.org/issue4803 created gravitywarrior1 Python on Windows disables all C runtime library assertions 01/02/09 http://bugs.python.org/issue4804 created mhammond Make python code compilable with a C++ compiler 01/02/09 http://bugs.python.org/issue4805 created belopolsky patch Function calls taking a generator as star argument can mask Type 01/02/09 http://bugs.python.org/issue4806 created hagen wrong wsprintf usage 01/02/09 http://bugs.python.org/issue4807 created eckhardt patch doc issue for threading module (name/daemon properties) 01/02/09 http://bugs.python.org/issue4808 created cgoldberg 2.5.4 release missing from python.org/downloads 01/02/09 http://bugs.python.org/issue4809 created rsyring timeit needs "official" '--' flag 01/02/09 http://bugs.python.org/issue4810 created skip.montanaro Issues Now Closed (95) ______________________ MacOS.GetCreatorAndType() and SetCreatorAndType() broken on inte 388 days http://bugs.python.org/issue1594 ronaldoussoren unittest.py modernization 311 days http://bugs.python.org/issue2153 pitrou patch IDLE doesn't work with Tk 8.5 under python 2.5 and older 250 days http://bugs.python.org/issue2693 gpolo patch 2to3 converts long(itude) argument to int 244 days http://bugs.python.org/issue2734 benjamin.peterson test_list on 64-bit platforms 210 days http://bugs.python.org/issue3055 ronaldoussoren ScrolledText can't be placed in a PanedWindow 182 days http://bugs.python.org/issue3248 gpolo patch 
fix_imports does not handle intra-package renames 182 days http://bugs.python.org/issue3260 benjamin.peterson ``make htmlview`` for docs fails on OS X 129 days http://bugs.python.org/issue3644 georg.brandl What's New in 2.6 - corrections 126 days http://bugs.python.org/issue3671 georg.brandl Cycles with some iterator are leaking. 129 days http://bugs.python.org/issue3680 pitrou patch tkColorChooser may fail if no color is selected 117 days http://bugs.python.org/issue3767 loewis patch python-2.6b3.msi and python-2.6b3.amd64.msi can't both be instal 110 days http://bugs.python.org/issue3833 loewis IDLE does not open too 85 days http://bugs.python.org/issue4049 loewis patch PyUnicode_DecodeUTF16(..., byteorder=0) gets it wrong on Mac OS 83 days http://bugs.python.org/issue4060 benjamin.peterson patch distutils.util.get_platform() is wrong for universal builds on m 82 days http://bugs.python.org/issue4064 ronaldoussoren patch, needs review Lib/lib2to3/*.pickle are shipped / modified in the build 81 days http://bugs.python.org/issue4096 loewis xml.etree.ElementTree does not read xml-text over page bonderies 60 days http://bugs.python.org/issue4100 georg.brandl patch Docs for BaseHandler.protocol_xxx methods are unclear 72 days http://bugs.python.org/issue4156 georg.brandl re module treats raw strings as normal strings 51 days http://bugs.python.org/issue4185 georg.brandl dis.findlinestarts is missing from dis.__all__ and from the onli 65 days http://bugs.python.org/issue4222 georg.brandl struct.pack('L', -1) 64 days http://bugs.python.org/issue4228 georg.brandl patch zipfile.py -> is_zipfile leaves file open when error 58 days http://bugs.python.org/issue4241 pitrou struct module: pack/unpack and byte order on x86_64 51 days http://bugs.python.org/issue4270 pitrou Wrong encoding in files saved from IDLE (3.0rc2 on Windows) 45 days http://bugs.python.org/issue4323 loewis patch Pickle tests fail w/o _pickle extension 36 days http://bugs.python.org/issue4374 
alexandre.vassalotti patch pypirc default is not at the right format 36 days http://bugs.python.org/issue4400 benjamin.peterson patch In Lib\tkinter\filedialog.py, class Directory define loss a"_" 37 days http://bugs.python.org/issue4406 benjamin.peterson patch IDLE string problem in Run/Run Module 35 days http://bugs.python.org/issue4410 loewis unittest - use contexts to assert exceptions 31 days http://bugs.python.org/issue4444 pitrou patch Module wsgiref is not python3000 ready (unicode issues) 28 days http://bugs.python.org/issue4522 pitrou patch askdirectory() in tkinter.filedialog is broken 27 days http://bugs.python.org/issue4539 gpolo IDLE shutdown if I run an edited file contains chinese 19 days http://bugs.python.org/issue4623 loewis distutils chokes on empty options arg in the setup function 17 days http://bugs.python.org/issue4646 tarek patch, patch bytes,join and bytearray.join not in manual; help for bytes.join 12 days http://bugs.python.org/issue4669 georg.brandl setup.py exception when db_setup_debug = True 16 days http://bugs.python.org/issue4670 amaury.forgeotdarc pydoc executes the code to be documented 12 days http://bugs.python.org/issue4671 georg.brandl a list comprehensions tests for pybench 11 days http://bugs.python.org/issue4677 pitrou patch 'b' formatter is actually unsigned char 10 days http://bugs.python.org/issue4682 georg.brandl Bad AF_PIPE address in multiprocessing documentation 9 days http://bugs.python.org/issue4695 georg.brandl Clarification needed for subprocess convenience functions in Pyt 9 days http://bugs.python.org/issue4697 georg.brandl range objects becomes hashable after attribute access 11 days http://bugs.python.org/issue4701 ncoghlan patch [PATCH] msvc9compiler raises IOError when no compiler found inst 11 days http://bugs.python.org/issue4702 tarek patch Python 3.0 halts on shutdown when settrace is set 5 days http://bugs.python.org/issue4716 pitrou Endianness and universal builds problems 6 days 
http://bugs.python.org/issue4728 ronaldoussoren cPickle corrupts high-unicode strings 4 days http://bugs.python.org/issue4730 alexandre.vassalotti suggest change to "Failed to find the necessary bits to build th 5 days http://bugs.python.org/issue4731 georg.brandl Object allocation stress leads to segfault on RHEL 8 days http://bugs.python.org/issue4732 pitrou [patch] Let users do help('@') and so on for confusing syntax co 3 days http://bugs.python.org/issue4739 georg.brandl patch pickle test for protocol 3 (HIGHEST_PROTOCOL in py3k) 2 days http://bugs.python.org/issue4740 ocean-city patch, easy 3.0 distutils byte-compiling -> Syntax error: unknown encoding: 8 days http://bugs.python.org/issue4742 georg.brandl patch intra-pkg multiple import (import local1, local2) not fixed 2 days http://bugs.python.org/issue4743 benjamin.peterson socket.send obscure error message 3 days http://bugs.python.org/issue4745 georg.brandl Misguiding wording 3.0 c-api reference 4 days http://bugs.python.org/issue4746 georg.brandl SyntaxError executing a script containing non-ASCII characters i 7 days http://bugs.python.org/issue4747 amaury.forgeotdarc patch yield expression vs lambda 1 days http://bugs.python.org/issue4748 benjamin.peterson patch, needs review Error in SocketServer UDP documentation 1 days http://bugs.python.org/issue4752 georg.brandl winsound documentation (about stoping sounds) 1 days http://bugs.python.org/issue4754 georg.brandl patch zipfile.is_zipfile: added support for file-like objects 0 days http://bugs.python.org/issue4756 pitrou patch bytearray.translate() should support None first argument 1 days http://bugs.python.org/issue4759 georg.brandl patch PyErr_GivenExceptionMatches documentation out of date 0 days http://bugs.python.org/issue4763 benjamin.peterson open('existing_dir') -> IOError instance's attr filename is None 1 days http://bugs.python.org/issue4764 benjamin.peterson email.mime incorrectly documented (or implemented) 3 days 
http://bugs.python.org/issue4767 georg.brandl Bad examples in hashlib documentation 0 days http://bugs.python.org/issue4771 benjamin.peterson distutils documentation 2 days http://bugs.python.org/issue4776 georg.brandl nntplib - python 2.5 1 days http://bugs.python.org/issue4777 haypo patch Small typo in multiprocessing documentation 0 days http://bugs.python.org/issue4778 georg.brandl patch Can't import Tkinter 0 days http://bugs.python.org/issue4779 georg.brandl Makefile.pre.in patch to run regen on OSX (framework build) 3 days http://bugs.python.org/issue4780 ronaldoussoren patch, patch, needs review json documentation missing load(), loads() 2 days http://bugs.python.org/issue4782 georg.brandl Mismatch in documentation for module "webbrowser" 2 days http://bugs.python.org/issue4784 georg.brandl xml.etree.ElementTree module name in Python 3 1 days http://bugs.python.org/issue4786 benjamin.peterson two bare "except" clauses are used in the ssl module 0 days http://bugs.python.org/issue4788 benjamin.peterson patch Documentation changes break existing URIs 2 days http://bugs.python.org/issue4789 georg.brandl Optimization to heapq module 1 days http://bugs.python.org/issue4790 nilton patch Glossary incorrectly describes a decorator as "merely syntactic 0 days http://bugs.python.org/issue4793 benjamin.peterson garbage collector blocks and takes worst-case linear time wrt nu 2 days http://bugs.python.org/issue4794 loewis inspect.isgeneratorfunction inconsistent with other inspect func 0 days http://bugs.python.org/issue4795 rhettinger test_fileio error (windows) 0 days http://bugs.python.org/issue4797 ocean-city patch Update deprecation of 'new' module in PEP 4. 
0 days http://bugs.python.org/issue4798 georg.brandl patch little inaccuracy in Py_ssize_t explanation 1 days http://bugs.python.org/issue4800 loewis _collections module fail to build on cygwin 0 days http://bugs.python.org/issue4801 amaury.forgeotdarc Manas Thapliyal sent you a Friend Request on Yaari 0 days http://bugs.python.org/issue4803 benjamin.peterson zappyfiles.py absent from MacPython binary 1967 days http://bugs.python.org/issue789545 ronaldoussoren -O breaks bundlebuilder --standalone 1877 days http://bugs.python.org/issue841800 ronaldoussoren bundlebuilder: an arg to disable zipping the code 1779 days http://bugs.python.org/issue900506 ronaldoussoren bundlebuilder: easily keep main routine in orig location 1779 days http://bugs.python.org/issue900514 ronaldoussoren plat-mac/videoreader.py not working on OS X 1778 days http://bugs.python.org/issue900949 ronaldoussoren BuildApplet needs to get more BuildApplication features 1771 days http://bugs.python.org/issue905737 ronaldoussoren Implement BundleBuilder GUI as plugin component 1688 days http://bugs.python.org/issue957652 ronaldoussoren method after() and afer_idle() are not thread save 1623 days http://bugs.python.org/issue995925 gpolo Problems importing packages in ZIP file 1594 days http://bugs.python.org/issue1011893 loewis os.times() is bogus 1547 days http://bugs.python.org/issue1040026 loewis patch macostools.mkdirs: not thread-safe 1409 days http://bugs.python.org/issue1149804 ronaldoussoren plat-mac videoreader.py auido format info 729 days http://bugs.python.org/issue1627952 ronaldoussoren EasyDialogs patch to remove aepack dependency 567 days http://bugs.python.org/issue1737832 ronaldoussoren patch Top Issues Most Discussed (10) ______________________________ 36 Faster opcode dispatch on gcc 7 days open http://bugs.python.org/issue4753 21 Patch for better thread support in hashlib 7 days open http://bugs.python.org/issue4751 17 shutil.rmtree is vulnerable to a symlink attack 31 days open 
http://bugs.python.org/issue4489 15 Is shared lib building broken on trunk for Mac OS X? 33 days pending http://bugs.python.org/issue4472 14 retrlines('LIST') and dir hang at end of listing in ftplib (pyt 2 days open http://bugs.python.org/issue4791 12 Curses Unicode Support 3 days open http://bugs.python.org/issue4787 12 2.6.1 breaks many applications that embed Python on Windows 27 days open http://bugs.python.org/issue4566 10 wsgiref package totally broken 11 days open http://bugs.python.org/issue4718 10 complex constructor doesn't accept string with nan and inf 322 days open http://bugs.python.org/issue2121 9 Patch to make zlib-objects better support threads 9 days open http://bugs.python.org/issue4738 From martin at v.loewis.de Fri Jan 2 18:29:40 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 02 Jan 2009 18:29:40 +0100 Subject: [Python-Dev] A wart which should have been repaired in 3.0? In-Reply-To: <20081231040951.36EB43A410E@sparrow.telecommunity.com> References: <18773.27523.297588.265405@montanaro-dyndns-org.local> <1230460141.6361.4.camel@localhost> <49575952.7070405@gmail.com> <4957DFC5.9030405@v.loewis.de> <18776.1376.724926.669345@montanaro-dyndns-org.local> <495808C6.4050304@v.loewis.de> <18776.2535.459306.987378@montanaro-dyndns-org.local> <1bc395c10812291349t149bf3fcm7926934cef9fd6be@mail.gmail.com> <18777.21289.504321.865439@montanaro-dyndns-org.local> <20081230010023.B46883A406C@sparrow.telecommunity.com> <79990c6b0812300136i323cb7eem76d2889262fd2175@mail.gmail.com> <20081230225006.B5D043A405E@sparrow.telecommunity.com> <18778.57257.227598.592245@montanaro-dyndns-org.local> <20081231040951.36EB43A410E@sparrow.telecommunity.com> Message-ID: <495E4F04.1050402@v.loewis.de> I propose a different solution to this commonprefix mess: eliminate the function altogether. It is apparently too confusing to get right. 
Regards, Martin From cournape at gmail.com Fri Jan 2 18:31:17 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 3 Jan 2009 02:31:17 +0900 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <495E4267.30307@cheimes.de> References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> Message-ID: <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> On Sat, Jan 3, 2009 at 1:35 AM, Christian Heimes wrote: > David Cournapeau schrieb: >> Can't those errors be found simply using appropriate warning flags in >> the C compiler ? C has stopped being a subset of C++ a long time ago > > Python's C code still follows the ANSI C89 standard. That fact puts 'long > time ago' in a different perspective. :) In my mind, C++ is largely responsible for C not being a subset of C++, so the type of C used by the codebase is not that important in that context. David From ronaldoussoren at mac.com Fri Jan 2 17:31:16 2009 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 02 Jan 2009 17:31:16 +0100 Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: <1afaf6160812301359r36d3b5b9k98afb21b517a69ce@mail.gmail.com> References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> <1afaf6160812301359r36d3b5b9k98afb21b517a69ce@mail.gmail.com> Message-ID: On 30 Dec, 2008, at 22:59, Benjamin Peterson wrote: >> Seems that there are two ways to go. >> >> Put back the Carbon and MacOS modules into 3.0. >> Use Python 2 to build the python 3 package. > > I've converted it back to 2.x for the time being. Eventually, I think > some 3.x bindings should be released. For the record: it was my intention that the build-installer.py script works with /usr/bin/python on OSX 10.4 or later.
That makes it possible to create a usable installer without first having to build a bootstrap version of python. Ronald From alexander.belopolsky at gmail.com Fri Jan 2 18:32:40 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 2 Jan 2009 12:32:40 -0500 Subject: [Python-Dev] A wart which should have been repaired in 3.0? In-Reply-To: <495E4F04.1050402@v.loewis.de> References: <18773.27523.297588.265405@montanaro-dyndns-org.local> <1bc395c10812291349t149bf3fcm7926934cef9fd6be@mail.gmail.com> <18777.21289.504321.865439@montanaro-dyndns-org.local> <20081230010023.B46883A406C@sparrow.telecommunity.com> <79990c6b0812300136i323cb7eem76d2889262fd2175@mail.gmail.com> <20081230225006.B5D043A405E@sparrow.telecommunity.com> <18778.57257.227598.592245@montanaro-dyndns-org.local> <20081231040951.36EB43A410E@sparrow.telecommunity.com> <495E4F04.1050402@v.loewis.de> Message-ID: On Fri, Jan 2, 2009 at 12:29 PM, "Martin v. L?wis" wrote: > I propose a different solution to this commonprefix mess: eliminate > the function altogether. It is apparently too confusing to get right. +1 From alexander.belopolsky at gmail.com Fri Jan 2 19:16:49 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 2 Jan 2009 13:16:49 -0500 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> Message-ID: First, by copying c++-sig, let me invite C++ experts to comment on this thread and the tracker issue: http://mail.python.org/pipermail/python-dev/2009-January/084685.html http://bugs.python.org/issue4805 My patch highlights several issues: 1. (OP's issue.) Do we need #ifdef __cplusplus wrappers in .c files? 2. Should malloc() calls be prefixed with explicit type casts? 3. 
Should use of void* pointers be discouraged when typed pointers will work? 4. Should exported symbols always be declared in headers or is it ok to just declare them as extern in .c files where they are used? 5. Should the use of C++ keywords such as "new" or "class" be discouraged? On #1, I find it silly to have #ifdef __cplusplus in the files that cannot be compiled with C++ in the first place. Even if the files are fixed to compile with C++, I have arguments that wrapping the entire file in extern "C" is overkill and hides design flaws. On #4, Marc-Andre Lemburg commented on the tracker: """ Moving declarations into header files is not really in line with the way Python developers use header files: We usually only put code into header files that is meant for public use. By putting declarations into the header files without additional warning, you implicitly document them and make them usable in non-interpreter code. """ I disagree. Declaring functions in .c files breaks dependency analysis and may lead to subtle bugs. AFAIK, Python's convention for global functions that are not meant to be used outside of the interpreter is the _Py prefix. If an extra protection is deemed necessary, non-public global symbols can be declared in separate header files that are not included by Python.h. On #3 and #5 arguments can be made that C++ compliant code is better irrespective of C++. #2 seems to be the most controversial, but explicit casts seem to be the current norm, and if perceived ugliness will discourage use of mallocs in favor of higher level APIs, it is probably a good thing. I believe restricting Python core code to the intersection of C89 and C++ will improve the overall self-consistency and portability of the code and will allow Python to be used in more systems, but since I don't use C++ myself, I will only argue for changes such as #4 that are independent of C++.
On Fri, Jan 2, 2009 at 12:31 PM, David Cournapeau wrote: > On Sat, Jan 3, 2009 at 1:35 AM, Christian Heimes wrote: >> David Cournapeau schrieb: >>> Can't those errors be found simply using appropriate warning flags in >>> the C compiler ? C has stopped being a subset of C++ a long time ago >> >> Python's C code still follows the ANSI C89 standard. That fact puts 'long >> time ago' in a different perspective. :) > > In my mind, C++ is largely responsible for C not being a subset of > C++, so the type of C used by the codebase is not that important in > that context. > > David > From matthieu.brucher at gmail.com Fri Jan 2 19:31:40 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 2 Jan 2009 19:31:40 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> Message-ID: Hi, 2009/1/2 Alexander Belopolsky : > First, by copying c++-sig, let me invite C++ experts to comment on > this thread and the tracker issue: > > http://mail.python.org/pipermail/python-dev/2009-January/084685.html > http://bugs.python.org/issue4805 > > My patch highlights several issues: > > 1. (OP's issue.) Do we need #ifdef __cplusplus wrappers in .c files? It seems to be a remnant of old compilers, perhaps not even supported anymore, thus it shouldn't be an issue anymore. > 2. Should malloc() calls be prefixed with explicit type casts? When I learnt C, I was always told to cast explicitly. > 3. Should use of void* pointers be discouraged when typed pointers will work? I think type checking is good in C. > 4. Should exported symbols always be declared in headers or is it ok > to just declare them as extern in .c files where they are used? I agree with you, declarations should be put in headers, to prevent a signature change for instance.
One place is always better than several places (you have to define them somewhere if you want to call them without a warning). If we don't do this, we are back to the Fortran 77 days. Besides, if those functions are not to be advertised, nothing prevents their inclusion in "private" headers that are not included by Python.h. This way, you can benefit from C checks, without the advertisement (leaving them in C files does not prevent their use, putting them in headers prevents crashes). With Visual Studio, you can say which functions you want to export in your library. This can be done with gcc as well, even if it is not the default behavior. > 5. Should the use of C++ keywords such as "new" or "class" be discouraged? As CPython will not be ported to C++, I don't see the point of not using "new" and "class" for variable names inside a function. Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From cournape at gmail.com Fri Jan 2 20:06:38 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 3 Jan 2009 04:06:38 +0900 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> Message-ID: <5b8d13220901021106n4548c4c5ja46c3759d1301f13@mail.gmail.com> Hi Matthieu, On Sat, Jan 3, 2009 at 3:31 AM, Matthieu Brucher wrote: > > When I learnt C, I was always told to cast explicitly. Maybe your professor was used to old C :) It is discouraged practice to cast malloc - the only rationale I can think of nowadays is when you have to compile the source with both a C and C++ compiler.
Otherwise, it is redundant at best (it was useful when malloc was defined as returning char*, and C did not allow automatic conversion from void* to other pointer types). David From matthieu.brucher at gmail.com Fri Jan 2 20:21:16 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 2 Jan 2009 20:21:16 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <5b8d13220901021106n4548c4c5ja46c3759d1301f13@mail.gmail.com> References: <200901011630.38196.doomster@knuut.de> <5b8d13220901020141t5160eaaata5e7bb6eda4f1254@mail.gmail.com> <495E4267.30307@cheimes.de> <5b8d13220901020931h70d9deb9n85bb24dd568765d7@mail.gmail.com> <5b8d13220901021106n4548c4c5ja46c3759d1301f13@mail.gmail.com> Message-ID: >> When I learnt C, I was always told to cast explicitly. > > Maybe your professor was used to old C :) That's more than likely :D Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From janssen at parc.com Fri Jan 2 20:51:46 2009 From: janssen at parc.com (Bill Janssen) Date: Fri, 2 Jan 2009 11:51:46 PST Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> Message-ID: <49713.1230925906@parc.com> Nicko van Someren wrote: > On 30 Dec 2008, at 13:45, Barry Scott wrote: > ... > > Since I've been building 3.0 for a while now I looked at the script. > > > > build-install.py seems to have been half converted to py 3.0. > > Going full 3.0 was not hard, but then there is the problem of > > the imports. > > > > Python 3.0 does not have the MacOS or Carbon modules. > > > > Seems that there are two ways to go. > > > > Put back the Carbon and MacOS modules into 3.0. > > Use Python 2 to build the python 3 package.
> > As far as I can tell the Carbon and MacOS modules are _only_ used in > the setIcon() function, which is used to give a pretty icon to the > python folder. Perhaps it might be better to have a fully Python 3 > build system and lose the prettiness for the time being. +1 Bill From martin at v.loewis.de Fri Jan 2 21:02:10 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 02 Jan 2009 21:02:10 +0100 Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: <49713.1230925906@parc.com> References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> <49713.1230925906@parc.com> Message-ID: <495E72C2.1020408@v.loewis.de> >> As far as I can tell the Carbon and MacOS modules are _only_ used in >> the setIcon() function, which is used to give a pretty icon to the >> python folder. Perhaps it might be better to have a fully Python 3 >> build system and lose the prettiness for the time being. > > +1 -1. I think it is a good choice that build-install.py is written in Python 2.x, and only relies on the system Python. For that matter, it could have been a shell script. That way, you don't have to build Python first in order to build it. Regards, Martin From ronaldoussoren at mac.com Fri Jan 2 21:03:24 2009 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Fri, 02 Jan 2009 21:03:24 +0100 Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: <49713.1230925906@parc.com> References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> <49713.1230925906@parc.com> Message-ID: <02290CDC-8C4D-4282-A9D1-22FDCCBC5DF1@mac.com> On 2 Jan, 2009, at 20:51, Bill Janssen wrote: > Nicko van Someren wrote: > >> On 30 Dec 2008, at 13:45, Barry Scott wrote: >> ... >>> Since I've been building 3.0 for a while now I looked at the script.
>>> >>> build-install.py seems to have been half converted to py 3.0. >>> Going full 3.0 was not hard, but then there is the problem of >>> the imports. >>> >>> Python 3.0 does not have the MacOS or Carbon modules. >>> >>> Seems that there are two ways to go. >>> >>> Put back the Carbon and MacOS modules into 3.0. >>> Use Python 2 to build the python 3 package. >> >> As far as I can tell the Carbon and MacOS modules are _only_ used in >> the setIcon() function, which is used to give a pretty icon to the >> python folder. Perhaps it might be better to have a fully Python 3 >> build system and lose the prettiness for the time being. > > +1 -1. The script exists to make it as easy as possible to build the installer. Converting it to python 3 makes that task harder. BTW. Until a couple of days ago the script had no chance of working at all because the Mac-specific Makefiles were broken. Ronald From benjamin at python.org Fri Jan 2 21:06:22 2009 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 2 Jan 2009 14:06:22 -0600 Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> Message-ID: <1afaf6160901021206h2d7a4610q95286855edf346cc@mail.gmail.com> On Wed, Dec 31, 2008 at 4:34 PM, Nicko van Someren wrote: > > As far as I can tell the Carbon and MacOS modules are _only_ used in the > setIcon() function, which is used to give a pretty icon to the python folder. > Perhaps it might be better to have a fully Python 3 build system and lose > the prettiness for the time being. -1 also There's little advantage in having it be in Python 3 at the moment. -- Regards, Benjamin From doomster at knuut.de Fri Jan 2 22:30:39 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Fri, 2 Jan 2009 22:30:39 +0100 Subject: [Python-Dev] PyOS_GetLastModificationTime Message-ID: <200901022230.39789.doomster@knuut.de> Hi!
The function PyOS_GetLastModificationTime() is documented in sys.rst as taking a "char*". However, in reality, it takes a "char*" and a "FILE*". Actually, it should take a "char const*", as it doesn't and shouldn't modify the path. Further, the normal version doesn't use the path at all; the RISCOS version in 2.7 does, however, and for a CE port it would be convenient to have that info, too. There is another issue, and that is that the function isn't declared anywhere, and I'm not sure where it should be declared either. Actually, I'm not sure if it is suitable/intended for public consumption, so I wonder if putting it into 'Include' would be right. Any suggestions on how to deal with that issue? Uli From janssen at parc.com Fri Jan 2 22:34:14 2009 From: janssen at parc.com (Bill Janssen) Date: Fri, 2 Jan 2009 13:34:14 PST Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: <495E72C2.1020408@v.loewis.de> References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> <49713.1230925906@parc.com> <495E72C2.1020408@v.loewis.de> Message-ID: <51979.1230932054@parc.com> Martin v. Löwis wrote: > >> As far as I can tell the Carbon and MacOS modules are _only_ used in > >> the setIcon() function, which is used to give a pretty icon to the > >> python folder. Perhaps it might be better to have a fully Python 3 > >> build system and lose the prettiness for the time being. > > > > +1 > > -1. I think it is a good choice that build-install.py is written in > Python 2.x, and only relies on the system Python. For that matter, > it could have been a shell script. That way, you don't have to build > Python first in order to build it. Can't argue with that. But dropping the "setIcon" function and its associated use of the Carbon and MacOS modules from that build script seems to me a good idea. Prepare for the future with *very* limited loss of functionality.
Bill From janssen at parc.com Fri Jan 2 22:43:22 2009 From: janssen at parc.com (Bill Janssen) Date: Fri, 2 Jan 2009 13:43:22 PST Subject: [Python-Dev] Python 3 - Mac Installer? In-Reply-To: <51979.1230932054@parc.com> References: <200812260855.49518.list@qtrac.plus.com> <1afaf6160812261530r4f72eca8nf7cc519683bcbb16@mail.gmail.com> <74A762C2-585A-479D-BA3E-E0658E212A16@barrys-emacs.org> <49713.1230925906@parc.com> <495E72C2.1020408@v.loewis.de> <51979.1230932054@parc.com> Message-ID: <52405.1230932602@parc.com> Bill Janssen wrote: > Martin v. Löwis wrote: > > > >> As far as I can tell the Carbon and MacOS modules are _only_ used in > > >> the setIcon() function, which is used to give a pretty icon to the > > >> python folder. Perhaps it might be better to have a fully Python 3 > > >> build system and lose the prettiness for the time being. > > > > > > +1 > > > > -1. I think it is a good choice that build-install.py is written in > > Python 2.x, and only relies on the system Python. For that matter, > > it could have been a shell script. That way, you don't have to build > > Python first in order to build it. > > Can't argue with that. But dropping the "setIcon" function and its > associated use of the Carbon and MacOS modules from that build script seems > to me a good idea. Prepare for the future with *very* limited loss of > functionality. At the very least, we could move the imports of Carbon and MacOS into the setIcon() function, which is called only once, and wrap a try-except around that call. Bill From martin at v.loewis.de Fri Jan 2 23:18:04 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 02 Jan 2009 23:18:04 +0100 Subject: [Python-Dev] PyOS_GetLastModificationTime In-Reply-To: <200901022230.39789.doomster@knuut.de> References: <200901022230.39789.doomster@knuut.de> Message-ID: <495E929C.7020307@v.loewis.de> > Any suggestions on how to deal with that issue?
Correct me if I'm wrong: it seems that the function isn't called anymore. So I propose to just remove it (and the file it lives in). Regards, Martin From doomster at knuut.de Sat Jan 3 00:36:24 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 3 Jan 2009 00:36:24 +0100 Subject: [Python-Dev] PyOS_GetLastModificationTime In-Reply-To: <200901022230.39789.doomster@knuut.de> References: <200901022230.39789.doomster@knuut.de> Message-ID: <200901030036.25122.doomster@knuut.de> On Friday 02 January 2009 22:30:39 Ulrich Eckhardt wrote: > The function PyOS_GetLastModificationTime() is documented in sys.rst as > taking a "char*". However, in reality, it takes a "char*" and a "FILE*". > Actually, it should take a "char const*", as it doesn't and shouldn't > modify the path. Further, the normal version doesn't use the path at all; > the RISCOS version in 2.7 does, however, and for a CE port it would be > convenient to have that info, too. > > There is another issue, and that is that the function isn't declared > anywhere, and I'm not sure where it should be declared either. Actually, > I'm not sure if it is suitable/intended for public consumption, so I wonder if > putting it into 'Include' would be right. Actually, the whole thing might be a non-issue. The point is that the function is not used anywhere. > Any suggestions on how to deal with that issue? ...of course that question remains. Remove all traces? I'd volunteer to make a patch, as it's one less function to port. ;) cheers Uli From jimjjewett at gmail.com Sat Jan 3 03:46:13 2009 From: jimjjewett at gmail.com (Jim Jewett) Date: Fri, 2 Jan 2009 21:46:13 -0500 Subject: [Python-Dev] #ifdef __cplusplus? Message-ID: Alexander Belopolsky wrote: > 4. Should exported symbols always be declared in headers, or is it ok > to just declare them as extern in the .c files where they are used? Is the concern that moving them to a header makes them part of the API?
In other words, does replacing PyObject * PyFile_FromString(char *name, char *mode) { extern int fclose(FILE *); ... } with #include <stdio.h> mean that <stdio.h> needs to be included from then on, even if PyFile_FromString stops relying upon it? -jJ From victor.stinner at haypocalc.com Sat Jan 3 03:53:21 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 3 Jan 2009 03:53:21 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> Message-ID: <200901030353.22251.victor.stinner@haypocalc.com> On Wednesday 31 December 2008 22:20:54, you wrote: > When it comes to commit privs in general, I am of the school that they > should be handed out carefully. I for one do not want to have to > babysit other committers to make sure that they did something > correctly. Last time I asked if anyone could help me in Python core if I had an svn account, and I got this answer: everybody will review the changes. Anyway, why do you fear problems? Did I already push buggy commits? I posted many patches on the Python bug tracker, and most of them required many revisions until they were perfect. But it doesn't mean that with an svn account, I will skip the bug tracker to write directly to svn as if it were my personal copy of Python!? > I also want people who have no agenda. It's okay to have an area you > care about, but that doesn't mean you should necessarily say "I will > only work on math, ever, even if something is staring me right in the > face!", etc. I wrote that I would like to improve Python quality by fuzzing, but I have already contributed to many different topics with patches on the bug tracker. > There is also dedication. I don't like giving commit privileges to > people who I don't think will definitely stick around. (...) I don't understand why this is a problem. > To start, your focus on security, for me at least, goes too far sometimes.
> I have disagreed with some of your decisions in the > name of security in the past and I am not quite ready to say that if > you committed something I wouldn't feel compelled to double-check it > to make sure you didn't go too far. I'm not sure that I understood correctly: does it mean that some of my issues were not reproducible in the real world (far from the real usage of Python)? It's true that some issues found by fuzzing are hard to reproduce (they require a prepared environment), but my goal is to kill all bugs :-) Even if a bug is hard to reproduce, it does exist, and that's why I think it should be fixed. Sorry if I misused the word "security", but I don't remember where I wrote that "this issue is very much related to security". Maybe in the imageop issues? -- About fuzzing: I'm still using my fuzzer Fusil on Python trunk and py3k, and I find fewer and fewer bugs ;-) Most of the time I rediscover bugs already reported to the tracker, but not fixed yet. So the fuzzing job is mostly done ;-) -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From lists at cheimes.de Sat Jan 3 03:56:07 2009 From: lists at cheimes.de (Christian Heimes) Date: Sat, 03 Jan 2009 03:56:07 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: References: Message-ID: Jim Jewett schrieb: > Is the concern that moving them to a header makes them part of the API? > > In other words, does replacing > > PyObject * > PyFile_FromString(char *name, char *mode) > { > extern int fclose(FILE *); > ... > } > > with > > #include <stdio.h> > > mean that <stdio.h> needs to be included from then on, even if > PyFile_FromString stops relying upon it? stdio.h is included by the Python.h header file anyway. There is simply no point in declaring fclose() a second time here. Christian From rhamph at gmail.com Sat Jan 3 04:15:20 2009 From: rhamph at gmail.com (Adam Olsen) Date: Fri, 2 Jan 2009 20:15:20 -0700 Subject: [Python-Dev] #ifdef __cplusplus?
In-Reply-To: <495E3B3F.7090603@egenix.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> Message-ID: On Fri, Jan 2, 2009 at 9:05 AM, M.-A. Lemburg wrote: > On 2009-01-02 08:26, Adam Olsen wrote: >> Python's malloc wrappers are pretty messy. Of your examples, only >> unicode->str isn't obvious about what the result is, as the rest are local >> to that function. Even that is obvious when you glance at the line >> above, where the size is calculated using sizeof(Py_UNICODE). >> >> If you're concerned about correctness then you'd do better to eliminate >> the redundant malloc wrappers and give them names that directly >> match what they can be used for. > > ??? Please read the comments in pymem.h and objimpl.h. I count 7 versions of malloc. Despite the names, none of them are specific to PyObjects. It's pretty much impossible to know what the different ones do without a great deal of experience. Only very specialized uses need to allocate PyObjects directly anyway. Normally PyObject_{New,NewVar,GC_New,GC_NewVar} are better. >> If the size calculation bothers you, you could include the semantics of >> the PyMem_New() API, which includes the cast you want. I've no >> opposition to including casts in a single place like that (and it >> would catch errors even with C compilation). > You should always use PyMem_NEW() (capital letters), if you ever > intend to benefit from the memory allocation debug facilities > in the Python memory allocation interfaces. I don't see why such debugging should require a full recompile, rather than having a hook inside PyMem_Malloc (or even providing a different PyMem_Malloc). > The difference between using the _NEW() macros and the _MALLOC() > macros is that the first applies overflow checking for you. However, > the added overhead only makes sense if these overflow checks haven't > already been applied elsewhere. They provide assertions. There's no overflow checking in release builds.
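The overflow checking under discussion amounts to guarding the count-times-size multiplication before calling malloc. A plain-C sketch of the idea (the function name is made up for illustration; this is not the actual PyMem implementation):

```c
#include <stdlib.h>
#include <stddef.h>
#include <assert.h>

/* Allocate n elements of elsize bytes each, failing cleanly when
 * n * elsize would wrap around SIZE_MAX instead of silently
 * allocating a too-small block. */
void *checked_alloc(size_t n, size_t elsize)
{
    if (elsize != 0 && n > (size_t)-1 / elsize)
        return NULL;          /* multiplication would overflow */
    return malloc(n * elsize);
}
```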
-- Adam Olsen, aka Rhamphoryncus From victor.stinner at haypocalc.com Sat Jan 3 04:29:11 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 3 Jan 2009 04:29:11 +0100 Subject: [Python-Dev] Bytes for the command line, process arguments and environment variables Message-ID: <200901030429.12003.victor.stinner@haypocalc.com> Hi, Python 3.0 is released and supports unicode everywhere, great! But as pointed out by different people, bytes are required on non-Windows OSes for backward compatibility. This email is just a summary of the many issues/email threads. Problems with Python 3.0: (1) Invalid unicode strings on the command line => some people want to get the command line arguments as bytes and so start even if non-decodable unicode strings are present on the command line => http://bugs.python.org/issue3023 (2) Non-decodable environment variables are skipped in os.environ => Create os.environb (or anything else) to get these variables as bytes (and be able to set new variables as bytes) => Read the email thread "Python-3.0, unicode, and os.environ" (December 2008) opened by Toshio Kuratomi (3) Support bytes for os.exec*() and subprocess.Popen(): process arguments and the environment variables => http://bugs.python.org/issue4035: my patch for os.exec*() => http://bugs.python.org/issue4036: my patch for subprocess.Popen() Command line ============ I like the current behaviour and I don't want to change it. Feel free to propose a solution to solve the issue ;-) Environment =========== I already proposed "os.environb", which will have an API similar to "os.environ" but with bytes. Relations between os.environb and os.environ: - for an undecodable variable value in os.environb, os.environ will raise a KeyError. Example with the utf8 charset and os.environb[b'PATH'] = b'\xff': path=os.environ['PATH'] will raise a KeyError to keep the current behaviour. - os.environ raises a UnicodeDecodeError if the key or value can not be encoded in the current charset.
Example with the ASCII charset: os.environ['PATH'] = '/home/hayp\xf4' - except for undecodable variable values in os.environb, os.environ and os.environb will be consistent. Example: deleting a variable in os.environb will also delete the key in os.environ. I think that most of these points (or all points) are ok for everyone (especially ok for Toshio Kuratomi and me :-)). Now I have to try to write an implementation of this, but it's complex, especially keeping os.environ and os.environb consistent! Processes ========= I proposed patches to fix non-Windows OSes, but Antoine Pitrou also wants bytes on Windows. Amaury wrote that it's possible using the ANSI version of the Windows API. I don't know this API, and so I can not contribute on this point. --- Rejected idea ============= Using a private Unicode block causes interoperability problems: - the block may already be used by other programs/libraries - 3rd party programs/libraries don't understand this block and may have problems displaying/processing the data (Is the idea really rejected? It has at least many problems) --- I don't have new solutions; it's just an email to restart the discussion about bytes ;-) Martin also asked for a PEP to change the posix module API to support bytes. -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From brett at python.org Sat Jan 3 04:30:14 2009 From: brett at python.org (Brett Cannon) Date: Fri, 2 Jan 2009 19:30:14 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901030353.22251.victor.stinner@haypocalc.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> Message-ID: On Fri, Jan 2, 2009 at 18:53, Victor Stinner wrote: > On Wednesday 31 December 2008 22:20:54, you wrote: >> When it comes to commit privs in general, I am of the school that they >> should be handed out carefully. I for one do not want to have to >> babysit other committers to make sure that they did something >> correctly.
> Last time I asked if anyone could help me in Python core if I had an svn > account, and I got this answer: everybody will review the changes. Anyway, > why do you fear problems? Did I already push buggy commits? I posted many > patches on the Python bug tracker, and most of them required many revisions until they > were perfect. But it doesn't mean that with an svn account, I will skip the > bug tracker to write directly to svn as if it were my personal copy of Python!? > I know people will review your commits, but I prefer that to be a safety precaution rather than something that is really required. And if you really do plan to continue to use the tracker heavily, that does help alleviate this worry. Once you have the ability to check in directly, the temptation to skip having to wait for reviews becomes rather strong. >> I also want people who have no agenda. It's okay to have an area you >> care about, but that doesn't mean you should necessarily say "I will >> only work on math, ever, even if something is staring me right in the >> face!", etc. > > I wrote that I would like to improve Python quality by fuzzing, but I have already > contributed to many different topics with patches on the bug tracker. > >> There is also dedication. I don't like giving commit privileges to >> people who I don't think will definitely stick around. (...) > > I don't understand why this is a problem. > Because when people contribute large bodies of code and then disappear, someone eventually has to step in and take up maintenance. That means more stuff for someone to have to keep up with and having to learn how the code works.
> > I'm not sure that I understood correctly: does it mean that some of my issues > were not reproducible in the real world (far from the real usage of Python)? > It's true that some issues found by fuzzing are hard to reproduce (they require a > prepared environment), but my goal is to kill all bugs :-) Even if a bug is > hard to reproduce, it does exist, and that's why I think it should > be fixed. > > Sorry if I misused the word "security", but I don't remember where I wrote > that "this issue is very much related to security". Maybe in the > imageop issues? > What I mean is that I remember an instance or two where you found something that seemed like a security issue but that is otherwise not critical, and you wanting to make a change for it that I disagreed with based on it being security-related. Python is basically secure, but we have never claimed we are perfect. And I understand wanting to squash all bugs, but there have to be priorities, and sometimes security is not the highest priority for really obscure stuff. Or at least that's my opinion. > -- > > About fuzzing: I'm still using my fuzzer Fusil on Python trunk and py3k, and I > find fewer and fewer bugs ;-) Most of the time I rediscover bugs already > reported to the tracker, but not fixed yet. So the fuzzing job is mostly > done ;-) > That's good to hear! As I said, you are on your way, but I personally just am not ready to give you a +1 for commit privs. As Raymond said and you admitted to above, your patches still go through several revisions. Having commit privileges means you can skip that step, and I am just not comfortable with that happening yet. But if another core developer or three are willing to say they will act as probationary officers on ALL of your commits for a while, and you at least initially continue to use the issue tracker until the people watching your commits are willing to say you don't need to be watched, then I am fine with you getting commit privs.
And I hope everyone realizes that they can speak up (publicly or privately) about *anyone* with regard to whether they think they need to lose their commit privileges for personal or coding reasons. I know it's tough to speak out publicly about someone and their coding abilities, which is why I am trying to rationalize this for Victor instead of just sitting quietly while he does or does not get responses from people on whether he should get commit privileges. Every time commit privileges are given out it is a leap of faith, and sometimes the leap comes up short. -Brett From ondrej at certik.cz Sat Jan 3 09:20:36 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Sat, 3 Jan 2009 00:20:36 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> Message-ID: <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> > And I hope everyone realizes that they can speak up (publicly or > privately) about *anyone* with regard to whether they think they need > to lose their commit privileges for personal or coding reasons. I know > it's tough to speak out publicly about someone and their coding > abilities, which is why I am trying to rationalize this for Victor > instead of just sitting quietly while he does or does not get > responses from people on whether he should get commit privileges. > Every time commit privileges are given out it is a leap of faith, and > sometimes the leap comes up short. I am not a core developer, but I was following this thread with interest. A little off-topic: it seems to me a flaw of svn that it encourages the model of two classes of developers, those with commit access (first class) and those without it (second class). Victor -- maybe you can try something like "git svn", so that you don't have to use the bugtracker and wait until someone reviews the patches?
If I understood correctly, your main point is that using the bug tracker for committing patches is very painful (I agree). But since patches should be reviewed anyway, imho just using better tools that make the workflow more fluent could solve the problem and remove the friction of deciding if someone is good enough to get svn commit access. Ondrej From martin at v.loewis.de Sat Jan 3 09:50:06 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 09:50:06 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> Message-ID: <495F26BE.9050105@v.loewis.de> > A little offtopic: it seems to me it is a flaw of svn, that it > encourages the model of two classes of developers, those with a commit > access (first class) and those without it (second class). Victor -- > maybe you can try something like "git svn", so that you don't have to > use the bugtracker and wait until someone reviews the patches? I don't think that this changes anything at all. You can commit to your DVCS all the time; however, doing so is futile if your patches don't get integrated.
Regards, Martin From martin at v.loewis.de Sat Jan 3 09:52:20 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 09:52:20 +0100 Subject: [Python-Dev] Bytes for the command line, process arguments and environment variables In-Reply-To: <200901030429.12003.victor.stinner@haypocalc.com> References: <200901030429.12003.victor.stinner@haypocalc.com> Message-ID: <495F2744.1010805@v.loewis.de> > I don't have new solutions, it's just an email to restart the discussion about > bytes ;-) Martin also asked for a PEP to change the posix module API to > support bytes. And I repeat this request: we don't need new discussions; we need a draft specification. Restarting discussion will just cause it to go in circles over and over again, until everybody is bored and quits, so that some time later somebody can restart the discussion, to go in circles again. Regards, Martin From cournape at gmail.com Sat Jan 3 10:08:48 2009 From: cournape at gmail.com (David Cournapeau) Date: Sat, 3 Jan 2009 18:08:48 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F26BE.9050105@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> Message-ID: <5b8d13220901030108xf6bf5fbl3cc6e1b58c5e7d39@mail.gmail.com> On Sat, Jan 3, 2009 at 5:50 PM, "Martin v. Löwis" wrote: >> A little offtopic: it seems to me it is a flaw of svn, that it >> encourages the model of two classes of developers, those with a commit >> access (first class) and those without it (second class). Victor -- >> maybe you can try something like "git svn", so that you don't have to >> use the bugtracker and wait until someone reviews the patches? > > I don't think that this changes anything at all. You can commit to > your DVCS all the time, however, doing so is futile if your patches > don't get integrated.
It does not make integration easier, but it certainly makes patch management easier for the patch writer. There are other means to manage patches on top of svn, but I find git-svn extremely useful. Actually, I use git-svn on top of svn repositories for projects I have write access to. git-svn is then a powerful way to manage patches (thanks to rebase). cheers, David From brett at python.org Sat Jan 3 10:16:44 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 01:16:44 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F26BE.9050105@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 00:50, "Martin v. Löwis" wrote: >> A little offtopic: it seems to me it is a flaw of svn, that it >> encourages the model of two classes of developers, those with a commit >> access (first class) and those without it (second class). Victor -- >> maybe you can try something like "git svn", so that you don't have to >> use the bugtracker and wait until someone reviews the patches? > > I don't think that this changes anything at all. You can commit to > your DVCS all the time, however, doing so is futile if your patches > don't get integrated. > > So you will always have two classes of developers: those with write > permissions to the trunk branch, and those without. >
> > I understood differently: I thought Victor's complaint is that some > of his patches stay uncommitted for a long time. Victor wants to > commit small changes without review. This is what I understood to be Victor's desire as well. Victor is prolific enough in writing patches for Python that he has been bitten by the fact that issues are triaged based on individual committer priorities which can lead to patches sitting on the tracker for a while. -Brett From solipsis at pitrou.net Sat Jan 3 11:20:27 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 10:20:27 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <5b8d13220901030108xf6bf5fbl3cc6e1b58c5e7d39@mail.gmail.com> Message-ID: David Cournapeau gmail.com> writes: > > It does not make integration easier, but it certainly makes patch > management easier for the patch writer. There are other means to > manage patch on top of svn, but I find git-svn extremely useful. > Actually, I use git-svn on top of svn repositories for projects I have > write access to. git-svn is then a powerful way to manage patches > (thanks to rebase). I also use Mercurial for my Python work. It's much more practical to evolve and maintain patches, even with commit access. Regards Antoine. From doomster at knuut.de Sat Jan 3 12:13:46 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 3 Jan 2009 12:13:46 +0100 Subject: [Python-Dev] PyOS_GetLastModificationTime In-Reply-To: <495E929C.7020307@v.loewis.de> References: <200901022230.39789.doomster@knuut.de> <495E929C.7020307@v.loewis.de> Message-ID: <200901031213.46694.doomster@knuut.de> On Friday 02 January 2009 23:18:04 Martin v. L?wis wrote: > Correct me if I'm wrong: it seems that the function isn't called > anymore. 
So I propose to just remove it (and the file it lives > in). Filed as issue #4817, including patch. Uli From steve at holdenweb.com Sat Jan 3 13:57:45 2009 From: steve at holdenweb.com (Steve Holden) Date: Sat, 03 Jan 2009 07:57:45 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> Message-ID: <495F60C9.7050401@holdenweb.com> Brett Cannon wrote: > On Sat, Jan 3, 2009 at 00:50, "Martin v. L?wis" wrote: >>> A little offtopic: it seems to me it is a flaw of svn, that it >>> encourages the model of two classes of developers, those with a commit >>> access (first class) and those without it (second class). Victor -- >>> maybe you can try something like "git svn", so that you don't have to >>> use the bugtracker and wait until someone reviews the patches? >> I don't think that this changes anything at all. You can commit to >> your DVCS all the time, however, doing so is futile if your patches >> don't get integrated. >> >> So you will always have two classes of developers: those with write >> permissions to the trunk branch, and those without. >> > > Nor will this ever change. I do not ever see us taking on the attitude > of a project like Pugs where they give commit privileges to anyone who > has ever written a single, good patch. > >> FWIW, you can already get the Python tree through bazaar and a few >> other DVCSs. >> > > And work is being done to eventually transition to a DVCS anyway, so > this will not be an issue forever. > >>> If I >>> understood correctly, your main point is that using bugtracker for >>> committing patches is very painful (I agree). >> I understood differently: I thought Victor's complaint is that some >> of his patches stay uncommitted for a long time. Victor wants to >> commit small changes without review. 
> > This is what I understood to be Victor's desire as well. Victor is > prolific enough in writing patches for Python that he has been bitten > by the fact that issues are triaged based on individual committer > priorities which can lead to patches sitting on the tracker for a > while. > I think it was courageous of Brett to tackle this issue head-on as he did, and of Victor to respond so positively to the various comments that have been made on this thread. It would be a pity to lose a developer who so obviously has Python's best interests at heart. As someone with a strong interest in Python's development, but whose interests lie outside direct development at the code face I would like to see some way where committed non-committers like Victor could be mentored through the initial stages of development, to the point where they can be trusted to make commits that don't need reversion. In the old days this would have happened by a process known in the British training world as "sitting with Nellie" - doing the work next to, and directly supervised by, someone who had been doing it a long time and who knew all the wrinkles of the job. Quite how to achieve a similar effect in today's distributed development environment is less obvious. Could we talk about this at PyCon (as well as continuing this thread to some sort of conclusion)? While the sprints are great for those who are already involved some activity specifically targeted at new developers would be a welcome addition, and might even help recruit them. 
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From victor.stinner at haypocalc.com Sat Jan 3 16:52:56 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 3 Jan 2009 16:52:56 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> Message-ID: <200901031652.56991.victor.stinner@haypocalc.com> > A little offtopic: it seems to me it is a flaw of svn, that it > encourages the model of two classes of developers, those with a commit > access (first class) and those without it (second class). Yes, that's the problem. Is it not possible to have finer-grained permissions (instead of a boolean permission: commit or no commit)? E.g. give commit access, but only for a file or a directory? It looks like Tarek Ziade is now allowed to commit, but only on distutils. I like such permissions because nobody knows the whole Python project; it's too huge for a single brain ;-) > your main point is that using bugtracker for committing patches > is very painful (I agree) No, my point is that some patches stay too long in the tracker. Git, Mercurial or anything else are a little bit better than the tracker (the patches can be synchronized with upstream), but the goal is to be part of the upstream code base. A distributed VCS is useful for testing huge changes. Performance improvements on integers (patches to optimize multiplication, use base 2^30 instead of 2^15, etc.) would benefit from such tools, because cooperative work is easier.
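To make the base-2^30 point concrete: CPython stored long integers as arrays of 15-bit digits, and switching to 30-bit digits roughly halves the number of digits (and thus loop iterations) per big-number operation. A quick illustration of the arithmetic (the helper below is a sketch, not CPython code):

```python
def digits_needed(n: int, digit_bits: int) -> int:
    """Digits needed to store n in base 2**digit_bits (ceiling division)."""
    return max(1, -(-n.bit_length() // digit_bits))

n = 2**1000 - 1               # a 1000-bit integer
print(digits_needed(n, 15))   # 67 digits with 15-bit "limbs"
print(digits_needed(n, 30))   # 34 digits with 30-bit "limbs"
```

Halving the digit count is why the 2^30 patches promised a real speedup for multiplication and friends.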
-- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From victor.stinner at haypocalc.com Sat Jan 3 16:56:34 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 3 Jan 2009 16:56:34 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F26BE.9050105@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> Message-ID: <200901031656.34335.victor.stinner@haypocalc.com> > > If I understood correctly, your main point is that using bugtracker > > for committing patches is very painful (I agree). > > I understood differently: I thought Victor's complaint is that some > of his patches stay uncommitted for a long time. Not only *my* patches. I spoke about my issues because I know them better than the other ones ;-) > Victor wants to commit small changes without review. That's true. Opening an issue for trivial changes takes too much time. -- I hope that the discussion of my svn account will benefit the whole Python process ;-) -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From doomster at knuut.de Sat Jan 3 17:13:05 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 3 Jan 2009 17:13:05 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031652.56991.victor.stinner@haypocalc.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> Message-ID: <200901031713.05985.doomster@knuut.de> On Saturday 03 January 2009 16:52:56 Victor Stinner wrote: > > A little offtopic: it seems to me it is a flaw of svn, that it > > encourages the model of two classes of developers, those with a commit > > access (first class) and those without it (second class). > > Yes, that's the problem.
Is it not possible to have finer permission > (instead of boolean permission: commit or not commit)? Eg. give commit > access but only for a file or a directory? Yes it is possible. As far as your goal is concerned, couldn't you live with a branch where you develop the feature? That way, people could see your code and e.g. switch their working copies there for testing or even merge it into trunk some day. SVN actually supports that rather well, it would be guaranteed to not affect the quality of the releases negatively and saying "please merge r1234 from foo into trunk" is much easier than downloading and applying a patch, which doesn't even cover all possible changes that SVN does. Actually, I'd like such a branch, too, where I could move much quicker and in particular with the backing of a VCS to port Python to MS Windows CE. Currently, I'm tempted to pull the code into a private repository, which causes problems when I want to push it back upstream. Uli From martin at v.loewis.de Sat Jan 3 17:17:13 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 17:17:13 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031652.56991.victor.stinner@haypocalc.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> Message-ID: <495F8F89.6090903@v.loewis.de> > Yes, that's the problem. Is it not possible to have finer permission (instead > of boolean permission: commit or not commit)? Eg. give commit access but only > for a file or a directory? It looks like Tarek Ziade is now allowed to > commit, but only on distutils. I like such permission because nobody knows > the whole Python project, it's too huge for a single brain ;-) I like them, too - that's why I'm generally not opposed to handing out such privileges fairly generously. 
In our experience, you don't need to enforce such a restriction technically - the social enforcement (you lose access if you are changing things you were not supposed to change) is sufficient. In fact, the Python repository also hosts Stackless Python, so technically, Python committers can commit to stackless also, and vice versa - but nobody does, of course. There are many other people who commit only to specific files, or only specific kinds of changes; see Misc/developers.txt for a (possibly incomplete) list. Regards, Martin From solipsis at pitrou.net Sat Jan 3 17:21:16 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 16:21:16 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> Message-ID: Ulrich Eckhardt knuut.de> writes: > > saying "please merge r1234 from > foo into trunk" is much easier than downloading and applying a patch, which > doesn't even cover all possible changes that SVN does. I don't know about others, but downloading and applying a patch doesn't bother me (it's actually much quicker than doing a whole new SVN checkout). What takes time and effort is to actually check and review the patch (or branch, or whatever). > Actually, I'd like such a branch, too, where I could move much quicker and > in > particular with the backing of a VCS to port Python to MS Windows CE. > Currently, I'm tempted to pull the code into a private repository, which > causes problems when I want to push it back upstream. You could clone one of the existing DCVS mirrors and open a branch on a public hosting service (bitbucket.org, launchpad, etc.). The annoying thing, though, is that it requires your co-workers to learn the DVCS in question. Regards Antoine. 
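The clone-and-branch workflow Antoine describes (and the rebase-based patch management David mentioned earlier in the thread) can be sketched with plain git. The script below stands in for the real mirror with a throwaway local repository; all paths, branch names, and file names are made up:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# stand-in for a cloned mirror of trunk
git init -q repo && cd repo
git checkout -qb trunk
git config user.name "Dev" && git config user.email "dev@example.org"
echo "base" > module.py
git add module.py && git commit -qm "trunk: initial state"

# the contributor keeps a pending patch on its own branch...
git checkout -qb my-feature
echo "fix" >> module.py
git commit -qam "my pending patch"

# ...while trunk moves on underneath it...
git checkout -q trunk
echo "other" > other.py
git add other.py && git commit -qm "trunk: unrelated change"

# ...and periodically replays the patch on top of the new trunk
git checkout -q my-feature
git rebase -q trunk
git log --oneline trunk..my-feature   # the pending patch, kept current
```

The rebase step is what keeps a long-lived patch applying cleanly against a moving trunk, which is exactly the maintenance burden the tracker workflow pushes onto contributors.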
From cournape at gmail.com Sat Jan 3 17:30:39 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 4 Jan 2009 01:30:39 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> Message-ID: <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> On Sun, Jan 4, 2009 at 1:21 AM, Antoine Pitrou wrote: > > You could clone one of the existing DCVS mirrors and open a branch on a public > hosting service (bitbucket.org, launchpad, etc.). The annoying thing, though, > is that it requires your co-workers to learn the DVCS in question. The problem is pushing back to upstream; I don't know much about mercurial, but I would advise the op to take a look at git-svn or similar tools with other DVCS (each tool has its own way). It really is the best IMHO to track a project upstream, with the option of pushing back - again, it is so simple (took me ~ 1 hour to get around without any previous encounter with git and I am no genius) and useful that it is my method of choice to commit to projects I am developing for and which use svn. cheers, David From martin at v.loewis.de Sat Jan 3 17:36:04 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 17:36:04 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031713.05985.doomster@knuut.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> Message-ID: <495F93F4.6080007@v.loewis.de> > As far as your goal is concerned, couldn't you live with a branch where you > develop the feature? That still doesn't help the change getting merged into the trunk. 
Whether you store it in a patch file, in a DVCS, or in the very same VCS-but-different-branch - these are all minor details, which may affect the efficiency of producing and technically integrating the patch. It doesn't help to the least in speeding up reviews of the patch, or reduces the amount of work necessary to do a review. For that, all that the contributor can do is to make the contribution review-friendly (make the patch technically complete, separate independent issues, provide a concise, but explicit description, and possibly a guide through the patch); I think Victor could still improve his patches in this respect (and I do understand the difficulties of the language barrier). For me, as a reviewer, a patch is either obvious, correct, and complete at first sight - or it is difficult. I can review only one difficult patch per week (currently), and that can easily cause patches that I need to review to stay in the tracker many months. Of course, there are more active reviewers, so the acceptance rate is higher; OTOH, many committers don't review at all, or shy away from difficult patches. > Actually, I'd like such a branch, too, where I could move much quicker and in > particular with the backing of a VCS to port Python to MS Windows CE. > Currently, I'm tempted to pull the code into a private repository, which > causes problems when I want to push it back upstream. [I guess you aren't happy with the DVCS systems, such as bazaar, which supposedly work perfect in exactly this case. I won't blame you for that, but still, consider trying out one of them for this project] We can setup such a branch, unless you reconsider and try bazaar first. There wouldn't be any pushing it back upstream, though - you would still need to go through the tracker for all changes. The only advantage I can see is that it simplifies repeated merging of the trunk into your branch. 
Regards, Martin From martin at v.loewis.de Sat Jan 3 17:40:44 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 17:40:44 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> Message-ID: <495F950C.5010506@v.loewis.de> > I don't know about others, but downloading and applying a patch doesn't > bother me (it's actually much quicker than doing a whole new SVN checkout). Same here. In fact, when I had to backport patches before the usage of svnmerge.py, I would always apply the original patch multiple times, rather than trying to use svn merge. Integrating patches is only tedious if they don't apply cleanly anymore, in which case I usually ask the contributor to regenerate it (which they often can easily do as they had been tracking trunk in their own sandboxes). > You could clone one of the existing DCVS mirrors and open a branch on a public > hosting service (bitbucket.org, launchpad, etc.). The annoying thing, though, > is that it requires your co-workers to learn the DVCS in question. We (as his co-workers) would continue to request patches. So the DVCS better has a convenient way to generate a patch (even from multiple DVCS commits). I think that's what (git) people call "feature branch": a branch with the sole purpose of developing a single patch. 
Regards, Martin From martin at v.loewis.de Sat Jan 3 17:46:39 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 17:46:39 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> Message-ID: <495F966F.2000109@v.loewis.de> [I don't want to get into another DVCS flamewar, but I just can't let this go uncommented :-] > (took me ~ 1 hour to get around > without any previous encounter with git and I am no genius) I'm no genius, either, and I think that git is the most horrible VCS that I ever had to use. The error messages are incomprehensible, and it fails to do stuff that should be trivial (and indeed is trivial in subversion). On this project, I spent 40% of the time fighting git, 40% of the time fighting Perl, and was productive on 20% of the time. IOW, I find the learning curve for git extremely steep. Regards, Martin From doomster at knuut.de Sat Jan 3 17:54:29 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 3 Jan 2009 17:54:29 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> Message-ID: <200901031754.30081.doomster@knuut.de> On Saturday 03 January 2009 17:21:16 Antoine Pitrou wrote: > Ulrich Eckhardt knuut.de> writes: > > saying "please merge r1234 from > > foo into trunk" is much easier than downloading and applying a patch, > > which doesn't even cover all possible changes that SVN does. > > I don't know about others, but downloading and applying a patch doesn't > bother me (it's actually much quicker than doing a whole new SVN checkout). 1. 
I think that a patch cannot capture, e.g., a moved, renamed, or deleted file. Further, it cannot handle things like the executable bit or similar metadata that SVN otherwise manages. That is what makes a patch only partially suitable. 2. You don't check out anew. You simply switch ("svn switch") your existing working copy to the branch, which just pulls the differences and merges them into your existing working copy. Or, you could merge the changes on a branch ("svn merge") into your working copy. > What takes time and effort is to actually check and review the patch (or > branch, or whatever). Yes, full ACK. Uli From victor.stinner at haypocalc.com Sat Jan 3 17:59:17 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 3 Jan 2009 17:59:17 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F93F4.6080007@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: <200901031759.17501.victor.stinner@haypocalc.com> > For me, as a reviewer, a patch is either obvious, > correct, and complete at first sight - or it is difficult. I can review > only one difficult patch per week (currently), and that can easily cause > patches that I need to review to stay in the tracker many months. The problem for the author of the patch is that he/she doesn't know that. You may drop a comment like "please explain how to reproduce the problem / why the patch is needed / how your patch fixes the issue" or "the patch is complex, can't you write a smaller patch or try to fix it in a different way"? If Martin doesn't understand the patch, who will understand it?
:-) -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From solipsis at pitrou.net Sat Jan 3 18:02:18 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 17:02:18 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <200901031754.30081.doomster@knuut.de> Message-ID: Ulrich Eckhardt knuut.de> writes: > > 1. I think that a patch can not e.g. capture a moved, renamed or deleted file. > Further, it can not handle e.g. things like the executable bit or similar > things that SVN otherwise does manage. That is what makes a patch only > partially suitable. You are right, I had forgotten about that. regards Antoine. From martin at v.loewis.de Sat Jan 3 18:10:27 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 18:10:27 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031754.30081.doomster@knuut.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <200901031754.30081.doomster@knuut.de> Message-ID: <495F9C03.4000708@v.loewis.de> > 1. I think that a patch can not e.g. capture a moved, renamed or deleted file. Correct. However, this rarely happened. Contributors are not supposed to rename files, and they can indicate deletions and additions in plain English (I typically request a tarfile for additions). > Further, it can not handle e.g. things like the executable bit or similar > things that SVN otherwise does manage. That is what makes a patch only > partially suitable. Probably correct; this isn't a problem in practice, either. In fact, it is better if properties come from the subversion installation of the committer, rather than from the contributor, as the committer is supposed to have his autoprops set correctly. I do think that "svn diff" will record property changes, and that may include svn:executable.
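[For reference, `svn diff` does emit property changes as a separate pseudo-hunk, roughly of this shape -- illustrative output, not captured from a real checkout, and the exact layout varies between Subversion versions:

```
Property changes on: Tools/example.sh
___________________________________________________________________
Added: svn:executable
   + *
```

Classic `patch`-family tools skip such hunks, which is why the executable bit does not survive a hand-applied contributor diff.]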
I don't know which patch tool would interpret them, though. Regards, Martin From cournape at gmail.com Sat Jan 3 18:24:36 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 4 Jan 2009 02:24:36 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F966F.2000109@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> <495F966F.2000109@v.loewis.de> Message-ID: <5b8d13220901030924k4df372fetae6f36fdcdd92a7c@mail.gmail.com> On Sun, Jan 4, 2009 at 1:46 AM, "Martin v. L?wis" wrote: > [I don't want to get into another DVCS flamewar, but I just > can't let this go uncommented :-] I am sorry if that sounded like a flamewar, that was not my intention: I just wanted to point out that there are solution that the op can implement all by himself to make his life easier - or not. > IOW, I find the learning curve for git extremely steep. But a steep learning curve means that little input gives great output, no ? :) David From ondrej at certik.cz Sat Jan 3 18:27:55 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Sat, 3 Jan 2009 09:27:55 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F966F.2000109@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> <495F966F.2000109@v.loewis.de> Message-ID: <85b5c3130901030927q6668b0b5g97098d07136d1586@mail.gmail.com> On Sat, Jan 3, 2009 at 8:46 AM, "Martin v. 
L?wis" wrote: > [I don't want to get into another DVCS flamewar, but I just > can't let this go uncommented :-] >> (took me ~ 1 hour to get around >> without any previous encounter with git and I am no genius) > > I'm no genius, either, and I think that git is the most horrible > VCS that I ever had to use. The error messages are incomprehensible, > and it fails to do stuff that should be trivial (and indeed is trivial > in subversion). On this project, I spent 40% of the time fighting > git, 40% of the time fighting Perl, and was productive on 20% of the > time. IOW, I find the learning curve for git extremely steep. That is interesting, I had the opposite experience. :) http://ondrejcertik.blogspot.com/2008/12/experience-with-git-after-4-months.html Ondrej From martin at v.loewis.de Sat Jan 3 18:31:45 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 03 Jan 2009 18:31:45 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <5b8d13220901030924k4df372fetae6f36fdcdd92a7c@mail.gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <5b8d13220901030830l271dd62y22498a71d7e8fef3@mail.gmail.com> <495F966F.2000109@v.loewis.de> <5b8d13220901030924k4df372fetae6f36fdcdd92a7c@mail.gmail.com> Message-ID: <495FA101.5020406@v.loewis.de> David Cournapeau wrote: > On Sun, Jan 4, 2009 at 1:46 AM, "Martin v. L?wis" wrote: >> [I don't want to get into another DVCS flamewar, but I just >> can't let this go uncommented :-] > > I am sorry if that sounded like a flamewar, that was not my intention: Oops - maybe the smiley was not obvious enough. I didn't think of your message as flaming - rather of mine. Sorry for the misunderstanding. > I just wanted to point out that there are solution that the op can > implement all by himself to make his life easier - or not. Right. 
I believe the bazaar branches that we have set up can do the same thing, just as well. >> IOW, I find the learning curve for git extremely steep. > > But a steep learning curve means that little input gives great output, no ? :) :-) I never understood that picture well. In any case (to stay in the picture), the problem is not only that it is steep (and thus stressing to climb), but also that it (the amount of stuff you need to know to use git) is very large. Regards, Martin From g.brandl at gmx.net Sat Jan 3 18:42:43 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 03 Jan 2009 18:42:43 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031656.34335.victor.stinner@haypocalc.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <200901031656.34335.victor.stinner@haypocalc.com> Message-ID: Victor Stinner schrieb: >> > If I understood correctly, your main point is that using bugtracker >> > for committing patches is very painful (I agree). >> >> I understood differently: I thought Victor's complaint is that some >> of his patches stay uncommitted for a long time. > > Not only *my* patches. I spoke about my issues because I know them better than > the other ones ;-) > >> Victor wants to commit small changes without review. > > That's true. Open an issue for trivial changes takes to much time. I've become cautious of labeling patches as "trivial". Some may really be, e.g. typos and the like, but those are almost always dealt with quickly. Others may seem trivial, as in "add that line here", but there is often a problem associated -- like the question of portability, or backwards compatibility. In a few cases, we can see that, as committing the fix leads to some complaint and it is backed out again. But there might be others where the problem is overlooked and only noticed after some time in a more public fashion.
> -- > > I hope that the discussion of my svn acount would benefit to the whole Python > process ;-) It looks like it does, and that's a good thing (once in a while). Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From solipsis at pitrou.net Sat Jan 3 18:45:56 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 17:45:56 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <200901031656.34335.victor.stinner@haypocalc.com> Message-ID: Georg Brandl gmx.net> writes: > > It looks like it does, and that's a good thing (once in a while). Hey, and we've even had a DVCS sub-thread in the process ;) From g.brandl at gmx.net Sat Jan 3 18:47:11 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 03 Jan 2009 18:47:11 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F950C.5010506@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F950C.5010506@v.loewis.de> Message-ID: Martin v. L?wis schrieb: >> I don't know about others, but downloading and applying a patch doesn't >> bother me (it's actually much quicker than doing a whole new SVN checkout). > > Same here. In fact, when I had to backport patches before the usage of > svnmerge.py, I would always apply the original patch multiple times, > rather than trying to use svn merge. 
> > Integrating patches is only tedious if they don't apply cleanly anymore, > in which case I usually ask the contributor to regenerate it (which they > often can easily do as they had been tracking trunk in their own > sandboxes). > >> You could clone one of the existing DCVS mirrors and open a branch on a public >> hosting service (bitbucket.org, launchpad, etc.). The annoying thing, though, >> is that it requires your co-workers to learn the DVCS in question. > > We (as his co-workers) would continue to request patches. So the DVCS > better has a convenient way to generate a patch (even from multiple DVCS > commits). I think that's what (git) people call "feature branch": a > branch with the sole purpose of developing a single patch. One good thing is also that a big change is usually split up into multiple commits, and each commit can be exported as a single patch. I for one am much better at reviewing small, isolated changes, than glorious rewrites of a whole module, and I suspect I'm not alone in this. So it's much better to have a large change chunked into small, manageable bites that can even be applied individually without having to pull in everything at once. cheers, Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. 
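The per-commit export Georg describes is what `git format-patch` automates; a minimal sketch in a throwaway repository (file names, branch name, and commit messages are all made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git checkout -qb feature
git config user.name "Dev" && git config user.email "dev@example.org"

echo "base" > module.py
git add module.py
git commit -qm "initial import"

# a big change built as three small, reviewable commits
for msg in "extract helper function" "switch module to the helper" "update docs"; do
    echo "$msg" >> module.py
    git commit -qam "$msg"
done

# one mailable patch file per commit, each applicable in isolation
git format-patch -3 -o patches/
ls patches/
```

Each resulting file is a self-contained patch with its own commit message, so a reviewer can take the helper extraction without also swallowing the docs change.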
From g.brandl at gmx.net Sat Jan 3 18:52:23 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 03 Jan 2009 18:52:23 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F60C9.7050401@holdenweb.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: Steve Holden schrieb: > I think it was courageous of Brett to tackle this issue head-on as he > did, and of Victor to respond so positively to the various comments that > have been made on this thread. It would be a pity to lose a developer > who so obviously has Python's best interests at heart. Full ACK. > As someone with a strong interest in Python's development, but whose > interests lie outside direct development at the code face I would like > to see some way where committed non-committers like Victor could be > mentored through the initial stages of development, to the point where > they can be trusted to make commits that don't need reversion. I don't think we have the manpower to do that beyond the already established "I have to sign off all your commits" procedure. Of course, this is time consuming, so maybe for Victor it is just the matter of no developer currently finding the time to do it. > In the old days this would have happened by a process known in the > British training world as "sitting with Nellie" - doing the work next > to, and directly supervised by, someone who had been doing it a long > time and who knew all the wrinkles of the job. Quite how to achieve a > similar effect in today's distributed development environment is less > obvious. IRC gets relatively close to sitting next to someone :) > Could we talk about this at PyCon (as well as continuing this thread to > some sort of conclusion)? 
While the sprints are great for those who are > already involved some activity specifically targeted at new developers > would be a welcome addition, and might even help recruit them. Topic for the language summit? Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From benjamin at python.org Sat Jan 3 18:58:41 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 3 Jan 2009 11:58:41 -0600 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F950C.5010506@v.loewis.de> Message-ID: <1afaf6160901030958y6560819q6fc17f141b9fc46f@mail.gmail.com> On Sat, Jan 3, 2009 at 11:47 AM, Georg Brandl wrote: > Martin v. L?wis schrieb: >>> I don't know about others, but downloading and applying a patch doesn't >>> bother me (it's actually much quicker than doing a whole new SVN checkout). >> >> Same here. In fact, when I had to backport patches before the usage of >> svnmerge.py, I would always apply the original patch multiple times, >> rather than trying to use svn merge. >> >> Integrating patches is only tedious if they don't apply cleanly anymore, >> in which case I usually ask the contributor to regenerate it (which they >> often can easily do as they had been tracking trunk in their own >> sandboxes). >> >>> You could clone one of the existing DCVS mirrors and open a branch on a public >>> hosting service (bitbucket.org, launchpad, etc.). The annoying thing, though, >>> is that it requires your co-workers to learn the DVCS in question. 
>>
>> We (as his co-workers) would continue to request patches. So the DVCS
>> better has a convenient way to generate a patch (even from multiple DVCS
>> commits). I think that's what (git) people call "feature branch": a
>> branch with the sole purpose of developing a single patch.
>
> One good thing is also that a big change is usually split up into multiple
> commits, and each commit can be exported as a single patch. I for one am
> much better at reviewing small, isolated changes than glorious rewrites
> of a whole module, and I suspect I'm not alone in this. So it's much
> better to have a large change chunked into small, manageable bites that
> can even be applied individually without having to pull in everything
> at once.

Another advantage is the commit logs, which give the reviewer some insight
into what the patch author was thinking. I groan internally whenever I see a
patch which starts with 100 "-" lines followed by a complete rewrite of the
code. Incremental diffs make it easier to follow the evolution of the code,
leading to a better review. For patch authors, it also confers the beauty of
version control on their work. For example, if a reviewer dislikes the last
change, it's trivial to revert.

--
Regards,
Benjamin

From arfrever.fta at gmail.com Sat Jan 3 18:56:51 2009
From: arfrever.fta at gmail.com (Arfrever Frehtes Taifersar Arahesis)
Date: Sat, 3 Jan 2009 18:56:51 +0100
Subject: [Python-Dev] I would like an svn account
In-Reply-To: <495F9C03.4000708@v.loewis.de>
References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031754.30081.doomster@knuut.de> <495F9C03.4000708@v.loewis.de>
Message-ID: <200901031857.46903.Arfrever.FTA@gmail.com>

2009-01-03 18:10:27 Martin v. Löwis napisał(a):
> > 1. I think that a patch can not e.g. capture a moved, renamed or deleted file.
>
> Correct. However, this rarely happened.
Contributors are not supposed to
> rename files, and they can indicate deletions and additions in plain
> English (I typically request a tarfile for additions).
>
> > Further, it can not handle e.g. things like the executable bit or similar
> > things that SVN otherwise does manage. That is what makes a patch only
> > partially suitable.
>
> Probably correct; this isn't a problem in practice, either. In fact, it
> is better if properties come from the subversion installation of the
> committer, rather than from the contributor, as the committer is
> supposed to have his autoprops set correctly.
>
> I do think that "svn diff" will record property changes, and that may
> include svn:executable. I don't know which patch tool would interpret
> them, though.

Subversion 1.7 will probably contain an 'svn patch' subcommand, which will
be able to apply patches which change properties, or copy/add/delete
files/directories...

--
Arfrever Frehtes Taifersar Arahesis
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 197 bytes
Desc: This is a digitally signed message part.
URL:

From dirkjan at ochtman.nl Sat Jan 3 18:48:24 2009
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Sat, 03 Jan 2009 18:48:24 +0100
Subject: [Python-Dev] I would like an svn account
In-Reply-To: <200901031754.30081.doomster@knuut.de>
References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <200901031754.30081.doomster@knuut.de>
Message-ID: <495FA4E8.6000507@ochtman.nl>

On 03/01/2009 17:54, Ulrich Eckhardt wrote:
> 1. I think that a patch can not e.g. capture a moved, renamed or deleted file.
> Further, it can not handle e.g. things like the executable bit or similar
> things that SVN otherwise does manage. That is what makes a patch only
> partially suitable.
Actually, git and Mercurial support the git extensions to the diff format, which support those things (mode changes, moves, copies, etc.) So if you're willing to sacrifice patch(1) compatibility, there are options... Cheers, Dirkjan From doomster at knuut.de Sat Jan 3 19:24:07 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 3 Jan 2009 19:24:07 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F93F4.6080007@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: <200901031924.07482.doomster@knuut.de> On Saturday 03 January 2009 17:36:04 Martin v. L?wis wrote: > > As far as your goal is concerned, couldn't you live with a branch where > > you develop the feature? > > That still doesn't help the change getting merged into the trunk. > Whether you store it in a patch file, in a DVCS, or in the very same > VCS-but-different-branch - these are all minor details, which may affect > the efficiency of producing and technically integrating the patch. It > doesn't help to the least in speeding up reviews of the patch, or > reduces the amount of work necessary to do a review. This is true... > > Actually, I'd like such a branch, too, where I could move much quicker > > and in particular with the backing of a VCS to port Python to MS Windows > > CE. Currently, I'm tempted to pull the code into a private repository, > > which causes problems when I want to push it back upstream. > > [I guess you aren't happy with the DVCS systems, such as bazaar, which > supposedly work perfect in exactly this case. I won't blame you for > that, but still, consider trying out one of them for this project] I tried bazaar, but it's just too much to tackle at once: porting to CE, learning BZR and maintaining a feature branch on trunk (though the latter should not be too difficult, according to BZR's reputation). 
> We can setup such a branch, unless you reconsider and try bazaar first. > There wouldn't be any pushing it back upstream, though - you would still > need to go through the tracker for all changes. The only advantage I > can see is that it simplifies repeated merging of the trunk into your > branch. Actually, now that I think about it, I must admit that it doesn't really matter to me whether I use a local mirror or work on a remote branch. The problem is still splitting up the whole port into pieces that can be digested (read: reviewed) at a time. Since I'm confident with the use of SVN, I'll for now stay with it and a local mirror, but any single change that can be submitted will be submitted, hopefully to have something working here soon. Cheers! Uli From steve at holdenweb.com Sat Jan 3 19:25:42 2009 From: steve at holdenweb.com (Steve Holden) Date: Sat, 03 Jan 2009 13:25:42 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: Georg Brandl wrote: > Steve Holden schrieb: > >> I think it was courageous of Brett to tackle this issue head-on as he >> did, and of Victor to respond so positively to the various comments that >> have been made on this thread. It would be a pity to lose a developer >> who so obviously has Python's best interests at heart. > > Full ACK. > >> As someone with a strong interest in Python's development, but whose >> interests lie outside direct development at the code face I would like >> to see some way where committed non-committers like Victor could be >> mentored through the initial stages of development, to the point where >> they can be trusted to make commits that don't need reversion. 
> > I don't think we have the manpower to do that beyond the already > established "I have to sign off all your commits" procedure. Of course, > this is time consuming, so maybe for Victor it is just the matter of > no developer currently finding the time to do it. > >> In the old days this would have happened by a process known in the >> British training world as "sitting with Nellie" - doing the work next >> to, and directly supervised by, someone who had been doing it a long >> time and who knew all the wrinkles of the job. Quite how to achieve a >> similar effect in today's distributed development environment is less >> obvious. > > IRC gets relatively close to sitting next to someone :) > >> Could we talk about this at PyCon (as well as continuing this thread to >> some sort of conclusion)? While the sprints are great for those who are >> already involved some activity specifically targeted at new developers >> would be a welcome addition, and might even help recruit them. > > Topic for the language summit? > +1, but I won't be at the summit because I'll be teaching tutorials that day. 
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From barry at python.org Sat Jan 3 19:34:56 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 13:34:56 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F60C9.7050401@holdenweb.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: <452CE414-E8C5-4F05-9524-46677B84DFE0@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 7:57 AM, Steve Holden wrote: > In the old days this would have happened by a process known in the > British training world as "sitting with Nellie" - doing the work next > to, and directly supervised by, someone who had been doing it a long > time and who knew all the wrinkles of the job. Quite how to achieve a > similar effect in today's distributed development environment is less > obvious. > > Could we talk about this at PyCon (as well as continuing this thread > to > some sort of conclusion)? While the sprints are great for those who > are > already involved some activity specifically targeted at new developers > would be a welcome addition, and might even help recruit them. I think this is a great idea. I would like to see a more formal mentoring process. As a community I think we need a constant source of fresh blood, and mentoring is a great way to shepherd those new recruits to full commit privileges. 
- -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV+v0XEjvBPtnXfVAQIUsAP/aAO7ykXaSP/mA6Cs2874vYIHWZnzYnJx +hyv2i0A65Td9FX2+Jno/TtXLamnU7qC+gqOvf+bkPKyV1T0SlInm0ZXPa0hcNou tKCN0xQCSpKIKnSWMI1VFapHyTUHneDvwY6AHh3mK77MLWdZBK1GTr3Pp10D+5Tj eH9b93mBehY= =xJml -----END PGP SIGNATURE----- From barry at python.org Sat Jan 3 19:38:49 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 13:38:49 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031652.56991.victor.stinner@haypocalc.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 10:52 AM, Victor Stinner wrote: >> A little offtopic: it seems to me it is a flaw of svn, that it >> encourages the model of two classes of developers, those with a >> commit >> access (first class) and those without it (second class). > > Yes, that's the problem. Is it not possible to have finer permission > (instead > of boolean permission: commit or not commit)? Eg. give commit access > but only > for a file or a directory? It looks like Tarek Ziade is now allowed to > commit, but only on distutils. I like such permission because nobody > knows > the whole Python project, it's too huge for a single brain ;-) Well, except for Guido and Tim maybe :) Python does have finer grain permissions, but it's strictly by convention. We /could/ have technical means to control those permissions, but it's never been worth the effort before. >> your main point is that using bugtracker for committing patches >> is very painful (I agree) > > No, my point is that some patches stay too long in the tracker. GIT, > Mercurial > or anything else are a little bit better than the tracker (the > patches can be > synchronized with upstream), but the goal is to be part of the > upstream code > base. 
>
> A distributed VCS is useful to test huge changes. Performance
> improvement on
> integers (patches to optimize the multiplication, use base 2^30
> instead of
> 2^15, etc.) would benefit from such tools, because cooperative work is
> easier.

A DVCS has lots and lots of benefits. One that I like a lot is that it will
be much easier for people to maintain such bigger branches while still
tracking changes to the trunk. And a DVCS like Bazaar supports bundles,
which are essentially super-patches that contain all the metadata that a
real branch would have. So a bzr bundle would be a fine thing to attach to
a tracker issue, and it would be much more alive than a plain old patch.

- -Barry

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Darwin)

iQCVAwUBSV+wuXEjvBPtnXfVAQK/bQQAnZIjOCZAvRX/Jgzwn7Qkq5cqSnB/6qs2
gDls7tTlGJdtmYgSoZDVhosExaLA7AqvSMxsdTgEID4ejhh1TX42xzifeWyAhwrz
WrK591SfoNXHG+YxhIRebt9wenGYzn3S/Qe5eJ0Jct7u0G6rDWK0X35OyZa+woC1
BNK6H0fTfUo=
=mK3Y
-----END PGP SIGNATURE-----

From barry at python.org Sat Jan 3 19:42:41 2009
From: barry at python.org (Barry Warsaw)
Date: Sat, 3 Jan 2009 13:42:41 -0500
Subject: [Python-Dev] I would like an svn account
In-Reply-To: <495F93F4.6080007@v.loewis.de>
References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de>
Message-ID:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Jan 3, 2009, at 11:36 AM, Martin v. Löwis wrote:

> We can setup such a branch, unless you reconsider and try bazaar
> first.
> There wouldn't be any pushing it back upstream, though - you would
> still
> need to go through the tracker for all changes. The only advantage I
> can see is that it simplifies repeated merging of the trunk into your
> branch.
Although it doesn't help Victor specifically, anyone with svn commit privileges also has permission to push Bazaar (and I think Mercurial) branches back to code.python.org. Not the official branches, but a little sandbox for yourself. I don't know if anybody's actually doing this. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV+xoXEjvBPtnXfVAQKhkwP+JgSPtX5CPSNYr9O15ISr1BB8d/fLYmhN SvJlMaSEADZeaetIaiFfbTBA0YQJHiGrQW/KIHshaJEOAyRrghuCYk0OupMw76H9 MSnsEQSEClOicbRKsZN3HuyTuO6QQq7RDg5nfGWX1yE6oUjlhpaofsz6dpSIPnwE jAYncLW4x0c= =INSu -----END PGP SIGNATURE----- From barry at python.org Sat Jan 3 19:46:50 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 13:46:50 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <200901031754.30081.doomster@knuut.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <200901031754.30081.doomster@knuut.de> Message-ID: <4D1506D3-3E8E-4132-BB91-FF581A0245CC@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 11:54 AM, Ulrich Eckhardt wrote: > 1. I think that a patch can not e.g. capture a moved, renamed or > deleted file. > Further, it can not handle e.g. things like the executable bit or > similar > things that SVN otherwise does manage. That is what makes a patch only > partially suitable. Bazaar bundles handle moved, renamed and deleted files and directories, afaik. Executable bits and other file metadata are harder because such properties are OS specific, and Bazaar tries to be OS agnostic. > 2. You don't checkout anew. You simply switch ("svn switch") your > existing > working copy to the branch which just pulls the differences and > merges them > into your existing working copy. Or, you could merge the changes on > a branch > ("svn merge") into your working copy. I know that many VCSs support this style of working, but this just doesn't fit my brain. 
I much prefer separate working trees, each with its own feature or fix. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV+ymnEjvBPtnXfVAQIDKgQArzTiPmBZnrVBnfrn4kfIJ/21cT+RVCsI S9rtIrMRtxAETIeA0ko/9zPLatktYft8hpK77IBo2f1ZSs9vpGRZbm30j4OtQOAR /A/1hr6yGCNzc0OtbHoB7OoJ+1+0ORv+otmUIXJYWTiIOhG3y6oj1gWYkmwhYJRc 87333i8VIj0= =azL0 -----END PGP SIGNATURE----- From solipsis at pitrou.net Sat Jan 3 19:51:56 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 18:51:56 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: Barry Warsaw python.org> writes: > > Although it doesn't help Victor specifically, anyone with svn commit > privileges also has permission to push Bazaar (and I think Mercurial) > branches back to code.python.org. Actually the Mercurial repositories are read-only. We would need some server-side support (a script or something) to allow people to create separate repos or branches. From barry at python.org Sat Jan 3 19:55:01 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 13:55:01 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 1:51 PM, Antoine Pitrou wrote: > Barry Warsaw python.org> writes: >> >> Although it doesn't help Victor specifically, anyone with svn commit >> privileges also has permission to push Bazaar (and I think Mercurial) >> branches back to code.python.org. 
> > Actually the Mercurial repositories are read-only. We would need some > server-side support (a script or something) to allow people to > create separate > repos or branches. That /should/ be fairly easy to do I think. There's a script that Martin runs to give new people svn write access. At Pycon I hacked that to basically write out the ~bzr/.ssh/authorized_keys file. We'd have to set up a ~hg user and then hack the script once more. I don't have time to do that right now unfortunately, but maybe one of the pydotorg'ers could give it a shot. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV+0hXEjvBPtnXfVAQLSOQP/YJXdOU5QbcHLaSbkXxx5MjmwruqNSniD t/cbLbqQ6NgJYlskqqpWbvmBsZmN040KUdj4DI9nyymHAwB4LzFnc1rbErf4RCHd daopYPazwlS8Dv2r2ryjzdhrGDKlnYCbwUIb0f/JDvVZChCtcaN3m+lGWS7EnhD2 hFTHCo+0f6M= =v98I -----END PGP SIGNATURE----- From ziade.tarek at gmail.com Sat Jan 3 21:15:50 2009 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sat, 3 Jan 2009 21:15:50 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495F8F89.6090903@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> Message-ID: <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> On Sat, Jan 3, 2009 at 5:17 PM, "Martin v. L?wis" wrote: >> Yes, that's the problem. Is it not possible to have finer permission (instead >> of boolean permission: commit or not commit)? Eg. give commit access but only >> for a file or a directory? It looks like Tarek Ziade is now allowed to >> commit, but only on distutils. I like such permission because nobody knows >> the whole Python project, it's too huge for a single brain ;-) > > I like them, too - that's why I'm generally not opposed to handing out > such privileges fairly generously. 
In our experience, you don't need to
> enforce such a restriction technically - the social enforcement (you
> lose access if you are changing things you were not supposed to change)
> is sufficient.

This is a great model, as long as the people concerned focus on specific
topics/areas. I think it is harder to apply to people who do fuzz testing
on the code base: the core is impacted most of the time.

There's another concern with that model, and I am wondering about it for
the next series of patches I am working on in distutils.

Since I will probably add some documentation, and since this documentation
will probably benefit from some reviews, what would be the best process?

1/ commit the changeset and ask for a post-review by Georg (or others)
2/ hold the changeset in a diff for a pre-review?

-> 1/ is better for the flow, but the quality of the doc might suffer
from it if Georg (or others) doesn't have time to review it

2/ slows things down, and makes the feature/change unavailable until Georg
(or others) has had time to review it

Regards
Tarek

--
Tarek Ziadé | Association AfPy | www.afpy.org
Blog FR | http://programmation-python.org
Blog EN | http://tarekziade.wordpress.com/

From martin at v.loewis.de Sat Jan 3 21:36:52 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Sat, 03 Jan 2009 21:36:52 +0100
Subject: [Python-Dev] I would like an svn account
In-Reply-To: <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com>
References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com>
Message-ID: <495FCC64.6050306@v.loewis.de>

> Since I will probably add some documentation, and since this
> documentation will probably
> benefit from some reviews, what would be the best process?
> > 1/ commit the changeset and ask for a post-review by Georg (or others) > 2/ hold the changeset in a diff for a pre-review ? If you are confident that the documentation actually builds, feel free to commit it without pre-review. I recommend that you build the documentation at least once; I personally often commit documentation patches without testing first that they build when I'm confident about the markup I use. > 1/ is better for the flow, but the quality of the doc might suffer > from it if Georg (or others) doesn't have time to review it This is of little concern. As long as the documentation continues to build (into html), nearly all documentation changes are improvements. Regards, Martin From solipsis at pitrou.net Sat Jan 3 21:44:33 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 20:44:33 +0000 (UTC) Subject: [Python-Dev] Wrong buildbot page for 3.x-stable Message-ID: Hello, it seems that both the following pages are identical, and are for release30-maint: http://www.python.org/dev/buildbot/3.0.stable/ http://www.python.org/dev/buildbot/3.x.stable/ Isn't the latter supposed to point to the py3k branch buildbots? Regards Antoine. From ncoghlan at gmail.com Sat Jan 3 22:11:45 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 04 Jan 2009 07:11:45 +1000 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <200901031656.34335.victor.stinner@haypocalc.com> Message-ID: <495FD491.2000005@gmail.com> Georg Brandl wrote: > I've become cautious of labeling patches as "trivial". Some may really be, > e.g. typos and the like, but those are almost always dealt with quickly. > Others may seem trivial, as in "add that line here", but there is often > a problem associated -- like the question of portability, or backwards > compatibility. 
In a few cases, we can see that as committing the > fix leads to some complaint, and it is backed out again. But there might > be others where the problem is overlooked and only noticed after some > time in a more public fashion. And other times something that *seems* to have a simple fix turns out to be a symptom of a deeper problem (there was one along those lines recently where there was an underlying issue with the changes to __hash__ inheritance in Py3k that surfaced as an apparent misbehaviour of hashing of range() instances - the problem was actually in PyObject_Hash(), range() just happened to trigger it). Deciding when to commit a fix directly and when to use the tracker (or even a branch) to get additional input on a change is actually one of the more interesting judgment calls that comes with commit privileges. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sat Jan 3 22:21:53 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 04 Jan 2009 07:21:53 +1000 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FCC64.6050306@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> <495FCC64.6050306@v.loewis.de> Message-ID: <495FD6F1.7020807@gmail.com> Martin v. L?wis wrote: >> Since I will probably add some documentation, and since this >> documentation will probably >> benefit from some reviews, what would be the best process ? >> >> 1/ commit the changeset and ask for a post-review by Georg (or others) >> 2/ hold the changeset in a diff for a pre-review ? > > If you are confident that the documentation actually builds, feel > free to commit it without pre-review. 
I recommend that you build > the documentation at least once; I personally often commit > documentation patches without testing first that they build when > I'm confident about the markup I use. > >> 1/ is better for the flow, but the quality of the doc might suffer >> from it if Georg (or others) doesn't have time to review it > > This is of little concern. As long as the documentation continues > to build (into html), nearly all documentation changes are > improvements. I agree with Martin here - breaking the documentation build isn't good, but other than that most doc changes are going to be OK. And as for doing your own doc build, these days that should be as simple as changing to the Docs directory and typing "make html" (stale code in the Docs/tools directory can sometimes be a problem, but if you haven't built the docs before then that shouldn't come up). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ziade.tarek at gmail.com Sat Jan 3 22:39:10 2009 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Sat, 3 Jan 2009 22:39:10 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FD6F1.7020807@gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> <495FCC64.6050306@v.loewis.de> <495FD6F1.7020807@gmail.com> Message-ID: <94bdd2610901031339i20f79c8au43c33f19048ddc80@mail.gmail.com> On Sat, Jan 3, 2009 at 10:21 PM, Nick Coghlan wrote: >> [cut] >> >>> 1/ is better for the flow, but the quality of the doc might suffer >>> from it if Georg (or others) doesn't have time to review it >> >> This is of little concern. 
As long as the documentation continues
>> to build (into html), nearly all documentation changes are
>> improvements.
>
> I agree with Martin here - breaking the documentation build isn't good,
> but other than that most doc changes are going to be OK.

OK, I'll stick with that process.

>
> And as for doing your own doc build, these days that should be as simple
> as changing to the Docs directory and typing "make html" (stale code in
> the Docs/tools directory can sometimes be a problem, but if you haven't
> built the docs before then that shouldn't come up).

Running "make html" is part of my process when I change Doc, but I didn't
know about the stale code issue; thanks for the tip.

Out of curiosity: is there any mechanism in the post-commit that checks if
"make html" doesn't spit any errors?

Cheers,
Tarek

From benjamin at python.org Sat Jan 3 22:41:29 2009
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 3 Jan 2009 15:41:29 -0600
Subject: [Python-Dev] I would like an svn account
In-Reply-To: <94bdd2610901031339i20f79c8au43c33f19048ddc80@mail.gmail.com>
References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> <495FCC64.6050306@v.loewis.de> <495FD6F1.7020807@gmail.com> <94bdd2610901031339i20f79c8au43c33f19048ddc80@mail.gmail.com>
Message-ID: <1afaf6160901031341t1ed38460s586366a42d07649c@mail.gmail.com>

On Sat, Jan 3, 2009 at 3:39 PM, Tarek Ziadé wrote:
>
> Out of curiosity: is there any mechanism in the post-commit that checks if
> "make html" doesn't spit any errors?

Not automatically. However, Georg and I test it fairly often and fix
markup errors if they're present.
-- Regards, Benjamin From martin at v.loewis.de Sat Jan 3 22:46:16 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 03 Jan 2009 22:46:16 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <94bdd2610901031339i20f79c8au43c33f19048ddc80@mail.gmail.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> <495FCC64.6050306@v.loewis.de> <495FD6F1.7020807@gmail.com> <94bdd2610901031339i20f79c8au43c33f19048ddc80@mail.gmail.com> Message-ID: <495FDCA8.7060506@v.loewis.de> > Out of curiosity : is there any mechanism in the post-commit that > checks if "make html" > doesn't spit any error ? No, there is no such mechanism. There are daily builds which will report errors eventually. Regards, Martin From brett at python.org Sat Jan 3 23:00:56 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 14:00:56 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 10:42, Barry Warsaw wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Jan 3, 2009, at 11:36 AM, Martin v. Löwis wrote: > >> We can setup such a branch, unless you reconsider and try bazaar first. >> There wouldn't be any pushing it back upstream, though - you would still >> need to go through the tracker for all changes. The only advantage I >> can see is that it simplifies repeated merging of the trunk into your >> branch.
> > Although it doesn't help Victor specifically, anyone with svn commit > privileges also has permission to push Bazaar (and I think Mercurial) > branches back to code.python.org. Not the official branches, but a little > sandbox for yourself. I don't know if anybody's actually doing this. > I have been using bzr for all of my importlib work. It's worked out well sans the problem that SOMEONE Barry has not upgraded the bzr installation to support the newest wire protocol. -Brett From lkcl at lkcl.net Sat Jan 3 22:22:33 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 3 Jan 2009 21:22:33 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine Message-ID: hey, has anyone investigated compiling python2.5 using winegcc, under wine? i'm presently working my way through it, just for kicks, and was wondering if anyone would like to pitch in or stare at the mess under a microscope. it's not as crazed as it sounds. cross-compiling python2.5 for win32 with mingw32 is an absolute miserable bitch of a job that goes horribly wrong when you actually try to use the minimalist compiler to do any real work. so i figured that it would be easier to get python compiled using wine. i _have_ got some success - a python script and a python.exe.so (which is winegcc's friendly way of telling you you have something that stands a chance of working) as well as a libpython25.dll.so. what i _don't_ yet have is an _md5.dll (or should it be _md5.lib?) i.e. the standard modules are a bit... iffy. the _winreg.o is compiled; the _md5.o is compiled; the winreg.lib is not. whoops. plus, it's necessary to enable nt_dl.c which is in PC/ _not_ in Modules/. one of the key issues that's a bit of a bitch is that python is compiled up for win32 with a hard-coded pyconfig.h which someone went to a _lot_ of trouble to create by hand instead of using autoconf. oh - and it uses visualstudio so there's not even a Makefile. 
ignoring that for the time-being was what allowed me to get as far as actually having a python interpreter (with no c-based modules). so there's a whole _stack_ of stuff that needs dragging kicking and screaming into the 21st century. there _is_ a reason why i want to do this. actually, there's two. firstly, i sure as shit do _not_ want to buy, download, install _or_ run visual studio. i flat-out refuse to run an MS os and visual studio runs like a dog under wine. secondly, i want a python25.lib which i can use to cross-compile modules for poor windows users _despite_ sticking to my principles and keeping my integrity as a free software developer. thirdly i'd like to cross-compile pywebkitgtk for win32 fourthly i'd like to compile and link applications to the extremely successful and well wicked MSHTML.DLL... in the _wine_ project :) not the one in windows (!) i want to experiment with DOM model manipulation - from python - similar to the OLPC HulaHop project - _but_ i want to compile or cross-compile everything from linux, not windows (see 1 above) fifthly i'd like to see COM (DCOM) working and pywin32 compiled and useable under wine, even if it means having to get a license to use dcom98 and oleauth.lib and oleauth.h etc. and all the developer files needed to link DCOM applications under windows. actually what i'd _really_ like to see is FreeDCE's DCOM work actually damn well finished, it's only been eight years since wez committed the first versions of the IDL and header files, and it's only been over fifteen years since microsoft began its world domination using COM and DCOM. ... but that's another story :) so that's ... five reasons not two. if anyone would like to collaborate on a crazed project with someone who can't count, i'm happy to make available what i've got up to so far, on github.org. l. 
From brett at python.org Sat Jan 3 23:16:38 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 14:16:38 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: On Sat, Jan 3, 2009 at 09:52, Georg Brandl wrote: > Steve Holden schrieb: > >> I think it was courageous of Brett to tackle this issue head-on as he >> did, and of Victor to respond so positively to the various comments that >> have been made on this thread. It would be a pity to lose a developer >> who so obviously has Python's best interests at heart. > > Full ACK. > >> As someone with a strong interest in Python's development, but whose >> interests lie outside direct development at the code face I would like >> to see some way where committed non-committers like Victor could be >> mentored through the initial stages of development, to the point where >> they can be trusted to make commits that don't need reversion. > > I don't think we have the manpower to do that beyond the already > established "I have to sign off all your commits" procedure. Of course, > this is time consuming, so maybe for Victor it is just the matter of > no developer currently finding the time to do it. > This is why I am trying to document the development procedures. That way at least the initial steps for handling various details are obvious and thus won't take up someone's time in explaining them. And to help make sure this thread stays on course, it very well might be the case that no one has the time to be Victor's mentor at this moment. I know I don't have the time right now. 
>> In the old days this would have happened by a process known in the >> British training world as "sitting with Nellie" - doing the work next >> to, and directly supervised by, someone who had been doing it a long >> time and who knew all the wrinkles of the job. Quite how to achieve a >> similar effect in today's distributed development environment is less >> obvious. > > IRC gets relatively close to sitting next to someone :) > >> Could we talk about this at PyCon (as well as continuing this thread to >> some sort of conclusion)? While the sprints are great for those who are >> already involved some activity specifically targeted at new developers >> would be a welcome addition, and might even help recruit them. > > Topic for the language summit? Maybe. We will see how that whole thing goes. I suspect it will be rather organic so it will depend on how much time there is. And the sprints at PyCon have actually acted as a mentoring session for a lot of people. People end up helping out with a new feature and the committers there are able to do a review instantly. And with the tight feedback loop between committer and contributor along with working on a new feature instead of existing code leads to people getting commit privileges on the spot (if someone is there to give them the privileges; I honestly don't know who has the abilities to give the rights anymore beyond Barry, Martin, and Neal). -Brett From g.brandl at gmx.net Sat Jan 3 23:16:13 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 03 Jan 2009 23:16:13 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FCC64.6050306@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <495F8F89.6090903@v.loewis.de> <94bdd2610901031215s7f35dfaalfb4aa554443873d9@mail.gmail.com> <495FCC64.6050306@v.loewis.de> Message-ID: Martin v. 
Löwis schrieb: >> Since I will probably add some documentation, and since this >> documentation will probably >> benefit from some reviews, what would be the best process ? >> >> 1/ commit the changeset and ask for a post-review by Georg (or others) >> 2/ hold the changeset in a diff for a pre-review ? > > If you are confident that the documentation actually builds, feel > free to commit it without pre-review. I recommend that you build > the documentation at least once; I personally often commit > documentation patches without testing first that they build when > I'm confident about the markup I use. FWIW, I review most doc patches as they come into the commits mailing list. Also, since the docs are built regularly, and problems are usually fixed very fast, I don't want anybody to hold back a patch because of docs only. >> 1/ is better for the flow, but the quality of the doc might suffer >> from it if Georg (or others) doesn't have time to review it > > This is of little concern. As long as the documentation continues > to build (into html), nearly all documentation changes are > improvements. Agreed. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From martin at v.loewis.de Sat Jan 3 23:17:52 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 03 Jan 2009 23:17:52 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: <495FE410.9060409@v.loewis.de> > I have been using bzr for all of my importlib work.
It's worked out > well sans the problem that SOMEONE Barry has not > upgraded the bzr installation to support the newest wire protocol. I'm probably to blame for this. Debian doesn't come with the latest bzr revision (bzr evolves way too fast for Debian, so that even their backports infrastructure doesn't provide recent binaries). I'm fairly opposed to installing non-vendor packages on www.python.org, as those typically don't see any maintenance, and often break as the regular packages get upgraded. As a consequence, I would always request that whatever VCS Python uses: the version that is in the current Debian's "stable" distribution must be sufficient to use the VCS, and must in particular be sufficient on the server side. Unfortunately, the current Debian release is stuck in political debates, so that we still can't use subversion 1.5 on the server. Regards, Martin From brett at python.org Sat Jan 3 23:24:46 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 14:24:46 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FE410.9060409@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 14:17, "Martin v. Löwis" wrote: >> I have been using bzr for all of my importlib work. It's worked out >> well sans the problem that SOMEONE Barry has not >> upgraded the bzr installation to support the newest wire protocol. > > I'm probably to blame for this. Debian doesn't come with the latest > bzr revision (bzr evolves way too fast for Debian, so that even their > backports infrastructure doesn't provide recent binaries).
I'm fairly > opposed to installing non-vendor packages on www.python.org, as those > typically don't see any maintenance, and often break as the regular > packages get upgraded. > > As a consequence, I would always request that whatever VCS Python > uses: the version that is in the current Debian's "stable" distribution > must be sufficient to use the VCS, and must in particular be sufficient > on the server side. > Even if someone like me or Barry volunteers to maintain the installation of the DVCS software? I would be willing to do this if/when the replacement for svn is chosen. > Unfortunately, the current Debian release is stuck in political debates, > so that we still can't use subversion 1.5 on the server. This is why depending wholly on Debian for everything can be annoying. I understand the policy and support it overall, but in the case of something like a DVCS that doesn't have ridiculous dependencies like svn and someone explicitly taking the lead on the specific installation it would seem like an exception could potentially be made. -Brett From g.brandl at gmx.net Sat Jan 3 23:36:39 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 03 Jan 2009 23:36:39 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: Brett Cannon schrieb: > And the sprints at PyCon have actually acted as a mentoring session > for a lot of people. People end up helping out with a new feature and > the committers there are able to do a review instantly. 
And with the > tight feedback loop between committer and contributor along with > working on a new feature instead of existing code leads to people > getting commit privileges on the spot (if someone is there to give > them the privileges; I honestly don't know who has the abilities to > give the rights anymore beyond Barry, Martin, and Neal). FWIW, I'm also among them. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From martin at v.loewis.de Sat Jan 3 23:47:35 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 03 Jan 2009 23:47:35 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> Message-ID: <495FEB07.3090604@v.loewis.de> >> As a consequence, I would always request that whatever VCS Python >> uses: the version that is in the current Debian's "stable" distribution >> must be sufficient to use the VCS, and must in particular be sufficient >> on the server side. >> > > Even if someone like me or Barry volunteers to maintain the > installation of the DVCS software? I would be willing to do this > if/when the replacement for svn is chosen. Now we need to separate between server side and client side; for each side, there should be a minimum required version (which might be different). If Debian stable doesn't include the minimum required client version, I will be opposed to switching to the DVCS. 
If it doesn't include the minimum required server version, I could live with somebody maintaining a manual installation (which then hopefully can be replaced with an official package on the next upgrade). > This is why depending wholly on Debian for everything can be annoying. > I understand the policy and support it overall, but in the case of > something like a DVCS that doesn't have ridiculous dependencies like > svn and someone explicitly taking the lead on the specific > installation it would seem like an exception could potentially be > made. It's always possible to make exceptions. It's not just about the VCS; there have been requests to replace Apache, NTP, Zope, Postgres, MoinMoin, and a few other packages. There have been many problems on upgrade for the cases where we gave in: shared libraries were missing after the upgrade (for Zope), the software wasn't available anymore after the upgrade (in case of manually-installed Python packages), and so on. Very few people have actually helped in fixing these problems (applause to AMK for being very helpful with the most recent incidents). I'd rather have the users annoyed than finding out that the custom setup opened an entrance for hackers. Regards, Martin From barry at python.org Sat Jan 3 23:55:12 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 17:55:12 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FEB07.3090604@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 5:47 PM, Martin v. Löwis wrote: > It's always possible to make exceptions.
It's not just about the VCS; > there have been requests to replace Apache, NTP, Zope, Postgres, > MoinMoin, and a few other packages. There have been many problems > on upgrade for the cases where we gave in: shared libraries were > missing after the upgrade (for Zope), the software wasn't available > anymore after the upgrade (in case of manually-installed Python > packages), > and so on. Very few people have actually helped in fixing these > problems (applause to AMK for being very helpful with the most recent > incidents). > > I'd rather have the users annoyed than finding out that the custom > setup opened an entrance for hackers. Maybe this is a false choice. Maybe the problem is standardizing on Debian stable. If that distribution isn't giving us and our users what we need, maybe we need to re-evaluate that choice. Yes I know we've talked about that before and yes I know it would not be easy to switch to something different, but still. If you can't even upgrade svn to 1.5 on Debian stable, then I think you'll find it impossible to switch to any modern DVCS. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV/s0HEjvBPtnXfVAQLkXQQAhuWFDoOUuA44JFtiTYGXJE1r3khAlUyL jo8kLDPRBUG4X9yFmsLdd1dqYSHjTJTin1aHLtfN804pKhaCQRwoWCGl9fi5quks Y39axH0L0FjDhteSVFiDYefgALJR9OELyrrxCpB5EJtxPE/cxyuQzOSeEts/QBzi ViW3h5OidGg= =YwjS -----END PGP SIGNATURE----- From skippy.hammond at gmail.com Sun Jan 4 00:01:37 2009 From: skippy.hammond at gmail.com (Mark Hammond) Date: Sun, 04 Jan 2009 10:01:37 +1100 Subject: [Python-Dev] ParseTuple question In-Reply-To: <200901021232.30572.doomster@knuut.de> References: <200901021232.30572.doomster@knuut.de> Message-ID: <495FEE51.1000202@gmail.com> On 2/01/2009 10:32 PM, Ulrich Eckhardt wrote: > Hi! > > I'm looking at NullImporter_init in import.c and especially at the call to > PyArg_ParseTuple there. What I'm wondering is what that call will do when I > call the function with a Unicode object. Will it convert the Unicode to a
Will it convert the Unicode to a > char string first, will it return the Unicode object in a certain (default) > encoding, will it fail? PyArg_ParseTuple will fail if a unicode object is passed where a 's' format string is specified. > I'm working on the MS Windows CE port, and I don't have stat() there. Also, I > don't have GetFileAttributesA(char const*) there, so I need a wchar_t > (UTF-16) string anyway. What would be the best way to get one? On 'normal' windows you generally would need to use WideCharToMultiByte() to get a 'char *' version of your wchar string - but I expect you already know that, so I doubt I understand the question... Cheers, Mark From brett at python.org Sun Jan 4 00:03:40 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 15:03:40 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FEB07.3090604@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 14:47, "Martin v. L?wis" wrote: >>> As a consequence, I would always request that whatever VCS Python >>> uses: the version that is in the current Debian's "stable" distribution >>> must be sufficient to use the VCS, and must in particular be sufficient >>> on the server side. >>> >> >> Even if someone like me or Barry volunteers to maintain the >> installation of the DVCS software? I would be willing to do this >> if/when the replacement for svn is chosen. > > Now we need to separate between server side and client side; for > each side, there should be a minimum required version (which might > be different). > > If Debian stable doesn't include the minimum required client version, > I will be opposed to switching to the DVCS. > OK. 
> If it doesn't include the minimum required server version, I could > live with somebody maintaining a manual installation (which then > hopefully can be replaced with an official package on the next upgrade). > That's what I am talking about. >> This is why depending wholly on Debian for everything can be annoying. >> I understand the policy and support it overall, but in the case of >> something like a DVCS that doesn't have ridiculous dependencies like >> svn and someone explicitly taking the lead on the specific >> installation it would seem like an exception could potentially be >> made. > It's always possible to make exceptions. It's not just about the VCS; > there have been requests to replace Apache, NTP, Zope, Postgres, > MoinMoin, and a few other packages. There have been many problems > on upgrade for the cases where we gave in: shared libraries were > missing after the upgrade (for Zope), the software wasn't available > anymore after the upgrade (in case of manually-installed Python packages), > and so on. Very few people have actually helped in fixing these > problems (applause to AMK for being very helpful with the most recent > incidents). > Right, which is why I wouldn't want to do this unless the installation was owned by someone who was definitely going to be around for a LONG time. > I'd rather have the users annoyed than finding out that the custom > setup opened an entrance for hackers. > Right. Whoever stepped forward to maintain a custom install would need to really stay on top of things.
-Brett From martin at v.loewis.de Sun Jan 4 00:12:23 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 00:12:23 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> Message-ID: <495FF0D7.5000501@v.loewis.de> > Maybe this is a false choice. Maybe the problem is standardizing on > Debian stable. If that distribution isn't giving us and our users what > we need, maybe we need to re-evaluate that choice. Yes I know we've > talked about that before and yes I know it would not be easy to switch > to something different, but still. If you can't even upgrade svn to 1.5 > on Debian stable, then I think you'll find it impossible to switch to > any modern DVCS. Maybe. Again, we should separate between client and server. The server we can control, and adjust as needed. The clients we can't (heck, we even support Windows :-) If "switching to a modern DVCS" means that users now need to start compiling their VCS before they can check out Python, I don't think we should switch to a modern DVCS. Such a system must be mature, and if it isn't included in Debian stable, it can't be mature (and free software). In the specific case, if a decision is made to switch to bazaar, and bzr 1.5 is recent enough, then I'd be happy to upgrade to testing (although 1.5 is also available from backports, and already installed; stable has *bzr 0.11*). Since lenny was frozen, bzr managed to release 5 minor versions (so it is 1.10 now); this makes me very worried whether this software is mature. IOW, Python shouldn't require a VCS that is not even a year old (a year ago, bzr 1.1 was released). 
Regards, Martin From krstic at solarsail.hcs.harvard.edu Sun Jan 4 00:21:18 2009 From: krstic at solarsail.hcs.harvard.edu (=?UTF-8?Q?Ivan_Krsti=C4=87?=) Date: Sat, 3 Jan 2009 18:21:18 -0500 Subject: [Python-Dev] Infra issues (was: Re: I would like an svn account) In-Reply-To: <495FEB07.3090604@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> Message-ID: <32209A06-550D-4685-92E7-C23A2CBD353C@solarsail.hcs.harvard.edu> On Jan 3, 2009, at 5:47 PM, Martin v. Löwis wrote: > There have been many problems on upgrade for the cases where we gave > in: shared libraries were missing after the upgrade (for Zope), the > software wasn't available anymore after the upgrade (in case of > manually-installed Python packages), and so on. Very few people have > actually helped in fixing these problems What's the preferred way of offering help with infrastructure problems, and to what extent, in your opinion, is the solution to have more hands on deck vs. farming out certain (groups of) services to different machines? -- Ivan Krstić
| http://radian.org From martin at v.loewis.de Sun Jan 4 00:27:54 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 00:27:54 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> Message-ID: <495FF47A.4070108@v.loewis.de> > And with the > tight feedback loop between committer and contributor along with > working on a new feature instead of existing code leads to people > getting commit privileges on the spot (if someone is there to give > them the privileges; I honestly don't know who has the abilities to > give the rights anymore beyond Barry, Martin, and Neal). [I don't think Barry actually can/does provide these privileges] I'd like to point out that there is a separation between the management privilege, and the technical implementation. Neal, Georg, and I do add committers to the database, however, we don't make the decision to add them. Instead, we (at least I) try to sense consensus among committers, and then implement what I feel this consensus is. There can always be a BDFL pronouncement, and also the release manager (i.e. Barry) can order that somebody gets commit access. In most other cases, consensus was fairly obvious. In the specific case of Victor Stinner, there had been a few seconders, but none of the long-time committers supported him, so I did not add him (I was opposed myself as well). Now that he asked a second time, the opposition spoke up.
Regards, Martin From barry at python.org Sun Jan 4 00:29:03 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 18:29:03 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FF0D7.5000501@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 6:12 PM, Martin v. Löwis wrote: >> Maybe this is a false choice. Maybe the problem is standardizing on >> Debian stable. If that distribution isn't giving us and our users >> what >> we need, maybe we need to re-evaluate that choice. Yes I know we've >> talked about that before and yes I know it would not be easy to >> switch >> to something different, but still. If you can't even upgrade svn >> to 1.5 >> on Debian stable, then I think you'll find it impossible to switch to >> any modern DVCS. > > Maybe. Again, we should separate between client and server. The server > we can control, and adjust as needed. The clients we can't (heck, we > even support Windows :-) Ouch. :) > If "switching to a modern DVCS" means that users now need to start > compiling their VCS before they can check out Python, I don't think we > should switch to a modern DVCS. Such a system must be mature, and if > it > isn't included in Debian stable, it can't be mature (and free > software).
Well, I'm not sure I agree with that definition, but aside from that, I can tell you that for Bazaar, our users would have access to installers for the major OSes: http://bazaar-vcs.org/Download > In the specific case, if a decision is made to switch to bazaar, and > bzr 1.5 is recent enough, then I'd be happy to upgrade to testing > (although 1.5 is also available from backports, and already installed; > stable has *bzr 0.11*). Since lenny was frozen, bzr managed to release > 5 minor versions (so it is 1.10 now); this makes me very worried > whether this software is mature. > > IOW, Python shouldn't require a VCS that is not even a year old > (a year ago, bzr 1.1 was released). Do any of the DVCS under consideration satisfy that requirement? I guess I'm asking whether you think all this talk about DVCSes is futile or premature? - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV/0v3EjvBPtnXfVAQJCUAQAqecbBtn5NnadHTl1CaHAwfA9ku51StNS k6YD2q39IokqwtjjJpiNTlPRseh8LuQVzG+Dt8fp0PndkTxS4SvbGEY1iRK11XEg wmLthKbxylBe6yuaGW4RcsmgaOMiEnr22QvY639I3yVJPVzI/0rIpak8BDod6EaT 9wGKe6xxQXg= =OUqf -----END PGP SIGNATURE----- From barry at python.org Sun Jan 4 00:31:26 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 3 Jan 2009 18:31:26 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FF47A.4070108@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901030353.22251.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <495F26BE.9050105@v.loewis.de> <495F60C9.7050401@holdenweb.com> <495FF47A.4070108@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 6:27 PM, Martin v. Löwis wrote: > [I don't think Barry actually can/does provide these privileges] I probably could, but I got pretty burned out doing regular admin stuff.
;/ - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSV/1TnEjvBPtnXfVAQKtwwP/XCZhr6PiAQ74aygiAV2BgsWjuc7iTVrC Rjssr2U6BKIhHDz8g1WkIoaaeKDfq1TU9eCiEPf4+FxtFRsqVox3j1r71PuHDqtc 9jfiXje0x1CtXa7SKJbdU55EUWHMuf1kOwqk1LoiotdVP82Jq/cbOhQ+/QlOPzsk aajuxL5eoqA= =mpRv -----END PGP SIGNATURE----- From martin at v.loewis.de Sun Jan 4 00:38:58 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 04 Jan 2009 00:38:58 +0100 Subject: [Python-Dev] Infra issues In-Reply-To: <32209A06-550D-4685-92E7-C23A2CBD353C@solarsail.hcs.harvard.edu> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <32209A06-550D-4685-92E7-C23A2CBD353C@solarsail.hcs.harvard.edu> Message-ID: <495FF712.9060106@v.loewis.de> > What's the preferred way of offering help with infrastructure problems, > and to what extent, in your opinion, is the solution to have more hands > on deck vs. farming out certain (groups of) services to different machines? With the current installations, there aren't that many issues. The one service that could always use more people to help is the roundup installation; in many cases, this involves extending roundup (through the builtin extension mechanisms). Unfortunately, nearly everybody who ever offered to help with the roundup installation ran away after a month or so. We have most services centralized at xs4all, with a fairly clear separation between mail on the one side, and web on the other side. For the bug tracker, we use a service offered by Upfront Hosting. This takes care of hardware issues and software installation, but we still manage the tracker installation(s). We recently also had a shortage of people managing email; fortunately, new (active) volunteers were found. 
For the web content, there is currently nobody really in charge, it seems, which is usually not an issue (my impression is that Aahz and Skip try to see that important things get done). For the Wiki, there is an active group of despammers. The job board has active maintainers again as well. The various other services don't need steady attention. In particular wrt. software installation, it all works fine - IMO also due to the policy that we use vendor packages if at all possible. Regards, Martin P.S. I might have left out important activities. Please do let me know in private which these are - I might honestly not know, or forgotten how much day-to-day work they actually cost. From solipsis at pitrou.net Sun Jan 4 00:53:14 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 3 Jan 2009 23:53:14 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> Message-ID: Barry Warsaw python.org> writes: > > Do any of the DVCS under consideration satisfy that requirement? Out of curiosity, I apt-get'ed Mercurial on a stable Debian (0.9.1-1+etch1) and I was able to clone the trunk mirror (*) fine. It just took a bit over two minutes. (*) http://code.python.org/hg/trunk/ Regards Antoine. 
From brett at python.org Sun Jan 4 01:00:44 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 16:00:44 -0800 Subject: [Python-Dev] Infra issues (was: Re: I would like an svn account) In-Reply-To: <32209A06-550D-4685-92E7-C23A2CBD353C@solarsail.hcs.harvard.edu> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <32209A06-550D-4685-92E7-C23A2CBD353C@solarsail.hcs.harvard.edu> Message-ID: On Sat, Jan 3, 2009 at 15:21, Ivan Krstić wrote: > On Jan 3, 2009, at 5:47 PM, Martin v. Löwis wrote: >> >> There have been many problems on upgrade for the cases where we gave in: >> shared libraries were missing after the upgrade (for Zope), the software >> wasn't available anymore after the upgrade (in case of manually-installed >> Python packages), and so on. Very few people have actually helped in fixing >> these problems > > What's the preferred way of offering help with infrastructure problems, and > to what extent, in your opinion, is the solution to have more hands on deck > vs. farming out certain (groups of) services to different machines? For volunteering with infrastructure stuff, you can just speak up on the proper mailing list: either pydotorg or tracker-discuss (the former is a catch-all while the latter is specifically the issue tracker). And for people who are PSF members there is an infrastructure committee that I chair which handles the big-picture issues (e.g. choosing the issue tracker, server purchases, etc.).
-Brett From martin at v.loewis.de Sun Jan 4 01:06:55 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 01:06:55 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> Message-ID: <495FFD9F.6030009@v.loewis.de> > Do any of the DVCS under consideration satisfy that requirement? I > guess I'm asking whether you think all this talk about DVCSes is futile > or premature? I still do hope that Debian releases lenny before any of this advances. This would mean bzr 1.5 git 1.5.6 mercurial 1.0.1 I don't have the experience with any of them to be able to tell whether they are good enough. A year ago, the revision numbers were bzr 1.0 git 1.5.4 mercurial 0.9.5 Again, I don't know these packages well enough to understand what these numbers mean. I know for bzr that apparently bzr 1.0 is considered unsuitable for anything, so this would be ruled out. For git, 1.5.4 vs. 1.5.6 doesn't look too frightening, so the software appears to be in good shape. For Mercurial, the 1.0 release was made in March 2008, which might meet the "one year" criteria before this discussion is over. I know that when switching to Subversion was discussed, there was opposition on grounds of subversion still being too young, and indeed, it took more than a year from the start of the discussion until the switch was made. I do think Subversion was mature since 1.0, which was released in Feb 2004; PEP 347 was written in August 2005; the switchover happened in Oct 2005. So I think I will be fine if the software that I use has been mature for a year. 
From what I've heard, bazaar might not qualify (apparently, there were recent protocol changes); it seems that git would qualify. Whether mercurial is mature, and for how long it had been, I don't know. Regards, Martin From andrew-pythondev at puzzling.org Sun Jan 4 01:20:32 2009 From: andrew-pythondev at puzzling.org (Andrew Bennetts) Date: Sun, 4 Jan 2009 11:20:32 +1100 Subject: [Python-Dev] I would like an svn account In-Reply-To: <4D1506D3-3E8E-4132-BB91-FF581A0245CC@python.org> References: <200812310155.40206.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <200901031754.30081.doomster@knuut.de> <4D1506D3-3E8E-4132-BB91-FF581A0245CC@python.org> Message-ID: <20090104002032.GD25206@steerpike.home.puzzling.org> Barry Warsaw wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Jan 3, 2009, at 11:54 AM, Ulrich Eckhardt wrote: > >> 1. I think that a patch can not e.g. capture a moved, renamed or >> deleted file. >> Further, it can not handle e.g. things like the executable bit or >> similar >> things that SVN otherwise does manage. That is what makes a patch only >> partially suitable. > > Bazaar bundles handle moved, renamed and deleted files and directories, > afaik. Executable bits and other file metadata are harder because such > properties are OS specific, and Bazaar tries to be OS agnostic. Actually, Bazaar also handles the execute bit too. But it doesn't try to capture any other file metadata like mtime or permissions. -Andrew. From brett at python.org Sun Jan 4 01:28:23 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 16:28:23 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FFD9F.6030009@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 16:06, "Martin v. 
Löwis" wrote: >> Do any of the DVCS under consideration satisfy that requirement? I >> guess I'm asking whether you think all this talk about DVCSes is futile >> or premature? > > I still do hope that Debian releases lenny before any of this advances. > This would mean > > bzr 1.5 > git 1.5.6 > mercurial 1.0.1 > > I don't have the experience with any of them to be able to tell whether > they are good enough. > > A year ago, the revision numbers were > > bzr 1.0 > git 1.5.4 > mercurial 0.9.5 > > Again, I don't know these packages well enough to understand what > these numbers mean. I know for bzr that apparently bzr 1.0 is considered > unsuitable for anything, so this would be ruled out. > > For git, 1.5.4 vs. 1.5.6 doesn't look too frightening, so the software > appears to be in good shape. For Mercurial, the 1.0 release was made > in March 2008, which might meet the "one year" criteria before this > discussion is over. > > I know that when switching to Subversion was discussed, there was > opposition on grounds of subversion still being too young, and indeed, > it took more than a year from the start of the discussion until the > switch was made. I do think Subversion was mature since 1.0, which was > released in Feb 2004; PEP 347 was written in August 2005; the switchover > happened in Oct 2005. > > So I think I will be fine if the software that I use has been mature > for a year. From what I've heard, bazaar might not qualify (apparently, > there were recent protocol changes); it seems that git would qualify. > Whether mercurial is mature, and for how long it had been, I don't > know. > Bazaar has been backwards-compatible with everything from my understanding, so any changes they have made to the repository layout or network protocol they use should not be an issue regardless of what client or server versions are being used.
As for the version number, the team does monthly releases, so it has nothing to do with stability and more with their timed release schedule. As for Mercurial, I have been told their repository layout has not changed since their first release and updates have been more about bug fixes and speed improvements. -Brett From martin at v.loewis.de Sun Jan 4 01:39:44 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 04 Jan 2009 01:39:44 +0100 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> Message-ID: <49600550.5090400@v.loewis.de> > Bazaar has been backwards-compatible with everything from my > understanding, so any changes they have made to the repository layout > or network protocol they use should not be an issue regardless of what > client or server versions are being used. As for the version number, > the team does monthly releases, so it has nothing to do with stability > and more with their timed release schedule. Ok: so what *is* the minimum required version, and why is the 1.5 version installed on code.python.org not good enough (as you complained about it)? Every time this comes up, I say "we can provide version A", and then somebody comes along saying "this is really bad, can't you provide version A+X?". This was the case always with bzr: first, 0.11 was not good enough, then 1.0, now 1.5. I wonder that if I install 1.10, whether that might not be good enough anymore 6 months after it got installed. Please understand that I don't know bazaar at all, so I have to trust the claims that any past version that was ever in use is so bad that it absolutely must be replaced. 
> As for Mercurial, I have been told their repository layout has not > changed since their first release and updates have been more about bug > fixes and speed improvements. Speed improvements we can ignore; for bug fixes, it would be good to know how much one can go back without hitting a serious bug (e.g. one that might break the repository). Regards, Martin From cournape at gmail.com Sun Jan 4 01:41:19 2009 From: cournape at gmail.com (David Cournapeau) Date: Sun, 4 Jan 2009 09:41:19 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> Message-ID: <5b8d13220901031641m36854fcdp7b220ee12f4632ae@mail.gmail.com> On Sun, Jan 4, 2009 at 9:28 AM, Brett Cannon wrote: > On Sat, Jan 3, 2009 at 16:06, "Martin v. Löwis" wrote: >>> Do any of the DVCS under consideration satisfy that requirement? I >>> guess I'm asking whether you think all this talk about DVCSes is futile >>> or premature? >> >> I still do hope that Debian releases lenny before any of this advances. >> This would mean >> >> bzr 1.5 >> git 1.5.6 >> mercurial 1.0.1 >> >> I don't have the experience with any of them to be able to tell whether >> they are good enough. >> >> A year ago, the revision numbers were >> >> bzr 1.0 >> git 1.5.4 >> mercurial 0.9.5 >> >> Again, I don't know these packages well enough to understand what >> these numbers mean. I know for bzr that apparently bzr 1.0 is considered >> unsuitable for anything, so this would be ruled out. >> >> For git, 1.5.4 vs. 1.5.6 doesn't look too frightening, so the software >> appears to be in good shape. For Mercurial, the 1.0 release was made >> in March 2008, which might meet the "one year" criteria before this >> discussion is over.
>> >> I know that when switching to Subversion was discussed, there was >> opposition on grounds of subversion still being too young, and indeed, >> it took more than a year from the start of the discussion until the >> switch was made. I do think Subversion was mature since 1.0, which was >> released in Feb 2004; PEP 347 was written in August 2005; the switchover >> happened in Oct 2005. >> >> So I think I will be fine if the software that I use has been mature >> for a year. From what I've heard, bazaar might not qualify (apparently, >> there were recent protocol changes); it seems that git would qualify. >> Whether mercurial is mature, and for how long it had been, I don't >> know. >> > > Bazaar has been backwards-compatible with everything from my > understanding, so any changes they have made to the repository layout > or network protocol they use should not be an issue regardless of what > client or server versions are being used. It is not true in my experience: it is backward compatible, yes, in the sense that you can often manage to get out of the situation, but with some extra work. I would consider myself a relatively knowledgeable bzr user (I have been using it for more than 2 years now for almost all my projects, before switching to git), and I had several times some problems with it. The ML occasionally also have quite a few people having problems. David From solipsis at pitrou.net Sun Jan 4 02:39:59 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 4 Jan 2009 01:39:59 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> <49600550.5090400@v.loewis.de> Message-ID: Martin v. 
Löwis v.loewis.de> writes: > > As for Mercurial, I have been told their repository layout has not > > changed since their first release and updates have been more about bug > > fixes and speed improvements. > > Speed improvements we can ignore; for bug fixes, it would be good to > know how much one can go back without hitting a serious bug (e.g. one > that might break the repository). History of the release notes can be read here: http://www.selenic.com/mercurial/wiki/index.cgi/WhatsNew From a quick look (and assuming everything is here), there hasn't been any repository corruption bug since 0.9.5, although a couple of crashes are mentioned. Regards Antoine. From brett at python.org Sun Jan 4 04:10:36 2009 From: brett at python.org (Brett Cannon) Date: Sat, 3 Jan 2009 19:10:36 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: <49600550.5090400@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> <49600550.5090400@v.loewis.de> Message-ID: On Sat, Jan 3, 2009 at 16:39, "Martin v. Löwis" wrote: >> Bazaar has been backwards-compatible with everything from my >> understanding, so any changes they have made to the repository layout >> or network protocol they use should not be an issue regardless of what >> client or server versions are being used. As for the version number, >> the team does monthly releases, so it has nothing to do with stability >> and more with their timed release schedule. > > Ok: so what *is* the minimum required version, and why is the 1.5 > version installed on code.python.org not good enough (as you complained > about it)? > Not sure what the minimum is (Barry would know better than me), but my complaint is purely speed-related.
-Brett From steve at holdenweb.com Sun Jan 4 05:29:11 2009 From: steve at holdenweb.com (Steve Holden) Date: Sat, 03 Jan 2009 23:29:11 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: Brett Cannon wrote: [...] > I have been using bzr for all of my importlib work. It's worked out > well sans the problem that SOMEONE Barry has not > upgraded the bzr installation to support the newest wire protocol. > If you think *that's* a problem try getting him to write a simple bloody blog entry ... i-can-say-this-now-the-entry-is-published-ly y'rs - steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From steve at holdenweb.com Sun Jan 4 05:39:42 2009 From: steve at holdenweb.com (Steve Holden) Date: Sat, 03 Jan 2009 23:39:42 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> Message-ID: <49603D8E.1090009@holdenweb.com> Barry Warsaw wrote: > On Jan 3, 2009, at 5:47 PM, Martin v. Löwis wrote: >> It's always possible to make exceptions. It's not just about the VCS; >> there have been requests to replace Apache, NTP, Zope, Postgres, >> MoinMoin, and a few other packages. There have been many problems >> on upgrade for the cases where we gave in: shared libraries were >> missing after the upgrade (for Zope), the software wasn't available >> anymore after the upgrade (in case of manually-installed Python packages), >> and so on.
Very few people have actually helped in fixing these >> problems (applause to AMK for being very helpful with the most recent >> incidents). > >> I'd rather have the users annoyed than finding out that the custom >> setup opened an entrance for hackers. > > Maybe this is a false choice. Maybe the problem is standardizing on > Debian stable. If that distribution isn't giving us and our users what > we need, maybe we need to re-evaluate that choice. Yes I know we've > talked about that before and yes I know it would not be easy to switch > to something different, but still. If you can't even upgrade svn to 1.5 > on Debian stable, then I think you'll find it impossible to switch to > any modern DVCS. > I appreciate Martin's conservatism, especially since he usually ends up being the one who has to fill in the gaps when things fall through the cracks. I would never suggest that he went against his instincts about likely pitfalls in installation of non-standard software. I suspect this was the PyCon web server's downfall: a dog's breakfast disguised as a unified installation. Is it maybe time we thought about hiring a part-time sysadmin to take care of the cruddy stuff that soaks up time without increasing productivity? We have the funds, and if they run out then I'd rather spend my time raising more funds than administering systems ... there are many people who are better at that than I was in my prime! Hey, isn't Ubuntu Debian-based? ... Don't we know people who work for the vendor? ... Maybe they could offer some support if we switched? ...
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From skippy.hammond at gmail.com Sun Jan 4 07:47:31 2009 From: skippy.hammond at gmail.com (Mark Hammond) Date: Sun, 04 Jan 2009 17:47:31 +1100 Subject: [Python-Dev] py3k, bad sys.path and abort() Message-ID: <49605B83.3080204@gmail.com> I've recently noticed that in py3k, the lack of a suitable sys.path will cause Py_FatalError() to be called, which immediately terminates the entire application. On Windows, it is fairly easy for this to happen for developers or anyone who hasn't run the official Python installation; just have Python used in a 'service', web server, COM object or anything else which loads python3x.dll from the system32 etc directory, and neglect to have added the PYTHONPATH entry in the registry or global environment. As a result Python can't sniff a good default sys.path, fails to import encodings, then winds up calling Py_FatalError("Py_Initialize: can't initialize sys standard streams"); I realize this is something of an edge-case, but having abort() called on the application in this case is somewhat harsh and difficult to diagnose. In Python 2.x, you end up with a fairly crippled Python environment, but it functions well enough to offer clues that your sys.path isn't setup. Would it be practical and desirable to handle this situation more gracefully, possibly just leaving sys.std* set to None and letting whatever exceptions then occur happen as normal without terminating the process? Given it is an edge-case, I thought I'd open it here for discussion before putting work into a patch or opening a bug. Thanks, Mark From stephen at xemacs.org Sun Jan 4 10:21:56 2009 From: stephen at xemacs.org (Stephen J. 
Turnbull) Date: Sun, 04 Jan 2009 18:21:56 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: <49603D8E.1090009@holdenweb.com> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> Message-ID: <87wsdbv32j.fsf@xemacs.org> Steve Holden writes: > Hey, isn't Ubuntu Debian-based? ... Ouch. I don't actually use Ubuntu, but when everybody on my local LUG list from the "Linux should be Windows but cheaper" newbies to former NetBSD developers is grouching about upgrade hell, I don't see any real benefits to be gained. You're still going to need to go with a "don't think about fixing what ain't broke, and even if it's just kinda broke, Just Say No to that upgrade dope" policy. No, something has to be done about the "no upgrades" policy, or it's not worth switching from Debian stable. From stephen at xemacs.org Sun Jan 4 10:50:09 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Sun, 04 Jan 2009 18:50:09 +0900 Subject: [Python-Dev] I would like an svn account In-Reply-To: <495FF0D7.5000501@v.loewis.de> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> Message-ID: <87vdsvv1ri.fsf@xemacs.org> Disclaimer: I'm a member of the team working with Brett on the DVCS PEP, and definitely pro-DVCS (specifically working on the git parts). "Martin v. 
Löwis" writes: > If "switching to a modern DVCS" means that users now need to start > compiling their VCS before they can check out Python, It doesn't mean that. All of the DVCS contenders have Windows and Mac OS installers (usually from 3rd parties, but working closely with the core). For *nix users, does anybody really use a vanilla Debian stable for a development workstation? Everybody else has reasonably fresh versions available via the standard package manager, even Debian Lenny. > I don't think we should switch to a modern DVCS. Such a system must > be mature, and if it isn't included in Debian stable, it can't be > mature (and free software). The versions in Debian stable were all usable for their time, but this is a rapidly developing field, even when, like Subversion, your goal is to refactor software designed in the early 1990s! "It's in Debian stable" is an excessively strict standard for the client's version. > In the specific case, if a decision is made to switch to bazaar, and > bzr 1.5 is recent enough, then I'd be happy to upgrade to testing > (although 1.5 is also available from backports, and already installed; > stable has *bzr 0.11*). Since lenny was frozen, bzr managed to release > 5 minor versions (so it is 1.10 now); this makes me very worried > whether this software is mature. The bzr team is experimenting with a time-based release process; the rate at which minor versions appear should not worry you. More important is the count of new repository formats. There are about 5 currently in common use. Great efforts are made to keep them interoperable, though some are not. Python should avoid use of those for the near future but I don't think it should be considered a showstopper. From skip at pobox.com Sun Jan 4 11:28:59 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 4 Jan 2009 04:28:59 -0600 Subject: [Python-Dev] How to configure with icc on Mac?
Message-ID: <18784.36715.188514.447004@montanaro.dyndns.org> I downloaded an evaluation copy of the Intel compiler for Mac and tried (so far unsuccessfully) to configure with it. I have tried: CC=icc ./configure --prefix=$HOME/tmp/icc-python That failed computing the size of size_t because it tries to incorrectly link with -lgcc_s. Then I tried forcing it to not use gcc: CC=icc ./configure --without-gcc --prefix=$HOME/tmp/icc-python That failed because of a bug in configure.in:

    case $withval in
    no)  CC=cc
         without_gcc=yes;;
    yes) CC=gcc
         without_gcc=no;;
    *)   CC=$withval
         without_gcc=$withval;;

It ignores the CC value on the command line. I fixed that like so:

    case $withval in
    no)  CC=${CC:-cc}
         without_gcc=yes;;
    yes) CC=gcc
         without_gcc=no;;
    *)   CC=$withval
         without_gcc=$withval;;

and reran autoconf. Now I'm back to the -lgcc_s problem. "gcc_s" is not mentioned in configure. It's accessed implicitly. Poking around in the icc man page didn't yield any obvious clues. (I did try using the -gcc-name and -gxx-name flags. They didn't help.) Is there some way to configure Python for use with icc on a Mac? Thanks, Skip From doomster at knuut.de Sun Jan 4 11:29:34 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sun, 4 Jan 2009 11:29:34 +0100 Subject: [Python-Dev] ParseTuple question In-Reply-To: <018701c96e00$63320e50$29962af0$@com.au> References: <200901021232.30572.doomster@knuut.de> <200901040032.05337.doomster@knuut.de> <018701c96e00$63320e50$29962af0$@com.au> Message-ID: <200901041129.34289.doomster@knuut.de> [sorry, dropped one pair of mails off the list, hence also the overquoting] On Sunday 04 January 2009 01:07:08 Mark Hammond wrote: > > > On 'normal' windows you generally would need to use > > > WideCharToMultiByte() to get a 'char *' version of your wchar > > > string but I expect you already know that, so I doubt I understand > > > the question... > > > > Actually, I want the exact opposite.
> I'm not sure you can get it though :) Unless you can change Python itself, > you will be forced to convert filenames etc back and forward from wide to > narrow strings. This should roundtrip OK for all characters in the current > code set. > > > 'char' strings under CE are mostly useless, in particular for > > filenames. Actually, they are useless under MS Windows in general, > > at least for the versions supported by Python 2.6/3.0, they all use > > UTF-16, only the desktop variants still have backward compatibility > > code to use 'char' strings, though with restricted functionality. > > Exactly - but for python 2.6, we are still somewhat forced to use that > 'char *' API in many cases. If the API didn't offer the automatic unicode > conversion, we would most likely have needed to implement it ourselves. > > > So, point is that I need a UTF-16 string in a function that might > > get either a string or a Unicode string. Preferably, I'd like to take > > the way that poses least resistance, so which one would you suggest? > > I'm not with you still: if a Python implemented function accepts either > string or unicode, then your utf16 string is perfect - it's already unicode. > On the other hand, I thought you were faced with a Python function which, > as currently implemented, only accepts whatever the 's' format string > accepts. Such a function only accepts real PyString objects, so attempts at > passing it unicode will be futile. Obviously you could modify it to accept > a unicode, but clearly that would also mean adjusting everything that uses > the existing 'char *', which may end up fanning out to much more than you > expect. > > If I'm still misunderstanding, can you be more specific about the exact > problem (ie, the exact function you are referring to, and how you intend > calling it)? trunk/_fileio.c/fileio_init() Let's leave aside that you can also pass a filedescriptor, that function either takes a string or a Unicode string as first parameter.
Now, under CE, I always need a 'wchar_t*' in order to open a file, how would I get at that easiest? My approach now is to simply use "O" as format specifier for the filename and then take a look at the object's type. If it is a char-string, I have to convert (under CE) or call the char-API (desktop MS Windows), if it is a Unicode string, I can use it as it is. Uli From skippy.hammond at gmail.com Sun Jan 4 11:42:49 2009 From: skippy.hammond at gmail.com (Mark Hammond) Date: Sun, 04 Jan 2009 21:42:49 +1100 Subject: [Python-Dev] ParseTuple question In-Reply-To: <200901041129.34289.doomster@knuut.de> References: <200901021232.30572.doomster@knuut.de> <200901040032.05337.doomster@knuut.de> <018701c96e00$63320e50$29962af0$@com.au> <200901041129.34289.doomster@knuut.de> Message-ID: <496092A9.4080507@gmail.com> On 4/01/2009 9:29 PM, Ulrich Eckhardt wrote: >> If I'm still misunderstanding, can you be more specific about the exact >> problem (ie, the exact function you are referring to, and how you intend >> calling it)? > > trunk/_fileio.c/fileio_init() > > Let's leave aside that you can also pass a filedescriptor, that function > either takes a string or a Unicode string as first parameter. Now, under CE, > I always need a 'wchar_t*' in order to open a file, how would I get at that > easiest? > > My approach now is to simply use "O" as format specifier for the filename and > then take a look at the object's type. If it is a char-string, I have to > convert (under CE) or call the char-API (desktop MS Windows), if it is a > Unicode string, I can use it as it is. IIUC, the block:

    #ifdef MS_WINDOWS
        if (widename != NULL)
            self->fd = _wopen(widename, flags, 0666);
        else
    #endif
            self->fd = open(name, flags, 0666);

... Would probably need to change - instead of falling back to plain open(), if widename is NULL you would need to use MultiByteToWideChar before calling _wopen().
This assumes the unicode CRT is available of course - if not, I guess you'd need to call the win32 api instead of _wopen. Alternatively, the PyArg_ParseTuple() call could possibly be changed to use the 'e' format string. Hoping-that-was-what-you-were-asking, ly, Mark From martin at v.loewis.de Sun Jan 4 14:44:30 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 14:44:30 +0100 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <18784.36715.188514.447004@montanaro.dyndns.org> References: <18784.36715.188514.447004@montanaro.dyndns.org> Message-ID: <4960BD3E.70809@v.loewis.de>

> CC=icc ./configure --prefix=$HOME/tmp/icc-python
>
> That failed computing the size of size_t because it tries to incorrectly link
> with -lgcc_s.

Can you provide the relevant section of config.log? What is the precise command that configure is invoking, and what is the precise error message that icc reports?

> That failed because of a bug in configure.in:
>
> case $withval in
> no)	CC=cc
>	without_gcc=yes;;
> yes)	CC=gcc
>	without_gcc=no;;
> *)	CC=$withval
>	without_gcc=$withval;;
>
> It ignores the CC value on the command line.

I don't think it is a bug. --without-gcc *overrides* the CC environment variable, rather than ignoring it. Regards, Martin From aahz at pythoncraft.com Sun Jan 4 15:51:53 2009 From: aahz at pythoncraft.com (Aahz) Date: Sun, 4 Jan 2009 06:51:53 -0800 Subject: [Python-Dev] python.org OS In-Reply-To: <87wsdbv32j.fsf@xemacs.org> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> Message-ID: <20090104145153.GA23297@panix.com> On Sun, Jan 04, 2009, Stephen J. Turnbull wrote: > Steve Holden writes: >> >> Hey, isn't Ubuntu Debian-based? ... > > Ouch.
I don't actually use Ubuntu, but when everybody on my local LUG list from the "Linux should be Windows but cheaper" newbies to former NetBSD developers is grouching about upgrade hell, I don't see any real benefits to be gained. You're still going to need to go with a "don't think about fixing what ain't broke, and even if it's just kinda broke, Just Say No to that upgrade dope" policy. What kind of upgrade hell are you talking about? I have used several different Linux distributions, Windows, and OS X, and I have to say that upgrading Ubuntu has been by far the easiest and least painful of them all. Because I was lazy, last weekend I finally did a two-stage upgrade from 7.10 to 8.04 and then 8.10, with zero noticeable problems. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "roses are reddish, violets are bluish, Chanukah is 8 days, don't you wish you were Jewish?" From p.f.moore at gmail.com Sun Jan 4 17:21:40 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Sun, 4 Jan 2009 16:21:40 +0000 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <495FFD9F.6030009@v.loewis.de> Message-ID: <79990c6b0901040821tdf87c7dod01314dba6f0e2cc@mail.gmail.com> 2009/1/4 Brett Cannon : > Bazaar has been backwards-compatible with everything from my > understanding, so any changes they have made to the repository layout > or network protocol they use should not be an issue regardless of what > client or server versions are being used. As for the version number, > the team does monthly releases, so it has nothing to do with stability > and more with their timed release schedule. As far as I am aware (and it's not based on much practical experience, so I could be wrong) the big issue with older Bazaar formats is that they are substantially slower.
And there's some sort of interoperability constraint that I don't understand, which means that, although newer clients can read from older servers, the fact that the server uses an older format means that the slowness affects the client (it may be that it's possible to get around this with some level of juggling at the client). It would be very useful to have a good statement of the impact of different client/server versions. > As for Mercurial, I have been told their repository layout has not > changed since their first release and updates have been more about bug > fixes and speed improvements. According to Mercurial compatibility rules,

- New Mercurial should always be able to read old Mercurial repositories
- Old Mercurial should always be able to pull from new Mercurial servers
- Old Mercurial should break with a meaningful error message if it can't read a new Mercurial repository

which basically means, the server version used will not affect the client and you will always be able to upgrade the server version without pain (point 2, "pull", is about client/server interaction). The last point states that you can even *down*grade the server with minimal pain (for *real* conservatives :-)) In practical terms Mercurial is 100% compatible back to at least June 2007 (version 0.9.4, the earliest documented at http://www.selenic.com/mercurial/wiki/index.cgi/WhatsNew). Paul. From stephen at xemacs.org Sun Jan 4 17:26:26 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 05 Jan 2009 01:26:26 +0900 Subject: [Python-Dev] python.org OS In-Reply-To: <20090104145153.GA23297@panix.com> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> Message-ID: <87aba7xcjx.fsf@xemacs.org> Aahz writes: > all.
Because I was lazy, last weekend I finally did a two-stage upgrade > from 7.10 to 8.04 and then 8.10, with zero noticeable problems. The scary one is two independent reports of fstab corruption in the 8.04 to 8.10 upgrade. It is claimed to be unfixable by booting from CD, mounting the partition, and editing fstab: the editor saves but the fstab returns to the original corrupt state upon reboot. Several reports of inability to use formerly working Japanese input methods. Several reports of formerly working xorg.conf suddenly reverting to VESA 1024x768x8 (or worse). I don't use Ubuntu so this is all hearsay, but I do trust the ex-NetBSD dev to be reporting accurately. He's only having problems with his custom X11 keymap getting trashed, and something else relatively minor with Xorg. And for that ML this is huge; I don't recall so many screams on a commercial vendor upgrade since Red Hat went from HJ Liu libc to glibc 2. From skip at pobox.com Sun Jan 4 17:28:42 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 4 Jan 2009 10:28:42 -0600 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <4960BD3E.70809@v.loewis.de> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> Message-ID: <18784.58298.72695.148186@montanaro.dyndns.org> >> CC=icc ./configure --prefix=$HOME/tmp/icc-python >> >> That failed computing the size of size_t because it tries to incorrectly link >> with -lgcc_s. Martin> Can you provide the relevant section of config.log? What is the Martin> precise command that configure is invoking, and what is the Martin> precise error message that icc reports? Sorry, should have been more complete in my report. I configured with CC='icc' ../configure --prefix=$HOME/tmp/icc-python --without-gcc That officially succeeds but is worthless because it overrode CC=icc from the command line with CC=cc. On my Mac cc == gcc. 
So, I fix that, at least temporarily, to demonstrate the error I'm getting, run autoreconf then repeat the above configure line. The failure is

...
configure:10332: checking size of size_t
configure:10637: icc -o conftest -g -O2 conftest.c >&5
ld: library not found for -lgcc_s
configure:10641: $? = 1
configure: program exited with status 1
configure: failed program was:
...

with fairly innocuous conftest.c source. BTW, I'm using autoconf 2.63.

>> That failed because of a bug in configure.in:
>>
>> case $withval in
>> no)	CC=cc
>>	without_gcc=yes;;
>> yes)	CC=gcc
>>	without_gcc=no;;
>> *)	CC=$withval
>>	without_gcc=$withval;;
>>
>> It ignores the CC value on the command line.

Martin> I don't think it is a bug. --without-gcc *overrides* the CC
Martin> environment variable, rather than ignoring it.

I don't think that's right. There's no telling what the non-gcc compiler is called. As far as I can tell you can't give any arguments to --without-gcc. All values I tried yielded errors:

% ../configure --prefix=$HOME/tmp/icc-python --without-gcc=yes
configure: error: invalid package name: gcc=yes
% ../configure --prefix=$HOME/tmp/icc-python --without-gcc=icc
configure: error: invalid package name: gcc=icc

The only way I can see to tell it what compiler to use is to set CC and have the configure script use it. Skip From barry at python.org Sun Jan 4 17:45:09 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 4 Jan 2009 11:45:09 -0500 Subject: [Python-Dev] python.org OS In-Reply-To: <87aba7xcjx.fsf@xemacs.org> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> Message-ID: <25E0144E-B62A-4E7B-AFED-4A7007E0A91C@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 4, 2009, at 11:26 AM, Stephen J. Turnbull wrote: > Aahz writes: > >> all.
Because I was lazy, last weekend I finally did a two-stage >> upgrade >> from 7.10 to 8.04 and then 8.10, with zero noticeable problems. > > The scary one is two independent reports of fstab corruption in the > 8.04 to 8.10 upgrade. It is claimed to be unfixable by booting from > CD, mounting the partition, and editing fstab: the editor saves but > the fstab returns to the original corrupt state upon reboot. > > Several reports of inability to use formerly working Japanese input > methods. Several reports of formerly working xorg.conf suddenly > reverting to VESA 1024x768x8 (or worse). Just as a data point, we routinely upgrade our Ubuntu desktop machines as soon as the next version goes beta, exactly so we can help smooth out any hitches long before our users do. Some of us tend to be conservative, some radical, but we have a fairly wide range of systems and close interaction with our distro team. At any one time I have four or more Ubuntu boxes (servers, laptops, VMs, desktops) and I tend to upgrade them one at a time. For me, servers have been the easiest to upgrade. I've never had a problem with the OS specifically. I have had problems with certain applications (e.g. Moin) where it's usually a matter of sussing out all the new configuration changes, but I'm not sure you can avoid that. On the desktops, IME the most troublesome part is always X, mostly because it's a nightmare in its own right :) but also because my proprietary hardware drivers lag behind. I've had one or two problems with wireless. I've never had a problem with some of the crazier things I try, like encrypted file systems. Remember too, these are all beta upgrades. I'm not shy about contacting our excellent support team about these problems. I usually hold back at least one VM to upgrade to final, and that's always gone easier than any other OS, reflecting Aahz's experience. 
I don't doubt that people have problems upgrading, in fact for anything as complex as an operating system, it would be impossible to avoid. I'm biased of course, but I have to say our distro team does an excellent job here. - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWDnlnEjvBPtnXfVAQLIbwP+MiYoO0eBm77dc/nfyjHp593C1+CyprCQ 9TMNI5O5sD5VdiXWuhO5XSn6hvTf7tgZ4pAAQgYhcgapEoG3rYCjQ5RGs4jSdQTs SxLptzj4U2gODRFMNCOBspQf97krSGxp1UKFzRujUvPJP3NQw7Xp90FkT/rjLd3N iCAlNWtp2Hs= =FzKC -----END PGP SIGNATURE----- From martin at v.loewis.de Sun Jan 4 17:50:15 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 17:50:15 +0100 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <18784.58298.72695.148186@montanaro.dyndns.org> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> Message-ID: <4960E8C7.2030104@v.loewis.de> > ... > configure:10332: checking size of size_t > configure:10637: icc -o conftest -g -O2 conftest.c >&5 > ld: library not found for -lgcc_s I think you have the source of the problem right there: your icc installation is broken. It is unable to build even trivial programs. To confirm this theory, take the source of the program, and invoke it with the very same command line. If it gives you the same error, then this has nothing to do with autoconf, or Python, or anything: that command line *must* work, or else the compiler is useless. Apparently, icc choses to invoke ld(1) with an option -lgcc_s, and apparently, ld(1) can't find the library. Why icc choses to do so, and why ld(1) can't find it, I don't know - this is a question to ask on Macintosh or icc mailing lists. > Martin> I don't think it is a bug. --without-gcc *overrides* the CC > Martin> environment variable, rather than ignoring it. > > I don't think that's right. There's no telling what the non-gcc compiler is > called. 
Correct. To specify a different compiler, set the CC environment variable, and don't pass the --without-gcc flag. Regards, Martin From solipsis at pitrou.net Sun Jan 4 17:56:41 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 4 Jan 2009 16:56:41 +0000 (UTC) Subject: [Python-Dev] How to configure with icc on Mac? References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > Correct. To specify a different compiler, set the CC environment > variable, and don't pass the --without-gcc flag. Perhaps --without-gcc should be removed, if it's both useless and misleading? (note: I don't have an interest in the matter) From barry at python.org Sun Jan 4 18:03:56 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 4 Jan 2009 12:03:56 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> Message-ID: <35165DB2-F325-4DF0-B2CF-AB56C1C244BD@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 3, 2009, at 11:29 PM, Steve Holden wrote: > Brett Cannon wrote: > [...] >> I have been using bzr for all of my importlib work. It's worked out >> well sans the problem that SOMEONE Barry has not >> upgraded the bzr installation to support the newest wire protocol. >> > If you think *that's* a problem try getting him to write a simple > bloody > blog entry ... Ouch. That's the thanks I get. And you didn't even post a url to help increase my net.fame. > i-can-say-this-now-the-entry-is-published-ly y'rs - steve B. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWDr/XEjvBPtnXfVAQJu0gQAqBnayG5cwKJ7N7FRFRoaaeyT38xKjnZ0 Y1l8qKWx1ErN92rKPfeVf28XAqxaedE9rNUOPd2PVOEjU61Pbf7mHEY7yjFk7jZT nuHoAuvTUPh8Ip5nkVmfh4LzpX6Z/3uuP5+aMz1QI0nREXok9pPTtKsktZo4UiLU CJrk/Gzpl+g= =Y8O9 -----END PGP SIGNATURE----- From skip at pobox.com Sun Jan 4 18:05:26 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 4 Jan 2009 11:05:26 -0600 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <4960E8C7.2030104@v.loewis.de> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> Message-ID: <18784.60502.506536.677488@montanaro.dyndns.org> >> ... >> configure:10332: checking size of size_t >> configure:10637: icc -o conftest -g -O2 conftest.c >&5 >> ld: library not found for -lgcc_s Martin> I think you have the source of the problem right there: your icc Martin> installation is broken. It is unable to build even trivial Martin> programs. Hmmm... All I did was download the installer from Intel's site and run it. Martin> To confirm this theory, take the source of the program, and Martin> invoke it with the very same command line. If it gives you the Martin> same error, then this has nothing to do with autoconf, or Martin> Python, or anything: that command line *must* work, or else the Martin> compiler is useless. Martin> Apparently, icc choses to invoke ld(1) with an option -lgcc_s, and Martin> apparently, ld(1) can't find the library. Why icc choses to do so, Martin> and why ld(1) can't find it, I don't know - this is a question to Martin> ask on Macintosh or icc mailing lists. I'll take a look at that. Thanks. Martin> I don't think it is a bug. --without-gcc *overrides* the CC Martin> environment variable, rather than ignoring it. >> >> I don't think that's right. There's no telling what the non-gcc compiler is >> called. Martin> Correct. 
To specify a different compiler, set the CC environment Martin> variable, and don't pass the --without-gcc flag. Hmmm, OK... Why do we need two ways to spell "don't use gcc"? Skip From barry at python.org Sun Jan 4 18:11:52 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 4 Jan 2009 12:11:52 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <87wsdbv32j.fsf@xemacs.org> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> Message-ID: <69D3F015-FBC5-43A2-90D8-1D7A0A90656A@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 4, 2009, at 4:21 AM, Stephen J. Turnbull wrote: > Steve Holden writes: > >> Hey, isn't Ubuntu Debian-based? ... > > Ouch. I don't actually use Ubuntu, but when everybody on my local LUG > list from the "Linux should be Windows but cheaper" newbies to former > NetBSD developers is grouching about upgrade hell, I don't see any > real benefits to be gained. You're still going to need to go with a > "don't think about fixing what ain't broke, and even if it's just > kinda broke, Just Say No to that upgrade dope" policy. > > No, something has to be done about the "no upgrades" policy, or it's > not worth switching from Debian stable. One interesting thing about Ubuntu is that you can hook into the Personal Package Archives feature on Launchpad, so if you want to track newer versions of individual packages than either the distro or backports provides, you can do so using the standard package manager, with dependency tracking, etc. It's up to the PPA owner to make sure new releases are available to address things like security concerns and such. 
The bzr team is pretty good about that (I regularly run bzr from a PPA on my Ubuntu machines). B. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWDt2HEjvBPtnXfVAQKjGwP8DHd+vp7ELDEtMNBZxNIktPJXAbYOtnP2 uSvueN3TIoguTtCLTMKuObhHems9bttIodroWxLQ4/8hGEAI3yPS7FTadxdO00hK FjJQffKaEaGOQ/bKdr/nTxCvArfzhYCSwSfYMFsq/85roM3UpsHirT9oyWjWyJIw p5nOczWAi70= =opqm -----END PGP SIGNATURE----- From leif.walsh at gmail.com Sun Jan 4 18:39:34 2009 From: leif.walsh at gmail.com (Leif Walsh) Date: Sun, 4 Jan 2009 12:39:34 -0500 Subject: [Python-Dev] python.org OS In-Reply-To: <20090104145153.GA23297@panix.com> References: <200901031713.05985.doomster@knuut.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> Message-ID: I missed the beginning here; oh well. On Sun, Jan 4, 2009 at 9:51 AM, Aahz wrote: > On Sun, Jan 04, 2009, Stephen J. Turnbull wrote: >> Steve Holden writes: >>> >>> Hey, isn't Ubuntu Debian-based? ... >> >> Ouch. I don't actually use Ubuntu, but when everybody on my local LUG >> list from the "Linux should be Windows but cheaper" newbies to former >> NetBSD developers is grouching about upgrade hell, I don't see any >> real benefits to be gained. You're still going to need to go with a >> "don't think about fixing what ain't broke, and even if it's just >> kinda broke, Just Say No to that upgrade dope" policy. In my experience, Ubuntu tends to stray away from the Way Things Are Traditionally Done, and this can be problematic, sometimes. Because they change things so drastically, they can do some pretty neat stuff, but if I decide I want to tweak something on my own, I usually find that they haven't provided a mechanism for changing something they've already changed (or at least not for a few releases). 
So, I blunder on ahead and mess with it, and when it comes time to upgrade, their scripts are expecting it to be the way they left it, but obviously it isn't that way, so it breaks horribly. Of course, if you aren't trying to mess with all manner of weird things (I think my latest trouble came from messing with pam and shared memory to get my audio software to run smoothly), it should be perfectly stable and upgradable. -- Cheers, Leif From leif.walsh at gmail.com Sun Jan 4 18:44:18 2009 From: leif.walsh at gmail.com (Leif Walsh) Date: Sun, 4 Jan 2009 12:44:18 -0500 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <18784.60502.506536.677488@montanaro.dyndns.org> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> <18784.60502.506536.677488@montanaro.dyndns.org> Message-ID: On Sun, Jan 4, 2009 at 12:05 PM, wrote: > Hmmm, OK... Why do we need two ways to spell "don't use gcc"? Think of it like the two keys to the atom bomb. :-P -- Cheers, Leif From g.brandl at gmx.net Sun Jan 4 18:47:00 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sun, 04 Jan 2009 18:47:00 +0100 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <4960E8C7.2030104@v.loewis.de> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> Message-ID: Martin v. L?wis schrieb: >> ... >> configure:10332: checking size of size_t >> configure:10637: icc -o conftest -g -O2 conftest.c >&5 >> ld: library not found for -lgcc_s > > I think you have the source of the problem right there: your icc > installation is broken. It is unable to build even trivial programs. > > To confirm this theory, take the source of the program, and invoke > it with the very same command line. 
If it gives you the same error, > then this has nothing to do with autoconf, or Python, or anything: > that command line *must* work, or else the compiler is useless. > > Apparently, icc choses to invoke ld(1) with an option -lgcc_s, and > apparently, ld(1) can't find the library. Why icc choses to do so, > and why ld(1) can't find it, I don't know - this is a question to > ask on Macintosh or icc mailing lists. > >> Martin> I don't think it is a bug. --without-gcc *overrides* the CC >> Martin> environment variable, rather than ignoring it. >> >> I don't think that's right. There's no telling what the non-gcc compiler is >> called. > > Correct. To specify a different compiler, set the CC environment > variable, and don't pass the --without-gcc flag. If I read that code correctly, you can also do ./configure --with-gcc=icc which is however not much less confusing. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From skip at pobox.com Sun Jan 4 19:03:22 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 4 Jan 2009 12:03:22 -0600 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <4960E8C7.2030104@v.loewis.de> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> Message-ID: <18784.63978.749398.176884@montanaro.dyndns.org> >> ... >> configure:10332: checking size of size_t >> configure:10637: icc -o conftest -g -O2 conftest.c >&5 >> ld: library not found for -lgcc_s Martin> I think you have the source of the problem right there: your icc Martin> installation is broken. It is unable to build even trivial Martin> programs. 
Martin> To confirm this theory, take the source of the program, and Martin> invoke it with the very same command line. If it gives you the Martin> same error, then this has nothing to do with autoconf, or Martin> Python, or anything: that command line *must* work, or else the Martin> compiler is useless. It compiled without error. Hmmm... I added -v to the command. it does indeed ask for libgcc_s though it specifies it with a version number: ld -lcrt1.10.5.o -dynamic -arch x86_64 -weak_reference_mismatches non-weak -o conftest /var/folders/5q/5qTPn6xq2RaWqk+1Ytw3-U+++TI/-Tmp-//iccTRv8HW.o -L/opt/intel/Compiler/11.0/056/lib -L/usr/lib/i686-apple-darwin9/4.0.1/ -L/usr/lib/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/x86_64 -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/../../../i686-apple-darwin9/4.0.1/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/../../.. /opt/intel/Compiler/11.0/056/lib/libimf.a /opt/intel/Compiler/11.0/056/lib/libsvml.a /opt/intel/Compiler/11.0/056/lib/libipgo.a /opt/intel/Compiler/11.0/056/lib/libdecimal.a /opt/intel/Compiler/11.0/056/lib/libirc.a -lgcc_s.10.5 -lgcc -lSystemStubs -lmx -lSystem /opt/intel/Compiler/11.0/056/lib/libirc.a /opt/intel/Compiler/11.0/056/lib/libirc_s.a -ldl This is from the same shell where the configure run failed so I'm fairly certain it can't be related to a different set of environment variables. The only possible environment change would seem to be something configure imposed. 
I added -v to the ac_link command to see what it was generating for the ld command:

ld -lcrt1.o -dynamic -arch x86_64 -weak_reference_mismatches non-weak -macosx_version_min 10.3 -o conftest /var/folders/5q/5qTPn6xq2RaWqk+1Ytw3-U+++TI/-Tmp-//iccTIMK7D.o -L/opt/intel/Compiler/11.0/056/lib -L/usr/lib/i686-apple-darwin9/4.0.1/ -L/usr/lib/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/x86_64 -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/../../../i686-apple-darwin9/4.0.1/ -L/usr/lib/gcc/i686-apple-darwin9/4.0.1/../../.. /opt/intel/Compiler/11.0/056/lib/libimf.a /opt/intel/Compiler/11.0/056/lib/libsvml.a /opt/intel/Compiler/11.0/056/lib/libipgo.a /opt/intel/Compiler/11.0/056/lib/libdecimal.a /opt/intel/Compiler/11.0/056/lib/libirc.a -lgcc_s -lgcc -lSystemStubs -lmx -lSystem /opt/intel/Compiler/11.0/056/lib/libirc.a /opt/intel/Compiler/11.0/056/lib/libirc_s.a -ldl

I searched back through config.log looking for gcc_s. I noticed that the ld command used for -fno-strict-aliasing linked against -lgcc_s.10.5 but that the check for -Olimit 1500 linked against -lgcc_s. In between there is this block of code:

# Calculate the right deployment target for this build.
#
cur_target=`sw_vers -productVersion | sed 's/\(10\.[[0-9]]*\).*/\1/'`
if test ${cur_target} '>' 10.2; then
    cur_target=10.3
fi
if test "${UNIVERSAL_ARCHS}" = "all"; then
    # Ensure that the default platform for a 4-way
    # universal build is OSX 10.5, that's the first
    # OS release where 4-way builds make sense.
    cur_target='10.5'
fi
CONFIGURE_MACOSX_DEPLOYMENT_TARGET=${MACOSX_DEPLOYMENT_TARGET-${cur_target}}

# Make sure that MACOSX_DEPLOYMENT_TARGET is set in the
# environment with a value that is the same as what we'll use
# in the Makefile to ensure that we'll get the same compiler
# environment during configure and build time.
MACOSX_DEPLOYMENT_TARGET="$CONFIGURE_MACOSX_DEPLOYMENT_TARGET"
export MACOSX_DEPLOYMENT_TARGET
EXPORT_MACOSX_DEPLOYMENT_TARGET=''

I stuck in an echo after the export statement:

...
checking whether icc accepts -fno-strict-aliasing... yes
>>> 10.3
checking whether icc accepts -OPT:Olimit=0... (cached) no
...

When I installed Xcode I didn't include the 10.3 stuff since I don't run that version anymore, so it's quite possible I have a somehow "deficient" Xcode install. Still, the 10.3 stuff is not installed by default these days so it shouldn't be required. This code looks suspicious:

if test ${cur_target} '>' 10.2; then
    cur_target=10.3
fi

If I comment it out configure succeeds. This code dates from r65061 which states:

#3381 fix framework builds on 10.4

Maybe it should be

if test ${cur_target} '>' 10.2 -a ${cur_target} '<' 10.5 ; then
    cur_target=10.3
fi

I'll open a ticket. Skip From ncoghlan at gmail.com Sun Jan 4 19:39:36 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 05 Jan 2009 04:39:36 +1000 Subject: [Python-Dev] python.org OS In-Reply-To: <87aba7xcjx.fsf@xemacs.org> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> Message-ID: <49610268.6050709@gmail.com> Stephen J. Turnbull wrote: > And for that ML this is huge; I don't recall so many screams on a > commercial vendor upgrade since Red Hat went from HJ Liu libc to glibc > 2. I've had problems with Kubuntu's graphical updater crashing, but never anything a "sudo apt-get dist-upgrade" didn't fix. Although I'm still on Kubuntu 8.04 - KDE 4 isn't far enough along for me to consider switching to 8.10, so I probably won't be upgrading again until the next LTS release.
(Also, if the infrastructure committee *do* consider using Ubuntu for any of the python.org servers, then the LTS releases are the ones that should probably be considered, rather than the 6-monthly ones). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From lists at cheimes.de Sun Jan 4 20:49:50 2009 From: lists at cheimes.de (Christian Heimes) Date: Sun, 04 Jan 2009 20:49:50 +0100 Subject: [Python-Dev] py3k, bad sys.path and abort() In-Reply-To: <49605B83.3080204@gmail.com> References: <49605B83.3080204@gmail.com> Message-ID: Mark Hammond schrieb: > Would it be practical and desirable to handle this situation more > gracefully, possibly just leaving sys.std* set to None and letting > whatever exceptions then occur happen as normal without terminating the > process? Given it is an edge-case, I thought I'd open it here for > discussion before putting work into a patch or opening a bug. It's probably more helpful to keep the stderrprinter objects (Object/fileobject.c) than setting stderr and stdout to None. At least you can print something to stderr and stdout. On the other hand neither GUI apps nor NT services have standard streams. You'll probably get it right :) Christian From martin at v.loewis.de Sun Jan 4 23:38:51 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 04 Jan 2009 23:38:51 +0100 Subject: [Python-Dev] How to configure with icc on Mac? In-Reply-To: <18784.63978.749398.176884@montanaro.dyndns.org> References: <18784.36715.188514.447004@montanaro.dyndns.org> <4960BD3E.70809@v.loewis.de> <18784.58298.72695.148186@montanaro.dyndns.org> <4960E8C7.2030104@v.loewis.de> <18784.63978.749398.176884@montanaro.dyndns.org> Message-ID: <49613A7B.2000009@v.loewis.de> > This code looks suspicious: > > if test ${cur_target} '>' 10.2; then > cur_target=10.3 > fi > > If I comment it out configure succeeds. 
> This code dates from r65061

No, it dates from r45800:

    r45800 | ronald.oussoren | 2006-04-29 13:31:35 +0200 (Sa, 29. Apr 2006) | 2 lines

    Patch 1471883: --enable-universalsdk on Mac OS X

I think the intention is that the binaries built work on OSX 10.3 and
later, hence MACOSX_DEPLOYMENT_TARGET is set to 10.3.

Regards,
Martin

From martin at v.loewis.de Sun Jan 4 23:52:13 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Sun, 04 Jan 2009 23:52:13 +0100
Subject: [Python-Dev] How to configure with icc on Mac?
In-Reply-To: 
References: <18784.36715.188514.447004@montanaro.dyndns.org>
	<4960BD3E.70809@v.loewis.de>
	<18784.58298.72695.148186@montanaro.dyndns.org>
	<4960E8C7.2030104@v.loewis.de>
Message-ID: <49613D9D.9080002@v.loewis.de>

> Perhaps --without-gcc should be removed, if it's both useless and misleading?
> (note: I don't have an interest in the matter)

I had the same thought.

Regards,
Martin

From skip at pobox.com Mon Jan 5 00:28:25 2009
From: skip at pobox.com (skip at pobox.com)
Date: Sun, 4 Jan 2009 17:28:25 -0600
Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3?
Message-ID: <18785.17945.480606.294889@montanaro.dyndns.org>

Since print is now a builtin function why is there still a PRINT_EXPR
opcode?

Skip

From martin at v.loewis.de Mon Jan 5 00:30:16 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 05 Jan 2009 00:30:16 +0100
Subject: [Python-Dev] Wrong buildbot page for 3.x-stable
In-Reply-To: 
References: 
Message-ID: <49614688.10102@v.loewis.de>

> Isn't the latter supposed to point to the py3k branch buildbots?

Thanks for pointing that out - it is fixed now.

Martin

From benjamin at python.org Mon Jan 5 00:32:54 2009
From: benjamin at python.org (Benjamin Peterson)
Date: Sun, 4 Jan 2009 17:32:54 -0600
Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3?
In-Reply-To: <18785.17945.480606.294889@montanaro.dyndns.org> References: <18785.17945.480606.294889@montanaro.dyndns.org> Message-ID: <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> On Sun, Jan 4, 2009 at 5:28 PM, wrote: > Since print is now a builtin function why is there still a PRINT_EXPR > opcode? I believe it's used in the interactive interpreter to display the repr of an expression. -- Regards, Benjamin From skip at pobox.com Mon Jan 5 00:36:38 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 4 Jan 2009 17:36:38 -0600 Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3? In-Reply-To: <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> Message-ID: <18785.18438.876594.628144@montanaro.dyndns.org> >> Since print is now a builtin function why is there still a PRINT_EXPR >> opcode? Benjamin> I believe it's used in the interactive interpreter to display Benjamin> the repr of an expression. Wouldn't it make more sense for the interactive interpreter to call print(repr(expr)) ? Skip From benjamin at python.org Mon Jan 5 00:53:43 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 4 Jan 2009 17:53:43 -0600 Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3? In-Reply-To: <18785.18438.876594.628144@montanaro.dyndns.org> References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> Message-ID: <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> On Sun, Jan 4, 2009 at 5:36 PM, wrote: > > >> Since print is now a builtin function why is there still a PRINT_EXPR > >> opcode? > > Benjamin> I believe it's used in the interactive interpreter to display > Benjamin> the repr of an expression. 
> > Wouldn't it make more sense for the interactive interpreter to call > > print(repr(expr)) I'm not sure about the reasoning for keeping PRINT_EXPR alive. When I look at the code of PyRun_InteractiveOne, it seems it should be possible to kill it off. -- Regards, Benjamin From steve at holdenweb.com Mon Jan 5 01:46:52 2009 From: steve at holdenweb.com (Steve Holden) Date: Sun, 04 Jan 2009 19:46:52 -0500 Subject: [Python-Dev] I would like an svn account In-Reply-To: <35165DB2-F325-4DF0-B2CF-AB56C1C244BD@python.org> References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <35165DB2-F325-4DF0-B2CF-AB56C1C244BD@python.org> Message-ID: Barry Warsaw wrote: > On Jan 3, 2009, at 11:29 PM, Steve Holden wrote: > >> Brett Cannon wrote: >> [...] >>> I have been using bzr for all of my importlib work. It's worked out >>> well sans the problem that SOMEONE Barry has not >>> upgraded the bzr installation to support the newest wire protocol. >>> >> If you think *that's* a problem try getting him to write a simple bloody >> blog entry ... > > Ouch. That's the thanks I get. And you didn't even post a url to help > increase my net.fame. 
> http://onyourdesktop.blogspot.com/2008/12/barry-warsaw.html always-willing-to-oblige-ly yr's - steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From steve at holdenweb.com Mon Jan 5 02:51:32 2009 From: steve at holdenweb.com (Steve Holden) Date: Sun, 04 Jan 2009 20:51:32 -0500 Subject: [Python-Dev] python.org OS In-Reply-To: <87aba7xcjx.fsf@xemacs.org> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> Message-ID: <496167A4.9070003@holdenweb.com> Stephen J. Turnbull wrote: > Aahz writes: > > > all. Because I was lazy, last weekend I finally did a two-stage upgrade > > from 7.10 to 8.04 and then 8.10, with zero noticeable problems. > > The scary one is two independent reports of fstab corruption in the > 8.04 to 8.10 upgrade. It is claimed to be unfixable by booting from > CD, mounting the partition, and editing fstab: the editor saves but > the fstab returns to the original corrupt state upon reboot. > > Several reports of inability to use formerly working Japanese input > methods. Several reports of formerly working xorg.conf suddenly > reverting to VESA 1024x768x8 (or worse). > You can add my name to those reporting inexplicable reversion of video settings. I'm getting tired of it seeing a 640 x 480 screen. > I don't use Ubuntu so this is all hearsay, but I do trust the > ex-NetBSD dev to be reporting accurately. He's only having problems > with his custom X11 keymap getting trashed, and something else > relatively minor with Xorg. > > And for that ML this is huge; I don't recall so many screams on a > commercial vendor upgrade since Red Hat went from HJ Liu libc to glibc > 2. Ubuntu is a victim of its own success. They now have to deal with the same diversity of hardware environments as Windows. 
I hope that Canonical will find a way to stabilize things.

regards
Steve

-- 
Steve Holden +1 571 484 6266 +1 800 494 3119
Holden Web LLC http://www.holdenweb.com/

From stephen at xemacs.org Mon Jan 5 05:20:30 2009
From: stephen at xemacs.org (Stephen J. Turnbull)
Date: Mon, 05 Jan 2009 13:20:30 +0900
Subject: [Python-Dev] How to configure with icc on Mac?
In-Reply-To: <18784.58298.72695.148186@montanaro.dyndns.org>
References: <18784.36715.188514.447004@montanaro.dyndns.org>
	<4960BD3E.70809@v.loewis.de>
	<18784.58298.72695.148186@montanaro.dyndns.org>
Message-ID: <87zli6gz8x.fsf@xemacs.org>

skip at pobox.com writes:

> >> That failed because of a bug in configure.in:
> >>
> >> case $withval in
> >> no)  CC=cc
> >>      without_gcc=yes;;
> >> yes) CC=gcc
> >>      without_gcc=no;;
> >> *)   CC=$withval
> >>      without_gcc=$withval;;
> >>
> >> It ignores the CC value on the command line.
>
> Martin> I don't think it is a bug. --without-gcc *overrides* the CC
> Martin> environment variable, rather than ignoring it.
>
> I don't think that's right. There's no telling what the non-gcc compiler is
> called. As far as I can tell you can't give any arguments to --without-gcc.

That's right. The theory is that there's a vendor default compiler
installed as "cc" on PATH, and there's GCC. configure tries to encourage
use of GCC, but you can use the vendor compiler with --without-gcc, which
is 100% equivalent to --with-gcc=no.

But you *can* give arguments to --with-gcc. If you want to use a
different compiler, use --with-gcc=a-different-compiler. If autoconf and
configure.in are written correctly, GCC-dependent features will be
bracketed with

    case "$without_gcc" in
    no | gcc* )
        # test for and configure GCC feature here
        ;;
    icc* )  # optional
        # test for and configure similar icc feature here
        ;;
    * )
        # test for and configure similar portable feature here
        ;;
    esac

Don't flame me if you agree with me that this is a poor interface. The
option should be --with-compiler, of course.
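A side note on the `test ${cur_target} '>' 10.2` check discussed earlier in this thread: `test`'s string operators compare lexicographically, which happens to order 10.2 and 10.3 correctly but misorders version components once they reach two digits. A minimal sketch of the same pitfall in Python (the version strings here are purely illustrative):

```python
def version_tuple(v):
    # Convert "10.10" -> (10, 10) so comparisons are numeric, not lexicographic.
    return tuple(int(part) for part in v.split("."))

# String comparison happens to work while every component is one digit...
assert "10.3" > "10.2"
# ...but sorts two-digit components incorrectly ('1' < '9' character-wise):
assert "10.10" < "10.9"
# Comparing tuples of integers gives the intended ordering:
assert version_tuple("10.10") > version_tuple("10.9")
```

The same caveat applies to any shell `test`/`[` version check: splitting the string and comparing numeric fields is the robust approach.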
From stephen at xemacs.org Mon Jan 5 05:26:24 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 05 Jan 2009 13:26:24 +0900 Subject: [Python-Dev] python.org OS In-Reply-To: <25E0144E-B62A-4E7B-AFED-4A7007E0A91C@python.org> References: <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> <25E0144E-B62A-4E7B-AFED-4A7007E0A91C@python.org> Message-ID: <87y6xqgyz3.fsf@xemacs.org> Barry Warsaw writes: > > The scary one is two independent reports of fstab corruption in the > > 8.04 to 8.10 upgrade. It is claimed to be unfixable by booting from > > CD, mounting the partition, and editing fstab: the editor saves but > > the fstab returns to the original corrupt state upon reboot. > For me, servers have been the easiest to upgrade. I've never had a > problem with the OS specifically. I have had problems with certain > applications [...]. On the desktops, IME the most troublesome part > is always X, [...]. This certainly conforms to what I've seen on that LUG list. Since nobody on that list is running Ubuntu server, the "scary one" (quoted above) can probably be discounted, too. That looks like some user-friendliness run amok. From leif.walsh at gmail.com Mon Jan 5 07:32:18 2009 From: leif.walsh at gmail.com (Leif Walsh) Date: Mon, 5 Jan 2009 01:32:18 -0500 Subject: [Python-Dev] python.org OS In-Reply-To: <496167A4.9070003@holdenweb.com> References: <200901031713.05985.doomster@knuut.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> <496167A4.9070003@holdenweb.com> Message-ID: On Sun, Jan 4, 2009 at 8:51 PM, Steve Holden wrote: > Ubuntu is a victim of its own success. 
They now have to deal with the
> same diversity of hardware environments as Windows. I hope that
> Canonical will find a way to stabilize things.

I think it's actually worse. Microsoft can always (and, in my experience,
often does) restrict their support to hardware sets approved for "Windows
ver. N". Custom-built or upgraded ("tampered-with") boxes often get worse
(or no) support than OEM boxes. Linux distributions, on the other hand,
are expected to provide support for any hardware. In this respect, since
Ubuntu has a larger user base, and therefore a larger range of hardware
sets, yes, their job is difficult, but I'm not sure this is a
victimization, rather more of an inherent issue exacerbated by its
success.

Either way, it does kind of suck.

On Sun, Jan 4, 2009 at 11:26 PM, Stephen J. Turnbull wrote:
> This certainly conforms to what I've seen on that LUG list. Since
> nobody on that list is running Ubuntu server, the "scary one" (quoted
> above) can probably be discounted, too. That looks like some
> user-friendliness run amok.

True, most of the upgrade problems deal with packages that aren't in the
server install. This should be an easy one, but now I'd ask, why not use
Debian instead?

-- 
Cheers,
Leif

From kristjan at ccpgames.com Mon Jan 5 11:21:02 2009
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Mon, 5 Jan 2009 10:21:02 +0000
Subject: [Python-Dev] issue 3582
Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D1883EE3@exchis.ccp.ad.local>

http://bugs.python.org/issue3582

I submitted a patch last August, but have had no comments. Any thoughts?
Here is a suggested update to thread_nt.c. It has two significant changes:
1) it uses the new and safer _beginthreadex API to start a thread
2) it implements native TLS functions on NT, which are certain to be as
fast as possible.

Kristján

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From stephen at xemacs.org Mon Jan 5 11:38:21 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 05 Jan 2009 19:38:21 +0900 Subject: [Python-Dev] python.org OS In-Reply-To: References: <200901031713.05985.doomster@knuut.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <49603D8E.1090009@holdenweb.com> <87wsdbv32j.fsf@xemacs.org> <20090104145153.GA23297@panix.com> <87aba7xcjx.fsf@xemacs.org> <496167A4.9070003@holdenweb.com> Message-ID: <87hc4excki.fsf@xemacs.org> Leif Walsh writes: > True, most of the upgrade problems deal with packages that aren't in > the server install. This should be an easy one, but now I'd ask, why > not use Debian instead? You mean, "why not stick with Debian instead?" The reason is that Debian stable lags the real world dramatically. It's an extremely stable platform (in all meanings of "stable"), but quite restrictive. Ubuntu's LTS versions are much more up-to-date. Debian "sid" is out, obviously, and Debian "testing" has the problem that it is a fairly fast-moving target. Not so much as "sid", but if I were the sysadmin, I would not be happy with installing "testing" as of some date, knowing that many components would not correspond to that tag within hours, at most days. It's a hard question IMO. From kristjan at ccpgames.com Mon Jan 5 11:41:02 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 5 Jan 2009 10:41:02 +0000 Subject: [Python-Dev] ParseTuple question In-Reply-To: <200901021232.30572.doomster@knuut.de> References: <200901021232.30572.doomster@knuut.de> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D1883EFE@exchis.ccp.ad.local> Funny, I was just looking at this code. 
Anyway, whenever I need Unicode stuff as an argument, I use this idiom:

    PyObject *uO;
    PyObject *uU;
    Py_UNICODE *u;

    if (!PyArg_ParseTuple(args, "O", &uO))
        return 0;
    uU = PyUnicode_FromObject(uO);
    if (!uU)
        return 0;
    u = PyUnicode_AS_UNICODE(uU);

There is no automatic conversion in PyArg_ParseTuple, because there is no
temporary place to store the converted item. It does work the other way
round (turning a Unicode object to a char*) because the Unicode object has
a default conversion member slot to store it.

It should be possible to augment PyArg_ParseTuple to provide a slot for a
temporary object, something like:

    PyArg_ParseTuple(args, "u", &uU, &u)

K

-----Original Message-----
From: python-dev-bounces+kristjan=ccpgames.com at python.org
[mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf
Of Ulrich Eckhardt
Sent: 2. janúar 2009 11:33
To: python-dev at python.org
Subject: [Python-Dev] ParseTuple question

Hi!

I'm looking at NullImporter_init in import.c and especially at the call to
PyArg_ParseTuple there. What I'm wondering is what that call will do when
I call the function with a Unicode object. Will it convert the Unicode to
a char string first, will it return the Unicode object in a certain
(default) encoding, will it fail?

I'm working on the MS Windows CE port, and I don't have stat() there.
Also, I don't have GetFileAttributesA(char const*) there, so I need a
wchar_t (UTF-16) string anyway. What would be the best way to get one?

Thanks!

Uli
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com

From mal at egenix.com Mon Jan 5 13:13:56 2009
From: mal at egenix.com (M.-A. Lemburg)
Date: Mon, 05 Jan 2009 13:13:56 +0100
Subject: [Python-Dev] #ifdef __cplusplus?
In-Reply-To: References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> Message-ID: <4961F984.5060307@egenix.com> On 2009-01-03 04:15, Adam Olsen wrote: > On Fri, Jan 2, 2009 at 9:05 AM, M.-A. Lemburg wrote: >> On 2009-01-02 08:26, Adam Olsen wrote: >>> Python's malloc wrappers are pretty messy. Of your examples, only >>> unicode->str isn't obvious what the result is, as the rest are local >>> to that function. Even that is obvious when you glance at the line >>> above, where the size is calculated using sizeof(Py_UNICODE). >>> >>> If you're concerned about correctness then you'd do better eliminating >>> the redundant malloc wrappers and giving them names that directly >>> match what they can be used for. >> ??? Please read the comments in pymem.h and objimpl.h. > > I count 7 versions of malloc. Despite the names, none of them are > specific to PyObjects. It's pretty much impossible to know what > different ones do without a great deal of experience. Is it ? I suggest you read up on the Python memory management and the comments in the header files. The APIs are pretty straight forward... http://docs.python.org/c-api/allocation.html http://docs.python.org/c-api/memory.html > Only very specialized uses need to allocate PyObjects directly anyway. > Normally PyObject_{New,NewVar,GC_New,GC_NewVar} are better. Better for what ? The APIs you referenced are only used to allocate Python objects. The malloc() wrappers provide a sane interface not only for allocating Python objects, but also for arbitrary memory chunks, e.g. ones referenced by Python objects. >>> If the size calculation bothers you you could include the semantics of >>> the PyMem_New() API, which includes the cast you want. I've no >>> opposition to including casts in a single place like that (and it >>> would catch errors even with C compilation). 
>> You should always use PyMem_NEW() (capital letters), if you ever >> intend to benefit from the memory allocation debug facilities >> in the Python memory allocation interfaces. > > I don't see why such debugging should require a full recompile, rather > than having a hook inside the PyMem_Malloc (or even providing a > different PyMem_Malloc). Of course it does: you don't want the debug overhead in a production build. >> The difference between using the _NEW() macros and the _MALLOC() >> macros is that the first apply overflow checking for you. However, >> the added overhead only makes sense if these overflow haven't >> already been applied elsewhere. > > They provide assertions. There's no overflow checking in release builds. See above. Assertions are not meant to be checked in a production build. You use debug builds for debugging such low-level things. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 05 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From amauryfa at gmail.com Mon Jan 5 13:25:14 2009 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Mon, 5 Jan 2009 13:25:14 +0100 Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3? 
In-Reply-To: <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com>
References: <18785.17945.480606.294889@montanaro.dyndns.org>
	<1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com>
	<18785.18438.876594.628144@montanaro.dyndns.org>
	<1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com>
Message-ID: 

On Mon, Jan 5, 2009 at 00:53, Benjamin Peterson wrote:
> On Sun, Jan 4, 2009 at 5:36 PM, wrote:
>>
>> >> Since print is now a builtin function why is there still a PRINT_EXPR
>> >> opcode?
>>
>> Benjamin> I believe it's used in the interactive interpreter to display
>> Benjamin> the repr of an expression.
>>
>> Wouldn't it make more sense for the interactive interpreter to call
>>
>> print(repr(expr))
>
> I'm not sure about the reasoning for keeping PRINT_EXPR alive. When I
> look at the code of PyRun_InteractiveOne, it seems it should be
> possible to kill it off.

How would you display multiple lines, like:

>>> for x in range(3):
...     x, x * x
...
(0, 0)
(1, 1)
(2, 4)
>>> if 1:
...     "some line"
...     "another line"
...
'some line'
'another line'

OTOH this seems an obscure feature. "for" and "if" are statements after all.

-- 
Amaury Forgeot d'Arc

From jimjjewett at gmail.com Mon Jan 5 16:12:34 2009
From: jimjjewett at gmail.com (Jim Jewett)
Date: Mon, 5 Jan 2009 10:12:34 -0500
Subject: [Python-Dev] [Python-checkins] r68182 - in python/trunk: Lib/decimal.py Misc/NEWS
In-Reply-To: <20090102230709.032011E4002@bag.python.org>
References: <20090102230709.032011E4002@bag.python.org>
Message-ID: 

Out of curiosity, why are these constants for internal use only? Is there
concern that people might start using "is", or is it just to keep the
(beyond the spec) API small, or ...?

-jJ

On Fri, Jan 2, 2009 at 6:07 PM, mark. dickinson wrote:
> Author: mark.dickinson
> Date: Sat Jan 3 00:07:08 2009
> New Revision: 68182
>
> Log:
> Issue #4812: add missing underscore prefix to some internal-use-only
> constants in the decimal module. (Dec_0 becomes _Dec_0, etc.)
> > > > Modified: > python/trunk/Lib/decimal.py > python/trunk/Misc/NEWS > > Modified: python/trunk/Lib/decimal.py > ============================================================================== > --- python/trunk/Lib/decimal.py (original) > +++ python/trunk/Lib/decimal.py Sat Jan 3 00:07:08 2009 > @@ -216,7 +216,7 @@ > if args: > ans = _dec_from_triple(args[0]._sign, args[0]._int, 'n', True) > return ans._fix_nan(context) > - return NaN > + return _NaN > > class ConversionSyntax(InvalidOperation): > """Trying to convert badly formed string. > @@ -226,7 +226,7 @@ > syntax. The result is [0,qNaN]. > """ > def handle(self, context, *args): > - return NaN > + return _NaN > > class DivisionByZero(DecimalException, ZeroDivisionError): > """Division by 0. > @@ -242,7 +242,7 @@ > """ > > def handle(self, context, sign, *args): > - return Infsign[sign] > + return _Infsign[sign] > > class DivisionImpossible(InvalidOperation): > """Cannot perform the division adequately. > @@ -253,7 +253,7 @@ > """ > > def handle(self, context, *args): > - return NaN > + return _NaN > > class DivisionUndefined(InvalidOperation, ZeroDivisionError): > """Undefined result of division. > @@ -264,7 +264,7 @@ > """ > > def handle(self, context, *args): > - return NaN > + return _NaN > > class Inexact(DecimalException): > """Had to round, losing information. > @@ -290,7 +290,7 @@ > """ > > def handle(self, context, *args): > - return NaN > + return _NaN > > class Rounded(DecimalException): > """Number got rounded (not necessarily changed during rounding). 
> @@ -340,15 +340,15 @@ > def handle(self, context, sign, *args): > if context.rounding in (ROUND_HALF_UP, ROUND_HALF_EVEN, > ROUND_HALF_DOWN, ROUND_UP): > - return Infsign[sign] > + return _Infsign[sign] > if sign == 0: > if context.rounding == ROUND_CEILING: > - return Infsign[sign] > + return _Infsign[sign] > return _dec_from_triple(sign, '9'*context.prec, > context.Emax-context.prec+1) > if sign == 1: > if context.rounding == ROUND_FLOOR: > - return Infsign[sign] > + return _Infsign[sign] > return _dec_from_triple(sign, '9'*context.prec, > context.Emax-context.prec+1) > > @@ -1171,12 +1171,12 @@ > if self._isinfinity(): > if not other: > return context._raise_error(InvalidOperation, '(+-)INF * 0') > - return Infsign[resultsign] > + return _Infsign[resultsign] > > if other._isinfinity(): > if not self: > return context._raise_error(InvalidOperation, '0 * (+-)INF') > - return Infsign[resultsign] > + return _Infsign[resultsign] > > resultexp = self._exp + other._exp > > @@ -1226,7 +1226,7 @@ > return context._raise_error(InvalidOperation, '(+-)INF/(+-)INF') > > if self._isinfinity(): > - return Infsign[sign] > + return _Infsign[sign] > > if other._isinfinity(): > context._raise_error(Clamped, 'Division by infinity') > @@ -1329,7 +1329,7 @@ > ans = context._raise_error(InvalidOperation, 'divmod(INF, INF)') > return ans, ans > else: > - return (Infsign[sign], > + return (_Infsign[sign], > context._raise_error(InvalidOperation, 'INF % x')) > > if not other: > @@ -1477,7 +1477,7 @@ > if other._isinfinity(): > return context._raise_error(InvalidOperation, 'INF // INF') > else: > - return Infsign[self._sign ^ other._sign] > + return _Infsign[self._sign ^ other._sign] > > if not other: > if self: > @@ -1732,12 +1732,12 @@ > if not other: > return context._raise_error(InvalidOperation, > 'INF * 0 in fma') > - product = Infsign[self._sign ^ other._sign] > + product = _Infsign[self._sign ^ other._sign] > elif other._exp == 'F': > if not self: > return 
context._raise_error(InvalidOperation, > '0 * INF in fma') > - product = Infsign[self._sign ^ other._sign] > + product = _Infsign[self._sign ^ other._sign] > else: > product = _dec_from_triple(self._sign ^ other._sign, > str(int(self._int) * int(other._int)), > @@ -2087,7 +2087,7 @@ > if not self: > return context._raise_error(InvalidOperation, '0 ** 0') > else: > - return Dec_p1 > + return _Dec_p1 > > # result has sign 1 iff self._sign is 1 and other is an odd integer > result_sign = 0 > @@ -2109,19 +2109,19 @@ > if other._sign == 0: > return _dec_from_triple(result_sign, '0', 0) > else: > - return Infsign[result_sign] > + return _Infsign[result_sign] > > # Inf**(+ve or Inf) = Inf; Inf**(-ve or -Inf) = 0 > if self._isinfinity(): > if other._sign == 0: > - return Infsign[result_sign] > + return _Infsign[result_sign] > else: > return _dec_from_triple(result_sign, '0', 0) > > # 1**other = 1, but the choice of exponent and the flags > # depend on the exponent of self, and on whether other is a > # positive integer, a negative integer, or neither > - if self == Dec_p1: > + if self == _Dec_p1: > if other._isinteger(): > # exp = max(self._exp*max(int(other), 0), > # 1-context.prec) but evaluating int(other) directly > @@ -2154,7 +2154,7 @@ > if (other._sign == 0) == (self_adj < 0): > return _dec_from_triple(result_sign, '0', 0) > else: > - return Infsign[result_sign] > + return _Infsign[result_sign] > > # from here on, the result always goes through the call > # to _fix at the end of this function. 
> @@ -2674,9 +2674,9 @@ > """ > # if one is negative and the other is positive, it's easy > if self._sign and not other._sign: > - return Dec_n1 > + return _Dec_n1 > if not self._sign and other._sign: > - return Dec_p1 > + return _Dec_p1 > sign = self._sign > > # let's handle both NaN types > @@ -2686,51 +2686,51 @@ > if self_nan == other_nan: > if self._int < other._int: > if sign: > - return Dec_p1 > + return _Dec_p1 > else: > - return Dec_n1 > + return _Dec_n1 > if self._int > other._int: > if sign: > - return Dec_n1 > + return _Dec_n1 > else: > - return Dec_p1 > - return Dec_0 > + return _Dec_p1 > + return _Dec_0 > > if sign: > if self_nan == 1: > - return Dec_n1 > + return _Dec_n1 > if other_nan == 1: > - return Dec_p1 > + return _Dec_p1 > if self_nan == 2: > - return Dec_n1 > + return _Dec_n1 > if other_nan == 2: > - return Dec_p1 > + return _Dec_p1 > else: > if self_nan == 1: > - return Dec_p1 > + return _Dec_p1 > if other_nan == 1: > - return Dec_n1 > + return _Dec_n1 > if self_nan == 2: > - return Dec_p1 > + return _Dec_p1 > if other_nan == 2: > - return Dec_n1 > + return _Dec_n1 > > if self < other: > - return Dec_n1 > + return _Dec_n1 > if self > other: > - return Dec_p1 > + return _Dec_p1 > > if self._exp < other._exp: > if sign: > - return Dec_p1 > + return _Dec_p1 > else: > - return Dec_n1 > + return _Dec_n1 > if self._exp > other._exp: > if sign: > - return Dec_n1 > + return _Dec_n1 > else: > - return Dec_p1 > - return Dec_0 > + return _Dec_p1 > + return _Dec_0 > > > def compare_total_mag(self, other): > @@ -2771,11 +2771,11 @@ > > # exp(-Infinity) = 0 > if self._isinfinity() == -1: > - return Dec_0 > + return _Dec_0 > > # exp(0) = 1 > if not self: > - return Dec_p1 > + return _Dec_p1 > > # exp(Infinity) = Infinity > if self._isinfinity() == 1: > @@ -2927,15 +2927,15 @@ > > # ln(0.0) == -Infinity > if not self: > - return negInf > + return _negInf > > # ln(Infinity) = Infinity > if self._isinfinity() == 1: > - return Inf > + return _Inf > > # ln(1.0) 
== 0.0 > - if self == Dec_p1: > - return Dec_0 > + if self == _Dec_p1: > + return _Dec_0 > > # ln(negative) raises InvalidOperation > if self._sign == 1: > @@ -3007,11 +3007,11 @@ > > # log10(0.0) == -Infinity > if not self: > - return negInf > + return _negInf > > # log10(Infinity) = Infinity > if self._isinfinity() == 1: > - return Inf > + return _Inf > > # log10(negative or -Infinity) raises InvalidOperation > if self._sign == 1: > @@ -3063,7 +3063,7 @@ > > # logb(+/-Inf) = +Inf > if self._isinfinity(): > - return Inf > + return _Inf > > # logb(0) = -Inf, DivisionByZero > if not self: > @@ -3220,7 +3220,7 @@ > return ans > > if self._isinfinity() == -1: > - return negInf > + return _negInf > if self._isinfinity() == 1: > return _dec_from_triple(0, '9'*context.prec, context.Etop()) > > @@ -3243,7 +3243,7 @@ > return ans > > if self._isinfinity() == 1: > - return Inf > + return _Inf > if self._isinfinity() == -1: > return _dec_from_triple(1, '9'*context.prec, context.Etop()) > > @@ -5490,15 +5490,15 @@ > ##### Useful Constants (internal use only) ################################ > > # Reusable defaults > -Inf = Decimal('Inf') > -negInf = Decimal('-Inf') > -NaN = Decimal('NaN') > -Dec_0 = Decimal(0) > -Dec_p1 = Decimal(1) > -Dec_n1 = Decimal(-1) > +_Inf = Decimal('Inf') > +_negInf = Decimal('-Inf') > +_NaN = Decimal('NaN') > +_Dec_0 = Decimal(0) > +_Dec_p1 = Decimal(1) > +_Dec_n1 = Decimal(-1) > > -# Infsign[sign] is infinity w/ that sign > -Infsign = (Inf, negInf) > +# _Infsign[sign] is infinity w/ that sign > +_Infsign = (_Inf, _negInf) > > > > > Modified: python/trunk/Misc/NEWS > ============================================================================== > --- python/trunk/Misc/NEWS (original) > +++ python/trunk/Misc/NEWS Sat Jan 3 00:07:08 2009 > @@ -108,6 +108,9 @@ > Library > ------- > > +- Issue #4812: add missing underscore prefix to some internal-use-only > + constants in the decimal module. (Dec_0 becomes _Dec_0, etc.) 
> + > - Issue #4795: inspect.isgeneratorfunction() returns False instead of None when > the function is not a generator. > > _______________________________________________ > Python-checkins mailing list > Python-checkins at python.org > http://mail.python.org/mailman/listinfo/python-checkins > From duncan.mcgreggor at gmail.com Mon Jan 5 18:10:32 2009 From: duncan.mcgreggor at gmail.com (Duncan McGreggor) Date: Mon, 5 Jan 2009 11:10:32 -0600 Subject: [Python-Dev] address manipulation in the standard lib Message-ID: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> Last Fall, Guido opened a ticket to include Google's ipaddr.py in the standard lib: http://bugs.python.org/issue3959 There has been some recent discussion on that ticket, enough so that it might benefit everyone if it was moved on to the dev list. I do recommend reading that ticket, though -- lots of good perspectives are represented. The two libraries that are being discussed the most for possible inclusion are the following: * http://code.google.com/p/ipaddr-py/wiki/IPAddrExmples * http://code.google.com/p/netaddr/wiki/NetAddrExamples The most immediately obvious differences between the two are: * ipaddr supports subnet/supernet/net exclusions * netaddr supports EUI/MAC address manipulations * the netaddr API differentiates between an IP and a CIDR block * netaddr supports wildcard notation * netaddr supports binary representations of addresses * ipaddr is one module whereas netaddr consists of several (as well as IANA data for such things as vendor lookups on MAC addresses) * ipaddr benchmarks as faster than netaddr * netaddr is currently PEP-8 compliant That's a quick proto-assessment based on looking at examples and unit tests and didn't include a thorough evaluation of the code itself. Martin provided some very nice guidelines in a comment on the ticket: "I think Guido's original message summarizes [what we need]: a module that fills a gap for address manipulations... 
In addition, it should have all the organisational qualities (happy user base, determined maintainers, copyright forms, documentation, tests). As to what precisely its API should be - that is for the experts (i.e. you) to determine. I personally think performance is important, in addition to a well-designed, useful API. Conformance to PEP 8 is also desirable." I'm planning to chat with both David Moss (netaddr) and Peter Moody (ipaddr) on the mail lists about API details, and I encourage others to do this as well. As for this list, it's probably important to define the limits of the desired feature set for an ip address manipulation library: * do we want to limit it to IP (i.e. no EUI/MAC support)? * do we want a single module or is a package acceptable? * what features would folks consider essential or highly desirable (details on this will be discussed on the project mail lists) * other thoughts? d From guido at python.org Mon Jan 5 18:44:12 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Jan 2009 09:44:12 -0800 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> Message-ID: On Mon, Jan 5, 2009 at 9:10 AM, Duncan McGreggor wrote: > Last Fall, Guido opened a ticket to include Google's ipaddr.py in the > standard lib: > http://bugs.python.org/issue3959 > > There has been some recent discussion on that ticket, enough so that > it might benefit everyone if it was moved on to the dev list. I do > recommend reading that ticket, though -- lots of good perspectives are > represented. 
> > The two libraries that are being discussed the most for possible > inclusion are the following: > * http://code.google.com/p/ipaddr-py/wiki/IPAddrExmples > * http://code.google.com/p/netaddr/wiki/NetAddrExamples > > The most immediately obvious differences between the two are: > * ipaddr supports subnet/supernet/net exclusions > * netaddr supports EUI/MAC address manipulations > * the netaddr API differentiates between an IP and a CIDR block > * netaddr supports wildcard notation > * netaddr supports binary representations of addresses > * ipaddr is one module whereas netaddr consists of several (as well > as IANA data for such things as vendor lookups on MAC addresses) > * ipaddr benchmarks as faster than netaddr > * netaddr is currently PEP-8 compliant > > That's a quick proto-assessment based on looking at examples and unit > tests and didn't include a thorough evaluation of the code itself. Thanks for the summary! I've been on vacation and unable to follow the details. Note that I have no vested interest in Google's module except knowing it has many happy users (I have never used it myself). > Martin provided some very nice guidelines in a comment on the ticket: > > "I think Guido's original message summarizes [what we need]: a module > that fills a gap for address manipulations... In addition, it should > have all the organisational qualities (happy user base, determined > maintainers, copyright forms, documentation, tests). As to what > precisely its API should be - that is for the experts (i.e. you) to > determine. I personally think performance is important, in addition to > a well-designed, useful API. Conformance to PEP 8 is also desirable." > > I'm planning to chat with both David Moss (netaddr) and Peter Moody > (ipaddr) on the mail lists about API details, and I encourage others > to do this as well. 
As for this list, it's probably important to > define the limits of the desired feature set for an ip address > manipulation library: > * do we want to limit it to IP (i.e. no EUI/MAC support)? I don't want to exclude EUI/MAC support, but it seems quite a separate (and much more specialized) application area, so it's probably best to keep it separate (even if it may make sense to use a common (abstract or concrete) base class or just have similar APIs). > * do we want a single module or is a package acceptable? I don't care either way. > * what features would folks consider essential or highly desirable > (details on this will be discussed on the project mail lists) > * other thoughts? How about a merger? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From duncan.mcgreggor at gmail.com Mon Jan 5 19:00:50 2009 From: duncan.mcgreggor at gmail.com (Duncan McGreggor) Date: Mon, 5 Jan 2009 12:00:50 -0600 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> Message-ID: <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> On Mon, Jan 5, 2009 at 11:44 AM, Guido van Rossum wrote: > On Mon, Jan 5, 2009 at 9:10 AM, Duncan McGreggor > wrote: >> Last Fall, Guido opened a ticket to include Google's ipaddr.py in the >> standard lib: >> http://bugs.python.org/issue3959 >> >> There has been some recent discussion on that ticket, enough so that >> it might benefit everyone if it was moved on to the dev list. I do >> recommend reading that ticket, though -- lots of good perspectives are >> represented.
>> >> The two libraries that are being discussed the most for possible >> inclusion are the following: >> * http://code.google.com/p/ipaddr-py/wiki/IPAddrExmples >> * http://code.google.com/p/netaddr/wiki/NetAddrExamples >> >> The most immediately obvious differences between the two are: >> * ipaddr supports subnet/supernet/net exclusions >> * netaddr supports EUI/MAC address manipulations >> * the netaddr API differentiates between an IP and a CIDR block >> * netaddr supports wildcard notation >> * netaddr supports binary representations of addresses >> * ipaddr is one module whereas netaddr consists of several (as well >> as IANA data for such things as vendor lookups on MAC addresses) >> * ipaddr benchmarks as faster than netaddr >> * netaddr is currently PEP-8 compliant >> >> That's a quick proto-assessment based on looking at examples and unit >> tests and didn't include a thorough evaluation of the code itself. > > Thanks for the summary! I've been on vacation and unable to follow the > details. Note that I have no vested interest in Google's module except > knowing it has many happy users (I have never used it myself). > >> Martin provided some very nice guidelines in a comment on the ticket: >> >> "I think Guido's original message summarizes [what we need]: a module >> that fills a gap for address manipulations... In addition, it should >> have all the organisational qualities (happy user base, determined >> maintainers, copyright forms, documentation, tests). As to what >> precisely its API should be - that is for the experts (i.e. you) to >> determine. I personally think performance is important, in addition to >> a well-designed, useful API. Conformance to PEP 8 is also desirable." >> >> I'm planning to chat with both David Moss (netaddr) and Peter Moody >> (ipaddr) on the mail lists about API details, and I encourage others >> to do this as well. 
>> As for this list, it's probably important to >> define the limits of the desired feature set for an ip address >> manipulation library: > >> * do we want to limit it to IP (i.e. no EUI/MAC support)? > > I don't want to exclude EUI/MAC support, but it seems quite a separate > (and much more specialized) application area, so it's probably best to > keep it separate (even if it may make sense to use a common (abstract > or concrete) base class or just have similar APIs). > >> * do we want a single module or is a package acceptable? > > I don't care either way. > >> * what features would folks consider essential or highly desirable >> (details on this will be discussed on the project mail lists) >> * other thoughts? > > How about a merger? I think that's a brilliant idea. David and Peter, logistics aside, what do you think of (or how do you feel about) this suggestion? Or, if not a complete merger, unifying everything that is desired in the standard library. The code not included (e.g. EUI/MAC address stuff, vendor lookups, etc.) could continue its existence as a project, using the stdlib as a basis... d From facundobatista at gmail.com Mon Jan 5 19:42:54 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Mon, 5 Jan 2009 16:42:54 -0200 Subject: [Python-Dev] Roundup version numbers Message-ID: Hi! To create this issue compilation [0] I use roundup through its web interface. For example, to know which version names correspond to each number, I consulted them through: http://bugs.python.org/version But since two weeks ago, this list was trimmed down. I think it was done so that bugs can no longer be submitted for older Python versions, which is great, but there are some bugs assigned to older versions (for example, [1]). So, question: Should I use another way to query the version number-name relationship (to see them all)? Or should those issues that point to older Python versions be updated? Thank you!!
[0] http://www.taniquetil.com.ar/cgi-bin/pytickets.py [1] http://bugs.python.org/issue?@search_text=&title=&@columns=title&id=&@columns=id&creation=&creator=&activity=&@columns=activity&@sort=activity&actor=&nosy=&type=&components=&versions=4&dependencies=&assignee=&keywords=&priority=&@group=priority&status=1&@columns=status&resolution=&@pagesize=50&@startwith=0&@queryname=&@old-queryname=&@action=search -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From guido at python.org Mon Jan 5 20:48:26 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Jan 2009 11:48:26 -0800 Subject: [Python-Dev] Why is there still a PRINT_EXPR opcode in Python 3? In-Reply-To: References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: On Mon, Jan 5, 2009 at 4:25 AM, Amaury Forgeot d'Arc wrote: > On Mon, Jan 5, 2009 at 00:53, Benjamin Peterson wrote: >> On Sun, Jan 4, 2009 at 5:36 PM, wrote: >>> >>> >> Since print is now a builtin function why is there still a PRINT_EXPR >>> >> opcode? >>> >>> Benjamin> I believe it's used in the interactive interpreter to display >>> Benjamin> the repr of an expression. >>> >>> Wouldn't it make more sense for the interactive interpreter to call >>> >>> print(repr(expr)) >> >> I'm not sure about the reasoning for keeping PRINT_EXPR alive. When I >> look at the code of PyRun_InteractiveOne, it seems it should be >> possible to kill it off. > > How would you display multiple lines, like: > >>>> for x in range(3): > ... x, x * x > ... > (0, 0) > (1, 1) > (2, 4) >>>> if 1: > ... "some line" > ... "another line" > ... > 'some line' > 'another line' > > OTOH this seems an obscure feature. "for" and "if" are statements after all. That feature may be obscure but should not be killed. 
It'd be a bit tricky to remove the PRINT_EXPR call since it doesn't invoke the print() function -- it invokes something more basic that goes through sys.displayhook. I don't care about the opcode, but the semantics should remain unchanged. Keeping the opcode is probably easiest. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From solipsis at pitrou.net Mon Jan 5 21:01:37 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 5 Jan 2009 20:01:37 +0000 (UTC) Subject: [Python-Dev] Small question about BufferedRandom spec References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: Hello, Amaury (mainly) and I are rewriting the IO stack in C, and there is a small thing in PEP 3116 about the BufferedRandom object that I'd like to clarify: "Q: Do we want to mandate in the specification that switching between reading and writing on a read-write object implies a .flush()? Or is that an implementation convenience that users should not rely on?" Is it ok if I assume that the answer is "it is an implementation convenience that users should not rely on"? The reason is that I'm overhauling BufferedRandom objects to use a single shared buffer, so as to optimize interleaved reads and writes. Thanks Antoine. From martin at v.loewis.de Mon Jan 5 21:36:36 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 05 Jan 2009 21:36:36 +0100 Subject: [Python-Dev] Roundup version numbers In-Reply-To: References: Message-ID: <49626F54.7010103@v.loewis.de> > But since two weeks ago, this list was trimmed down. I think that it > was to not be able to submit bugs for older Python versions, which is > great, but there're some bugs assigned to older versions (for example, > [1]). All true.
> Should I use another way to query the version number-name relationship > (to see them all)? Or those issues that point to older Python versions > should be updated? All existing associations between versions and issues stay as they are. I don't quite understand what the problem is. Yes, the versions were "retired" (in roundup speak), and yes, issues that were originally associated with the retired versions stay associated. So what is the problem with that? If you want to find out all versions, including retired ones, you need to iteratively go through the list, asking for, say http://bugs.python.org/version4 (there might also be an XML-RPC interface to do that). OTOH, if you had downloaded the list of versions once, you can trust that each id continues to mean what it meant on creation, so you might want to look only for updates (i.e. newer ids). If you are suggesting that all such issues should be retargeted at newer versions: certainly not: Many issues were for old versions, and have long been closed. For the open issues, such retargetting would have to go along with a check whether the issue still exists in the newer versions. Regards, Martin From python at hda3.com Mon Jan 5 22:06:23 2009 From: python at hda3.com (Peter Moody) Date: Mon, 5 Jan 2009 13:06:23 -0800 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> Message-ID: <8517e9350901051306l4bb490f0vb328f8046c67949a@mail.gmail.com> >> How about a merger? > > I think that's a brilliant idea. David and Peter, logistics aside, > what do you think of (or how to you feel about) this suggestion? the devil, as they say, is in the details :). I'd be interested to know what form this merger would take. 
WRT v4/v6 manipulation, it seems that ipaddr and netaddr do very similar things, though with different strategies. I've never worked on integrating code into a project like this before, so I'm not sure what a merger like this would end up looking like. Having said that, I am fine with this in principle. Cheers, /peter From guido at python.org Mon Jan 5 22:21:53 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Jan 2009 13:21:53 -0800 Subject: [Python-Dev] Small question about BufferedRandom spec In-Reply-To: References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: On Mon, Jan 5, 2009 at 12:01 PM, Antoine Pitrou wrote: > Amaury (mainly) and I are rewriting the IO stack in C, Very cool! > and there is a small > thing in PEP 3116 about the BufferedRandom object that I'd like to clarify: > > "Q: Do we want to mandate in the specification that switching between reading > and writing on a read-write object implies a .flush()? Or is that an > implementation convenience that users should not rely on?" > > Is it ok if I assume that the answer is "it is an implementation convenience > that users should not rely on"? The reason is that I'm overhauling > BufferedRandom objects to use a single shared buffer, so as to optimize > interleaved reads and writes. I think it's fine if the flush to the file is optional, as long as this is clearly documented. However, the semantics of interleaving reads and writes, with and without seek calls in between, should be well-defined and correct/useful, so that it behaves the same regardless of the buffer size. Ditto for the flush call currently implied by a seek -- if you can satisfy the seek by moving where you are in the buffer without flushing, that's fine IMO, but it should be well documented.
It should also be documented that a flush still *may* occur, of course. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From skippy.hammond at gmail.com Mon Jan 5 23:17:35 2009 From: skippy.hammond at gmail.com (Mark Hammond) Date: Tue, 06 Jan 2009 09:17:35 +1100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <4961F984.5060307@egenix.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> <4961F984.5060307@egenix.com> Message-ID: <496286FF.1090107@gmail.com> On 5/01/2009 11:13 PM, M.-A. Lemburg wrote: > See above. Assertions are not meant to be checked in a production > build. You use debug builds for debugging such low-level things. Although ironically, assertions have been disabled in debug builds on Windows - http://bugs.python.org/issue4804 Cheers, Mark. From guido at python.org Mon Jan 5 23:33:13 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Jan 2009 14:33:13 -0800 Subject: [Python-Dev] Flushing email queue Message-ID: If there's anything (be it a python-dev issue, or something for python-committers, or a bug) that needs my attention, please resend. In order to start getting work done today, I am archiving all python-related email from the last 2.5 weeks unread that doesn't have me explicitly in the To: or CC: header. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From helmert at informatik.uni-freiburg.de Mon Jan 5 23:48:13 2009 From: helmert at informatik.uni-freiburg.de (Malte Helmert) Date: Mon, 05 Jan 2009 23:48:13 +0100 Subject: [Python-Dev] another Python Bug Day? Message-ID: Dear python-dev group, are there any plans to organize another Python Bug Day in the near future? It's been a while since the last one (last May). I might be misremembering, but I think at one time there was even talk of having one bug day every month. For people who are not core developers but would still like to contribute, the Bug Days are quite exciting events.
It would be great if they could keep going. Malte From facundobatista at gmail.com Tue Jan 6 01:17:47 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Mon, 5 Jan 2009 22:17:47 -0200 Subject: [Python-Dev] Roundup version numbers In-Reply-To: <49626F54.7010103@v.loewis.de> References: <49626F54.7010103@v.loewis.de> Message-ID: 2009/1/5 "Martin v. Löwis" : > All existing associations between versions and issues stay as they are. > I don't quite understand what the problem is. Yes, the versions were > "retired" (in roundup speak), and yes, issues that were originally > associated with the retired versions stay associated. So what is the > problem with that? The problem is that I don't have a way to find out the number-to-name relation for the versions (see below). > (there might also be an XML-RPC interface to do that). OTOH, if you had > downloaded the list of versions once, you can trust that each id I didn't download the version list just once; so far I got it every time from the server (it was cheap, because it's short, and I don't have the issue of the local copy becoming obsolete if something changed). How did I download it every time? I went to http://bugs.python.org/version and parsed the id and name for each line. But I cannot do this anymore. > continues to mean what it meant on creation, so you might want to look > only for updates (i.e. newer ids). I guess I could do this, if there's no other way to have all the (id, name) pairs for all the existing version values. Thank you! -- .
Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From drkjam at gmail.com Tue Jan 6 03:01:55 2009 From: drkjam at gmail.com (DrKJam) Date: Tue, 6 Jan 2009 02:01:55 +0000 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: <8517e9350901051306l4bb490f0vb328f8046c67949a@mail.gmail.com> References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> <8517e9350901051306l4bb490f0vb328f8046c67949a@mail.gmail.com> Message-ID: <538a660a0901051801i3159e8fak32f8546351bbef8@mail.gmail.com> A merger sounds like a good way forward. It shouldn't be as painful as it might sound initially and there should be lots of room for some early big wins. Contentious Issues ------------------ *** Separate IP and CIDR classes The IP and CIDR object split in netaddr is going to require some further discussion. The open questions are mostly about which operations to keep and which to drop from each. More on this later on when I've had some time to think about it a bit more. *** Using the Strategy pattern I'd like to see us use the GoF strategy pattern in a combined solution with a single IP class for both v4 and v6, with separate strategy classes (like netaddr), rather than two separate IPv4 and IPv6 classes returned by a factory function (like ipaddr). Again this might require a bit of further discussion. Killer Features --------------- Here's a list of (hopefully uncontroversial) features for a combined module: *** Maintain ipaddr speeds Impressive stuff - I like it! *** PEP-8 support *** Drop MAC and EUI support I'm happy to let the MAC (EUI-48) and EUI-64 support find a good home in a separate module. Guido's sense of this being something separate is spot on despite the apparent benefits of any code sharing. Where necessary, the separate module can import whatever it needs from our combined module.
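To make the strategy-pattern suggestion above concrete, here is a rough sketch of one IP class delegating version-specific parsing and formatting to strategy objects. All names are invented for this post -- this is not code from netaddr or ipaddr:

```python
import socket
import struct

class _V4Strategy:
    """IPv4-specific conversions, kept out of the main class."""
    width = 32

    def to_int(self, text):
        return struct.unpack('!I', socket.inet_aton(text))[0]

    def to_str(self, value):
        return socket.inet_ntoa(struct.pack('!I', value))

class _V6Strategy:
    """IPv6-specific conversions."""
    width = 128

    def to_int(self, text):
        hi, lo = struct.unpack('!QQ', socket.inet_pton(socket.AF_INET6, text))
        return (hi << 64) | lo

    def to_str(self, value):
        packed = struct.pack('!QQ', value >> 64, value & 0xFFFFFFFFFFFFFFFF)
        return socket.inet_ntop(socket.AF_INET6, packed)

class IP:
    """One class for both versions; the differences live in the strategy."""
    def __init__(self, text):
        # Pick a strategy instead of subclassing per IP version.
        self._strategy = _V6Strategy() if ':' in text else _V4Strategy()
        self._value = self._strategy.to_int(text)

    def __int__(self):
        return self._value

    def __str__(self):
        return self._strategy.to_str(self._value)

print(int(IP('192.168.0.1')))   # -> 3232235521
```

The point of the pattern is that IPv6 support never touches the IP class itself; only a new strategy object is added.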
*** Pythonic behaviour of IP objects IP address objects behave like standard Python types (ints, lists, tuples, etc) dependent on context. This is mainly achieved via copious amounts of operator overloading. For example, instead of :- >>> IP('192.168.0.0/24').exclude_addrs('192.168.0.15/32') ['192.168.0.0/29', '192.168.0.8/30', '192.168.0.12/31', '192.168.0.14/32'] you could just implement __sub__ (IP object subtraction) :- >>> IP('192.168.0.0/24', format=str) - IP('192.168.0.15/32') ['192.168.0.0/29', '192.168.0.8/30', '192.168.0.12/31', '192.168.0.14/32'] Achieving the same results but in a more Python-friendly manner. Here's a list of operators I've so far found decent meanings for in netaddr :- __int__, __long__, __str__, __repr__, __hash__ __eq__, __ne__, __lt__, __le__, __gt__, __ge__ __iter__, __getitem__, __setitem__, __len__, __contains__ __add__, __sub__, __isub__, __iadd__ *** Constants for address type identification Identifying specific address types with a constant is essential. netaddr has the module level constants AT_INET and AT_INET6 for IPv4 and IPv6 respectively. I'll be the first to agree that AT_* is a bit quirky. As we are looking at something for the stdlib we should use something more, well, standard such as AF_INET and AF_INET6 from the socket module. Is AF_INET6 fairly widely available on most operating systems these days? Not sure how socket constants have fared in Google's App Engine socket module implementation for example. If not, we can always define some specifically for the module itself. *** Use the Python descriptor protocol to police IP objects' attribute assignments This makes IP object properties read/writable rather than just read-only. I discovered this on the Python mailing list a while back in the early days of netaddr's development. They are excellent and open up a whole new world of possibilities for keeping control of your objects' internal state once you allow users write access to your class properties.
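The descriptor-based policing described above can be sketched in a few lines. Again, the names here are made up for illustration and do not come from netaddr's actual code:

```python
class Policed(object):
    """Data descriptor that runs a validator before storing an attribute."""
    def __init__(self, name, validator):
        self.name = '_' + name          # where the real value is stored
        self.validator = validator

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.name)

    def __set__(self, obj, value):
        # Every assignment, including the one in __init__, is checked here.
        if not self.validator(value):
            raise ValueError('invalid value: %r' % (value,))
        setattr(obj, self.name, value)

class IPv4:
    # 'value' stays read/write, but it can never hold an out-of-range address.
    value = Policed('value', lambda v: isinstance(v, int) and 0 <= v < 2 ** 32)

    def __init__(self, value):
        self.value = value
```

With this in place, `IPv4(3232235521).value = 0` is accepted, while assigning `-1` raises ValueError, so user-writable properties cannot corrupt the object's internal state.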
*** Formatter attributes on IP objects to control return value representations Sometimes you just want the string or hex representation of an address instead of grokking IP objects the whole time. A useful trick when combined with descriptor protocol above. *** Use iterators I notice ipaddr doesn't currently use the 'yield' statement anywhere, which is a real shame. netaddr uses iterators everywhere and also defines an nrange() function built as an xrange() work-a-like but for network addresses instead of integer values (very similar). *** Add support for IPv4 address abbreviations Based on 'old school' IP classful networking rules. Still useful and worth having. *** Use slices on IP objects! There's nothing quite like list slices on a network object ;-) I've got some horrendous issues trying to get this going with Python n-bit integers for IPv6 so I'd love to see this working correctly. *** Careful coding to avoid endianness bugs I spent a decent chunk of development time early on doing endian tests on all basic integer conversion operations. Any combined solution must be rock solid and robust in this area. It's all too easy to make naive assumptions and get this wrong. OK, so it's a pet hate of mine! I'm looking forward to Python stdlib buildbot support in this area ;-) *** Display of IP objects as human-readable binary strings Sometimes it's just nice to see the bit patterns! *** Python 'set' type operations for collections of IP objects Intersection, union etc between network objects and groups of network objects. More nice to have than essential but would be interesting to see working. I've spent time thinking about it but haven't really come up with a good implementation (yet). Hopefully with a lot of talented people involved we can get something going here. *** Add support for epydoc in docstrings Is this post long enough to be a candidate for a PEP?! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Tue Jan 6 03:44:20 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 6 Jan 2009 02:44:20 +0000 (UTC) Subject: [Python-Dev] another Python Bug Day? References: Message-ID: Hello, Malte Helmert <helmert at informatik.uni-freiburg.de> writes: > > are there any plans to organize another Python Bug Day in the near > future? It's been a while since the last one (last May). I might be > misremembering, but I think at one time there was even talk of having > one bug day every month. We must first release 3.0.1 (there are a few release blockers remaining), but I also think we should do a Bug Day afterwards. Regards Antoine. From benjamin at python.org Tue Jan 6 03:48:05 2009 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 5 Jan 2009 20:48:05 -0600 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: <1afaf6160901051848x356b0cfdyb043c68fe13d18a2@mail.gmail.com> On Mon, Jan 5, 2009 at 8:44 PM, Antoine Pitrou wrote: > > Hello, > > Malte Helmert <helmert at informatik.uni-freiburg.de> writes: >> >> are there any plans to organize another Python Bug Day in the near >> future? It's been a while since the last one (last May). I might be >> misremembering, but I think at one time there was even talk of having >> one bug day every month. > > We must first release 3.0.1 (there are a few release blockers remaining), but I > also think we should do a Bug Day afterwards. +1 It will be nice to deal with some of the bugs we had to put off during the RC phases of 2.6 and 3.0.
-- Regards, Benjamin From tjreedy at udel.edu Tue Jan 6 05:13:14 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 05 Jan 2009 23:13:14 -0500 Subject: [Python-Dev] Small question about BufferedRandom spec In-Reply-To: References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: Guido van Rossum wrote: >> ? Q: Do we want to mandate in the specification that switching between reading >> and writing on a read-write object implies a .flush()? Or is that an >> implementation convenience that users should not rely on? ? >> >> Is it ok if I assume that the answer is "it is an implementation convenience >> that users should not rely on"? The reason is that I'm overhauling >> BufferedRandom objects to use a single shared buffer, so as to optimize >> interleaved reads and writes. > > I think it's fine if the flush to the file is optional, as long as > this is clearly documented. However, the semantics of interleaving > reads and writes, with and without seek calls in between, should be > well-defined and correct/useful, so that it behaves the same > regardless of the buffer size. I don't know how much of the stdio will be wrapped or replaced, but, FWIW, the C89 Standard, as described by Plauger & Brodie, requires a position-setting operation between writes and reads: one of fflush, fseek, fsetpos, or rewind. Same for reads and writes unless the read set EOF. > > Ditto for the flush call currently implied by a seek -- if you can > satisfy the seek by moving where you are in the buffer without > flushing, that's fine IMO, but it should be well documented. > > It should also be documented that a flush still *may* occur, of course. 
> From guido at python.org Tue Jan 6 05:46:50 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 5 Jan 2009 20:46:50 -0800 Subject: [Python-Dev] Small question about BufferedRandom spec In-Reply-To: References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: On Mon, Jan 5, 2009 at 8:13 PM, Terry Reedy wrote: > Guido van Rossum wrote: > >>> ? Q: Do we want to mandate in the specification that switching between >>> reading >>> and writing on a read-write object implies a .flush()? Or is that an >>> implementation convenience that users should not rely on? ? >>> >>> Is it ok if I assume that the answer is "it is an implementation >>> convenience >>> that users should not rely on"? The reason is that I'm overhauling >>> BufferedRandom objects to use a single shared buffer, so as to optimize >>> interleaved reads and writes. >> >> I think it's fine if the flush to the file is optional, as long as >> this is clearly documented. However, the semantics of interleaving >> reads and writes, with and without seek calls in between, should be >> well-defined and correct/useful, so that it behaves the same >> regardless of the buffer size. > > I don't know how much of the stdio will be wrapped or replaced, but, FWIW, > the C89 Standard, as described by Plauger & Brodie, requires a > position-setting operation between writes and reads: one of fflush, fseek, > fsetpos, or rewind. Same for reads and writes unless the read set EOF. We're not wrapping *any* of stdio -- we're wrapping raw Unix syscalls (or Windows APIs). The problem with the C89 standard is that if you forget this operation, the behavior is undefined, and I have seen compliant implementations that would segfault in this case. That's unacceptable for Python, and one of the reasons to bypass stdio completely. 
(Other reasons include the absence of standardized APIs to inspect the buffer, change buffering after starting I/O, peek ahead in the buffer, seek within the buffer without flushing, etc.) >> Ditto for the flush call currently implied by a seek -- if you can >> satisfy the seek by moving where you are in the buffer without >> flushing, that's fine IMO, but it should be well documented. >> >> It should also be documented that a flush still *may* occur, of course. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From mark at mirell.org Tue Jan 6 06:37:02 2009 From: mark at mirell.org (Mark Miller) Date: Mon, 5 Jan 2009 23:37:02 -0600 Subject: [Python-Dev] [PATCH] Allow Python to build on MIPS Targets Message-ID: <46ACFBBF-DE27-4C0E-BFF4-3F0FEB71A58A@mirell.org> When the merging of the libffi3 branch took place in March, it broke the logic in configure and fficonfig.py.in to deal with MIPS architecture, specifically differentiating in which files to include for MIPS_IRIX versus MIPS_LINUX. I've re-added that logic based on the older code, and adjusted a few things to deal with the new format. I just tested this on my QEMU instance of a MIPSEL Linux install, and works successfully. Before, it died with the error noted here: http://bugs.python.org/issue4305 , when attempting to reference a non-existent array member in fficonfig.py.in, for the MIPS target. (Rather than MIPS_LINUX or MIPS_IRIX, like the file wants) I've attached the patch here and on the following bug. It's based off the following svn checkout: Path: . URL: http://svn.python.org/projects/python/trunk Repository Root: http://svn.python.org/projects Repository UUID: 6015fed2-1504-0410-9fe1-9d1591cc4771 Revision: 68358 Node Kind: directory Schedule: normal Last Changed Author: marc-andre.lemburg Last Changed Rev: 68344 Last Changed Date: 2009-01-05 13:43:35 -0600 (Mon, 05 Jan 2009) Thanks. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mips-ffi.patch Type: application/octet-stream Size: 3692 bytes Desc: not available URL: -------------- next part -------------- -- Mark Miller mark at mirell.org From hodgestar+pythondev at gmail.com Tue Jan 6 08:13:35 2009 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Tue, 6 Jan 2009 09:13:35 +0200 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: If there's going to be another bug day, I'd like to see the problem of getting patches from the bug tracker into Python addressed in some way. It's kinda frustrating to work on things and not actually get to close any issues because there are not enough people with commit access taking part. It'd also be nice if there could be some committers around on IRC to have fast interactions with or perhaps to coordinate things (maybe asking for people to work on specific bugs, or letting people know whether a particular solution to an issue is likely to be accepted before they work on it). Schiavo Simon From doomster at knuut.de Tue Jan 6 08:51:34 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Tue, 6 Jan 2009 08:51:34 +0100 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: <200901060851.34829.doomster@knuut.de> On Monday 05 January 2009 23:48:13 Malte Helmert wrote: > For people who are not core developers but would still like to > contribute, the Bug Days are quite exciting events. It would be great if > they could keep going. As a not core developer, I would like to know what exactly that means. 
;) Uli From ncoghlan at gmail.com Tue Jan 6 09:27:24 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 06 Jan 2009 18:27:24 +1000 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: <538a660a0901051801i3159e8fak32f8546351bbef8@mail.gmail.com> References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> <8517e9350901051306l4bb490f0vb328f8046c67949a@mail.gmail.com> <538a660a0901051801i3159e8fak32f8546351bbef8@mail.gmail.com> Message-ID: <496315EC.4090604@gmail.com> DrKJam wrote: > Is this post long enough to be a candidate for a PEP?! A PEP will likely be needed eventually for the actual addition to the standard library - while the respective parties are still working on the "best of both worlds" merger idea, a page on the Wiki is probably a better idea. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From dickinsm at gmail.com Tue Jan 6 11:26:13 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Tue, 6 Jan 2009 10:26:13 +0000 Subject: [Python-Dev] [Python-checkins] r68182 - in python/trunk: Lib/decimal.py Misc/NEWS In-Reply-To: References: <20090102230709.032011E4002@bag.python.org> Message-ID: <5c6f2a5d0901060226x29df4c40i3da447c88a9df404@mail.gmail.com> On Mon, Jan 5, 2009 at 3:12 PM, Jim Jewett wrote: > Out of curiosity, why are these constants for internal use only? I don't think anyone ever thought about deliberately making them public---I suspect they were introduced as a speed optimization. I can see that having things like NaN = Decimal('NaN') might be handy for some users (though I actually suspect that the intersection between Decimal users and those who care about NaNs is rather small...), but they don't belong in the decimal module, which is supposed to be kept very close to the standard and, exactly as you say, keep the beyond-the-spec API small.
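[For what it's worth, the handy constants described above are a one-liner in user code -- an illustrative sketch, with names of my own choosing rather than any proposed API:]

```python
from decimal import Decimal

# User-level constants of the kind discussed; deliberately *not* part
# of the decimal module's spec-conforming API.
NAN = Decimal("NaN")
INF = Decimal("Infinity")

assert INF > Decimal(10) ** 100   # infinity dominates any finite value
assert NAN != NAN                 # a quiet NaN is unequal even to itself
```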
Maybe it's time for the "Add a decimal_utils module" PEP, which could contain such constants? There are many other definitions and conveniences that could go into such a module. One thing in particular that I miss is a sqrt() *function* (as opposed to Decimal method) that takes a Decimal, int or long and returns a Decimal; similarly for exp, log, log10, ... Another thing that has been requested recently on c.l.p. is good implementations of trig functions for Decimal, which are quite hard to do properly. Mark From mal at egenix.com Tue Jan 6 14:22:56 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 06 Jan 2009 14:22:56 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <496286FF.1090107@gmail.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> <4961F984.5060307@egenix.com> <496286FF.1090107@gmail.com> Message-ID: <49635B30.9070608@egenix.com> On 2009-01-05 23:17, Mark Hammond wrote: > On 5/01/2009 11:13 PM, M.-A. Lemburg wrote: > >> See above. Assertions are not meant to be checked in a production >> build. You use debug builds for debugging such low-level things. > > Although ironically, assertions have been disabled in debug builds on > Windows - http://bugs.python.org/issue4804 Does this only affect asserts defined in the CRT or also ones defined in the Python C code ? (I was only referring to the latter) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 06 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From solipsis at pitrou.net Tue Jan 6 14:31:05 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 6 Jan 2009 13:31:05 +0000 (UTC) Subject: [Python-Dev] another Python Bug Day? References: Message-ID: Simon Cross gmail.com> writes: > It'd also be nice if there could be some committers around on IRC to > have fast interactions with or perhaps to coordinate things I was going to suggest #python-dev but I see you're already there... From kristjan at ccpgames.com Tue Jan 6 15:15:02 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 6 Jan 2009 14:15:02 +0000 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <49635B30.9070608@egenix.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> <4961F984.5060307@egenix.com> <496286FF.1090107@gmail.com> <49635B30.9070608@egenix.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D18841F8@exchis.ccp.ad.local> Only crt asserts, and those assertion features accessible through the file, such as _ASSERT and _ASSERTE. Kristján -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of M.-A. Lemburg Sent: 6. janúar 2009 13:23 To: mhammond at skippinet.com.au Cc: python-dev at python.org Subject: Re: [Python-Dev] #ifdef __cplusplus? On 2009-01-05 23:17, Mark Hammond wrote: > On 5/01/2009 11:13 PM, M.-A. Lemburg wrote: > >> See above. Assertions are not meant to be checked in a production >> build. You use debug builds for debugging such low-level things. > > Although ironically, assertions have been disabled in debug builds on > Windows - http://bugs.python.org/issue4804 Does this only affect asserts defined in the CRT or also ones defined in the Python C code ?
(I was only referring to the latter) From solipsis at pitrou.net Tue Jan 6 15:17:09 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 6 Jan 2009 14:17:09 +0000 (UTC) Subject: [Python-Dev] Small question about BufferedRandom spec References: <18785.17945.480606.294889@montanaro.dyndns.org> <1afaf6160901041532j42fceaf7qeb22e68c4025a4ae@mail.gmail.com> <18785.18438.876594.628144@montanaro.dyndns.org> <1afaf6160901041553x51f7573cq670088eed743b8dd@mail.gmail.com> Message-ID: Guido van Rossum python.org> writes: > > However, the semantics of interleaving > reads and writes, with and without seek calls in between, should be > well-defined and correct/useful, so that it behaves the same > regardless of the buffer size. Yes, the goal is to have reasonably intuitive, and meaningful, semantics. > Ditto for the flush call currently implied by a seek -- if you can > satisfy the seek by moving where you are in the buffer without > flushing, that's fine IMO, but it should be well documented. That's also part of what I've tried to optimize. The documentation is currently in limbo, though. Thanks! Antoine. From mal at egenix.com Tue Jan 6 15:42:51 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 06 Jan 2009 15:42:51 +0100 Subject: [Python-Dev] #ifdef __cplusplus? In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D18841F8@exchis.ccp.ad.local> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> <4961F984.5060307@egenix.com> <496286FF.1090107@gmail.com> <49635B30.9070608@egenix.com> <930F189C8A437347B80DF2C156F7EC7F04D18841F8@exchis.ccp.ad.local> Message-ID: <49636DEB.1080700@egenix.com> On 2009-01-06 15:15, Kristján Valur Jónsson wrote: > Only crt asserts, and those assertion features accessible through the file, such as _ASSERT and _ASSERTE. Thanks. In that case, I don't see much of a problem...
after all, if someone runs a Python debug build, they won't be trying to debug the MS CRT, only Python ;-) > Kristján > > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of M.-A. Lemburg > Sent: 6. janúar 2009 13:23 > To: mhammond at skippinet.com.au > Cc: python-dev at python.org > Subject: Re: [Python-Dev] #ifdef __cplusplus? > > On 2009-01-05 23:17, Mark Hammond wrote: >> On 5/01/2009 11:13 PM, M.-A. Lemburg wrote: >> >>> See above. Assertions are not meant to be checked in a production >>> build. You use debug builds for debugging such low-level things. >> Although ironically, assertions have been disabled in debug builds on >> Windows - http://bugs.python.org/issue4804 > > Does this only affect asserts defined in the CRT or also ones defined > in the Python C code ? (I was only referring to the latter) -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 06 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From kristjan at ccpgames.com Tue Jan 6 15:45:53 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 6 Jan 2009 14:45:53 +0000 Subject: [Python-Dev] #ifdef __cplusplus?
In-Reply-To: <49636DEB.1080700@egenix.com> References: <200901011630.38196.doomster@knuut.de> <495D57BE.6020904@gmail.com> <495E3B3F.7090603@egenix.com> <4961F984.5060307@egenix.com> <496286FF.1090107@gmail.com> <49635B30.9070608@egenix.com> <930F189C8A437347B80DF2C156F7EC7F04D18841F8@exchis.ccp.ad.local> <49636DEB.1080700@egenix.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D188420A@exchis.ccp.ad.local> Well, a lot of those asserts have to do with correct use of the crt (and cpprt). For example, all of the iterator debugging for STL was disabled in our product when run with python embedded, and I found some issues when I reenabled the crt assertions. Python messing with the crt behavior for the whole process isn't a particularly nice thing to do. Kristján -----Original Message----- From: M.-A. Lemburg [mailto:mal at egenix.com] Sent: 6. janúar 2009 14:43 To: Kristján Valur Jónsson Cc: mhammond at skippinet.com.au; python-dev at python.org Subject: Re: [Python-Dev] #ifdef __cplusplus? On 2009-01-06 15:15, Kristján Valur Jónsson wrote: > Only crt asserts, and those assertion features accessible through the file, such as _ASSERT and _ASSERTE. Thanks. In that case, I don't see much of a problem... after all, if someone runs a Python debug build, they won't be trying to debug the MS CRT, only Python ;-) From mlm at acm.org Tue Jan 6 16:46:45 2009 From: mlm at acm.org (Mitchell L Model) Date: Tue, 6 Jan 2009 10:46:45 -0500 Subject: [Python-Dev] Documentation of Slicings in 3.0 Message-ID: The section of the documentation on slicings in 3.0 is substantially different from that in previous versions, including 2.6. In particular it states that "The primary must evaluate to a mapping object." I've followed the grammar and the commentary around through a few paths and cannot convince myself that the documentation ever covers sequence slicing. I'm not sufficiently confident of this to post as a bug, so I decided to post here first.
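[On the grammar point: in 3.0 a slicing always desugars into a __getitem__ call with a slice object, whether the primary is a sequence or a mapping, which is easy to confirm -- a quick illustrative check, not from the original report:]

```python
class Probe:
    # Records exactly what subscription/slicing passes in.
    def __getitem__(self, key):
        return key

p = Probe()
assert p[1:5:2] == slice(1, 5, 2)      # slicing hands over a slice object
assert [0, 1, 2, 3, 4][1:3] == [1, 2]  # plain sequence slicing still works
```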
-- --- Mitchell L Model From facundobatista at gmail.com Tue Jan 6 17:09:48 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Tue, 6 Jan 2009 14:09:48 -0200 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: 2009/1/6 Simon Cross : > It'd also be nice if there could be some committers around on IRC to All those who are working in the bug day, should be in #python-dev during the work... -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From helmert at informatik.uni-freiburg.de Tue Jan 6 18:27:41 2009 From: helmert at informatik.uni-freiburg.de (Malte Helmert) Date: Tue, 06 Jan 2009 18:27:41 +0100 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: <200901060851.34829.doomster@knuut.de> References: <200901060851.34829.doomster@knuut.de> Message-ID: Ulrich Eckhardt wrote: > On Monday 05 January 2009 23:48:13 Malte Helmert wrote: >> For people who are not core developers but would still like to >> contribute, the Bug Days are quite exciting events. It would be great if >> they could keep going. > > As a not core developer, I would like to know what exactly that means. > > ;) Well, it's an opportunity to fix some bugs and, with some luck, get the patches committed the same day. Also, there are developers around to answer questions about the process, etc. Rapid feedback => good. 
Malte From guido at python.org Tue Jan 6 18:59:59 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 6 Jan 2009 09:59:59 -0800 Subject: [Python-Dev] address manipulation in the standard lib In-Reply-To: <496315EC.4090604@gmail.com> References: <4327dfbd0901050910p58d7935fie494b3144dd09018@mail.gmail.com> <4327dfbd0901051000h22fca39dnb7d25fbbc8e1999b@mail.gmail.com> <8517e9350901051306l4bb490f0vb328f8046c67949a@mail.gmail.com> <538a660a0901051801i3159e8fak32f8546351bbef8@mail.gmail.com> <496315EC.4090604@gmail.com> Message-ID: On Tue, Jan 6, 2009 at 12:27 AM, Nick Coghlan wrote: > DrKJam wrote: >> Is this post long enough to be a candidate for a PEP?! > > A PEP will likely be needed eventually for the actual addition to the > standard library - while the respective parties are still working on the > "best of both worlds" merger idea, a page on the Wiki is probably a > better idea. A PEP isn't necessary for addition of an existing 3rd party library to the stdlib. However a PEP would be useful if the plan is to design a new API (even if it's a merger of two existing ones). -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Tue Jan 6 20:47:15 2009 From: brett at python.org (Brett Cannon) Date: Tue, 6 Jan 2009 11:47:15 -0800 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: On Mon, Jan 5, 2009 at 23:13, Simon Cross wrote: > If there's going to be another bug day, I'd like to see the problem of > getting patches from the bug tracker into Python addressed in some > way. It's kinda frustrating to work on things and not actually get to > close any issues because there are not enough people with commit > access taking part. > This is a years-old problem that is not going to be fixed overnight (unfortunately). But it is known and is being worked on (moving to a DVCS, writing up docs on the development process to cut down on bad patches, etc.). 
-Brett From hodgestar+pythondev at gmail.com Wed Jan 7 00:18:46 2009 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Wed, 7 Jan 2009 01:18:46 +0200 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: On Tue, Jan 6, 2009 at 9:47 PM, Brett Cannon wrote: > This is a years-old problem that is not going to be fixed overnight > (unfortunately). But it is known and is being worked on (moving to a > DVCS, writing up docs on the development process to cut down on bad > patches, etc.). It's encouraging to hear that it's been worked on. I assume the idea is that eventually lieutenants will maintain their own Python trees in a similar way to what happens with the Linux kernel currently? An interim solution that occurred to me is to give a few more people enhanced access to the issue tracker and to create a ready-for-committing keyword that these new issue wranglers could apply to bugs that have patches and which they think are ready for committing. Actual committers could then come along and search for the given keyword to find things to examine for committing. This would also act as testing ground for potential developers -- once committers feel that the patches an issue wrangler approves really are consistently good enough, they can consider promoting the issue wrangler to a full developer. Schiavo Simon From barry at python.org Wed Jan 7 00:23:21 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 6 Jan 2009 18:23:21 -0500 Subject: [Python-Dev] another Python Bug Day? In-Reply-To: References: Message-ID: <51E1C10C-6BAE-4B6B-8C2D-97D13D7A0C8E@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 6, 2009, at 6:18 PM, Simon Cross wrote: > On Tue, Jan 6, 2009 at 9:47 PM, Brett Cannon wrote: >> This is a years-old problem that is not going to be fixed overnight >> (unfortunately). But it is known and is being worked on (moving to a >> DVCS, writing up docs on the development process to cut down on bad >> patches, etc.).
> > It's encouraging to hear that it's been worked on. I assume the idea > is that eventually lieutenants will maintain their own Python trees > in a similar way to what happens with the Linux kernel currently? FWIW, this is possible today. http://www.python.org/dev/bazaar/ (This really should go in the wiki so it's easier to update and so we can add information for other DVCSes.) (Note that this is experimental, and you'll still have to convince a core developer to commit your branch.) (Still, please experiment!) parenthetical-ly y'rs, - -Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWPn6XEjvBPtnXfVAQKq7QQAgWXtcm9zAhdnm11rAo9UhtDtEa1yBqi8 +Z7JYfUcKL+IQI0sCuCHzY6VNNoCMsbondtWavVH3/y9xO4ySq+HrylUzgSH6Gu/ b0E1UZiRQsV33hhhG/0WupEdBd18wTRLipesjNqY7DA1+iI8KbXYD7QwYjJYRXDv PDBI4DpWZWE= =yTlQ -----END PGP SIGNATURE----- From bcannon at gmail.com Wed Jan 7 00:40:32 2009 From: bcannon at gmail.com (bcannon at gmail.com) Date: Tue, 06 Jan 2009 23:40:32 +0000 Subject: [Python-Dev] another Python Bug Day? Message-ID: <0016e64bddfee330b4045fd8eee8@google.com> On Jan 6, 2009 3:18pm, Simon Cross wrote: > On Tue, Jan 6, 2009 at 9:47 PM, Brett Cannon brett at python.org> wrote: > > > This is a years-old problem that is not going to be fixed overnight > > > (unfortunately). But it is known and is being worked on (moving to a > > > DVCS, writing up docs on the development process to cut down on bad > > > patches, etc.). > > > > It's encouraging to hear that it's been worked on. I assume the idea > > is that eventually lieutenants will maintain their own Python trees > > in a similar way to what happens with the Linux kernel currently? > No because Python is not developed with much sense of ownership like the Linux kernel; no one owns the dict object or all of the object code. And this is not about to change either.
While some modules have obvious owners (eg I would defer to Raymond for itertools stuff if I wasn't sure of the best solution), the code base overall is considered "owned" by all of python-dev equally. > > > An interim solution that occurred to me is to give a few more people > > enhanced access to the issue tracker We have slowly started to do this although we could probably expand this more than we have. > and to create a > > ready-for-committing keyword that these new issue wranglers could > > apply to bugs that have patches and which they think are ready for > > committing. Already done; the Stage field takes care of this with the "commit review" stage. It also makes it more clear what is needed which could be helpful for Bug Days. If people feel comfortable writing tests, for instance, they could (theoretically) just look for issues at the Test Needed stage. But the field is so new that it is not consistently used yet. Probably going to need the docs on how the issue workflow is supposed to work before that happens. > Actual committers could then come along and search for the > > given keyword to find things to examine for committing. This would > > also act as testing ground for potential developers -- once committers > > feel that the patches an issue wrangler approves really are > > consistently good enough, they can consider promoting the issue > > wrangler to a full developer. Right, that is one of the hopes of having more people have the Developer role on the issue tracker. This process just needs to get written down (which I am slowly doing; see http://www.python.org/dev/setup/ as the start of the docs I plan to write to document all of this). -Brett -------------- next part -------------- An HTML attachment was scrubbed... URL: From doomster at knuut.de Wed Jan 7 12:30:35 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Wed, 7 Jan 2009 12:30:35 +0100 Subject: [Python-Dev] a few strdup() questions... 
Message-ID: <200901071230.35918.doomster@knuut.de> Greetings! MS Windows CE doesn't provide strdup(), so where should I put it? I guess I should just compile in Python/strdup.c, right? However, where should I declare it? My approach would be to declare it in PC/pyconfig.h. I see that RISCOS also seems to lack that function, which is why it is declared locally in _localemodule.c, but I guess this isn't really the best of all possible ways. Also, there is HAVE_STRDUP. I would actually expect that #undef HAVE_STRDUP would do the trick to at least declare this, but it doesn't. I guess that most modern OS have this so this will probably just be bitrot ... right? But where should I put the declaration? BTW: there is another implementation (called my_strdup) in Modules/_ctypes/_ctypes_test.c, why not use the one in Python/strdup.c there? Lastly: I would have written the thing a bit differently:

char* strdup(char const* s)
{
    char* res;
    size_t len;
    assert(s);
    len = strlen(s);
    res = malloc(len+1);
    if (res)
        memcpy(res, s, len+1);
    return res;
}

First difference is that I wouldn't accept NULL as valid input, e.g. the glibc implementation doesn't either and GCC even warns you if you call strdup(NULL). Secondly, I would have used memcpy(), since the length is already known and then potentially quicker. Should I write a patch? thanks Uli From solipsis at pitrou.net Wed Jan 7 14:39:12 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 7 Jan 2009 13:39:12 +0000 (UTC) Subject: [Python-Dev] Decoder functions accept str in py3k Message-ID: Hello, I've just noticed that in py3k, the decoding functions in the codecs module accept str objects as well as bytes:

# import codecs
# c = codecs.getdecoder('utf8')
# c('aa')
('aa', 2)
# c('éé')
('éé', 4)
# c = codecs.getdecoder('latin1')
# c('aa')
('aa', 2)
# c('éé')
('Ã©Ã©', 4)

Is it a bug? Regards Antoine.
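[A present-day footnote, not part of the message: in current CPython 3.x these decoder functions do reject str input, so the session above now raises TypeError. A quick check, assuming a modern interpreter:]

```python
import codecs

decode = codecs.getdecoder("utf8")
try:
    decode("aa")          # str input, as in the session above
    str_accepted = True
except TypeError:
    str_accepted = False

assert not str_accepted            # decoders now insist on bytes
assert decode(b"aa") == ("aa", 2)  # bytes input behaves as before
```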
From daniel at stutzbachenterprises.com Wed Jan 7 16:30:23 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Wed, 7 Jan 2009 09:30:23 -0600 Subject: [Python-Dev] a few strdup() questions... In-Reply-To: <200901071230.35918.doomster@knuut.de> References: <200901071230.35918.doomster@knuut.de> Message-ID: On Wed, Jan 7, 2009 at 5:30 AM, Ulrich Eckhardt wrote: > MS Windows CE doesn't provide strdup(), so where should I put it? I guess I > should just compile in Python/strdup.c, right? > I'm not an expert on Windows CE, but I believe it calls the function "_strdup()": http://msdn.microsoft.com/en-us/library/ms861162.aspx -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From guido at python.org Wed Jan 7 16:34:57 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 7 Jan 2009 07:34:57 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: Message-ID: Sounds like yet another remnant of the old philosophy, which indeed supported encode and decode operations on both string types. :-( On Wed, Jan 7, 2009 at 5:39 AM, Antoine Pitrou wrote: > Hello, > > I've just noticed that in py3k, the decoding functions in the codecs module > accept str objects as well as bytes: > > # import codecs > # c = codecs.getdecoder('utf8') > # c('aa') > ('aa', 2) > # c('éé') > ('éé', 4) > # c = codecs.getdecoder('latin1') > # c('aa') > ('aa', 2) > # c('éé') > ('Ã©Ã©', 4)
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From solipsis at pitrou.net Wed Jan 7 16:39:12 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 7 Jan 2009 15:39:12 +0000 (UTC) Subject: [Python-Dev] Decoder functions accept str in py3k References: Message-ID: Guido van Rossum python.org> writes: > > Sounds like yet another remnant of the old philosophy, which indeed > supported encode and decode operations on both string types. How do we go for fixing it? Is it ok to raise a TypeError in 3.0.1? From guido at python.org Wed Jan 7 16:46:40 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 7 Jan 2009 07:46:40 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: Message-ID: That depends a bit on how much code we find that breaks as a result. If you find you have to do a big cleanup in the stdlib after that change, it's likely that 3rd party code could have the same problem, and I'd be reluctant. I'd be okay with adding a warning in that case. OTOH if there's no cleanup to be done I'm fine with just deleting it. A -3 warning should be added to 2.6 about this too IMO. On Wed, Jan 7, 2009 at 7:39 AM, Antoine Pitrou wrote: > Guido van Rossum python.org> writes: >> >> Sounds like yet another remnant of the old philosophy, which indeed >> supported encode and decode operations on both string types. > > How do we go for fixing it? Is it ok to raise a TypeError in 3.0.1? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From solipsis at pitrou.net Wed Jan 7 16:54:42 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 7 Jan 2009 15:54:42 +0000 (UTC) Subject: [Python-Dev] Pybots link obsolete? 
Message-ID: Hello, In http://www.python.org/dev/buildbot/, there's a link suggesting to visit the pybots Web site for more information. However, http://www.pybots.org/ just says "Nothing here #". Regards Antoine. From benjamin at python.org Wed Jan 7 18:35:06 2009 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 7 Jan 2009 11:35:06 -0600 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: Message-ID: <1afaf6160901070935u653d3fcexee809c679cc2232e@mail.gmail.com> On Wed, Jan 7, 2009 at 9:46 AM, Guido van Rossum wrote: > A -3 warning should be added to 2.6 about this too IMO. A Py3k warning when attempting to decode a unicode string? Wouldn't that open the door to adding warnings to everywhere a unicode string is used where a byte string is? I thought that unicode and str's compatibility was quite intentionally not being touched until 3.0. -- Regards, Benjamin From mal at egenix.com Wed Jan 7 19:26:38 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 07 Jan 2009 19:26:38 +0100 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: Message-ID: <4964F3DE.9090909@egenix.com> On 2009-01-07 16:34, Guido van Rossum wrote: > Sounds like yet another remnant of the old philosophy, which indeed > supported encode and decode operations on both string types. :-( No, that's something I explicitly readded to Python 3k, since the codecs interface is independent of the input and output types (the codecs decide which combinations to support). The bytes and Unicode *methods* do guarantee that you get either Unicode or bytes as output. 
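[That method-level guarantee is easy to check -- an illustrative session, not from the thread:]

```python
s = "h\u00e9llo"           # text
b = s.encode("utf-8")      # str.encode always yields bytes
assert isinstance(b, bytes)

t = b.decode("utf-8")      # bytes.decode always yields str
assert isinstance(t, str)
assert t == s
```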
> On Wed, Jan 7, 2009 at 5:39 AM, Antoine Pitrou wrote: >> Hello, >> >> I've just noticed that in py3k, the decoding functions in the codecs module >> accept str objects as well as bytes: >> >> # import codecs >> # c = codecs.getdecoder('utf8') >> # c('aa') >> ('aa', 2) >> # c('éé') >> ('éé', 4) >> # c = codecs.getdecoder('latin1') >> # c('aa') >> ('aa', 2) >> # c('éé') >> ('Ã©Ã©', 4) >> >> Is it a bug? >> >> Regards >> >> Antoine. >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 07 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From aahz at pythoncraft.com Wed Jan 7 19:30:34 2009 From: aahz at pythoncraft.com (Aahz) Date: Wed, 7 Jan 2009 10:30:34 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: Message-ID: <20090107183034.GA7794@panix.com> On Wed, Jan 07, 2009, Antoine Pitrou wrote: > Guido van Rossum python.org> writes: >> >> Sounds like yet another remnant of the old philosophy, which indeed >> supported encode and decode operations on both string types. > > How do we go for fixing it? Is it ok to raise a TypeError in 3.0.1?
This definitely cannot be changed for 3.0.1 -- there's plenty of time to discuss this for 3.1. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W. Kernighan From solipsis at pitrou.net Wed Jan 7 19:32:11 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 7 Jan 2009 18:32:11 +0000 (UTC) Subject: [Python-Dev] Decoder functions accept str in py3k References: <4964F3DE.9090909@egenix.com> Message-ID: M.-A. Lemburg egenix.com> writes: > > No, that's something I explicitly readded to Python 3k, since the > codecs interface is independent of the input and output types (the > codecs decide which combinations to support). But why would the utf8 decoder accept unicode as input? From mal at egenix.com Wed Jan 7 19:57:23 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 07 Jan 2009 19:57:23 +0100 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: <4964F3DE.9090909@egenix.com> Message-ID: <4964FB13.3060600@egenix.com> On 2009-01-07 19:32, Antoine Pitrou wrote: > M.-A. Lemburg egenix.com> writes: >> No, that's something I explicitly readded to Python 3k, since the >> codecs interface is independent of the input and output types (the >> codecs decide which combinations to support). > > But why would the utf8 decoder accept unicode as input? It shouldn't. Looks like the codecs module codec interfaces were not updated to only accept bytes on decode for the Unicode codecs. BTW: The _codecsmodule.c file is a 4 spaces indent file as well (just like all Unicode support source files). Someone apparently has added tabs when adding support for Py_buffers. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 07 2009) >>> Python/Zope Consulting and Support ... 
http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From martin at v.loewis.de Wed Jan 7 20:22:43 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 07 Jan 2009 20:22:43 +0100 Subject: [Python-Dev] a few strdup() questions... In-Reply-To: <200901071230.35918.doomster@knuut.de> References: <200901071230.35918.doomster@knuut.de> Message-ID: <49650103.6010206@v.loewis.de> > MS Windows CE doesn't provide strdup(), so where should I put it? I guess I > should just compile in Python/strdup.c, right? Right. > However, where should I declare it? I recommend pyport.h. > Also, there is HAVE_STRDUP. I would actually expect that #undef HAVE_STRDUP > would do the trick to at least declare this, but it doesn't. I guess that > most modern OS have this so this will probably just be bitrot ... right? Wrong, I think. The macro is a side effect of AC_REPLACE_FUNCS, which will a) add strdup.c to the list of files to compile, or b) define HAVE_STRDUP. > BTW: there is another implementation (called my_strdup) in > Modules/_ctypes/_ctypes_test.c, why not use the one in Python/strdup.c there? I guess that's historical, from the times when ctypes was still a separate package. > First difference is that I wouldn't accept NULL as valid input, e.g. the glibc > implementation doesn't either and GCC even warns you if you call > strdup(NULL). Secondly, I would have used memcpy(), since the length is > already known and then potentially quicker. Should I write a patch? Is that really worth it? 
It works as-is, doesn't it? Regards, Martin From guido at python.org Wed Jan 7 20:29:37 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 7 Jan 2009 11:29:37 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: <4964F3DE.9090909@egenix.com> References: <4964F3DE.9090909@egenix.com> Message-ID: OK, ignore my previous comment. Sounds like the individual codecs need to tighten their type checking though -- perhaps *that* can be fixed in 3.0.1? I really don't see why any codec used to convert between text and bytes should support its output type as input. --Guido On Wed, Jan 7, 2009 at 10:26 AM, M.-A. Lemburg wrote: > On 2009-01-07 16:34, Guido van Rossum wrote: >> Sounds like yet another remnant of the old philosophy, which indeed >> supported encode and decode operations on both string types. :-( > > No, that's something I explicitly readded to Python 3k, since the > codecs interface is independent of the input and output types (the > codecs decide which combinations to support). > > The bytes and Unicode *methods* do guarantee that you get either > Unicode or bytes as output. > >> On Wed, Jan 7, 2009 at 5:39 AM, Antoine Pitrou wrote: >>> Hello, >>> >>> I've just noticed that in py3k, the decoding functions in the codecs module >>> accept str objects as well as bytes: >>> >>> # import codecs >>> # c = codecs.getdecoder('utf8') >>> # c('aa') >>> ('aa', 2) >>> # c('??') >>> ('??', 4) >>> # c = codecs.getdecoder('latin1') >>> # c('aa') >>> ('aa', 2) >>> # c('??') >>> ('?(c)?(c)', 4) >>> >>> Is it a bug? >>> >>> Regards >>> >>> Antoine.
>>> >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Jan 07 2009) >>>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > ________________________________________________________________________ > > ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: > > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Wed Jan 7 21:48:58 2009 From: python at rcn.com (Raymond Hettinger) Date: Wed, 7 Jan 2009 12:48:58 -0800 Subject: [Python-Dev] Mathematica Message-ID: Does anyone here have access to Mathematica? I would like to know what it returns for: In[1]:= Permutations({a, b, c}, {5}) Knowing this will help resolve a feature request for itertools.permutations() and friends. Thanks, Raymond From lkcl at lkcl.net Wed Jan 7 21:40:23 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 7 Jan 2009 20:40:23 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: Message-ID: On Sat, Jan 3, 2009 at 9:22 PM, Luke Kenneth Casson Leighton wrote: > hey, has anyone investigated compiling python2.5 using winegcc, under wine? some people might find this kind of thing amusing. it's considered in very obtuse circles to be "progress"... 
:)

    lkcl at gonzalez:/mnt/src/python2.5-2.5.2/Lib$ ../build/python -v
    Could not find platform independent libraries
    Could not find platform dependent libraries
    Consider setting $PYTHONHOME to [:]
    # installing zipimport hook
    import zipimport # builtin
    # installed zipimport hook
    'import site' failed; traceback:
    ImportError: No module named site
    Python 2.5.2 (r252:60911, Jan 7 2009, 20:33:53) [gcc] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import site
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    [....]
    [....]
    [....]
    import sre_compile # from Z:\mnt\src\python2.5-2.5.2\Lib\sre_compile.py
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    fixme:msvcrt:MSVCRT__sopen : pmode 0x01b6 ignored
    # wrote Z:\mnt\src\python2.5-2.5.2\Lib\sre_compile.pyc
    import _sre # builtin
    import sre_constants # from Z:\mnt\src\python2.5-2.5.2\Lib\sre_constants.py
    # wrote Z:\mnt\src\python2.5-2.5.2\Lib\sre_constants.pyc
    import sre_parse # from Z:\mnt\src\python2.5-2.5.2\Lib\sre_parse.py
    # wrote Z:\mnt\src\python2.5-2.5.2\Lib\sre_parse.pyc
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "site.py", line 415, in <module>
        main()
      File "site.py", line 406, in main
        aliasmbcs()
      File "site.py", line 356, in aliasmbcs
        import locale, codecs
      File "Z:\mnt\src\python2.5-2.5.2\Lib\locale.py", line 167, in <module>
        import re, operator
      File "Z:\mnt\src\python2.5-2.5.2\Lib\re.py", line 223, in <module>
        _pattern_type = type(sre_compile.compile("", 0))
      File "Z:\mnt\src\python2.5-2.5.2\Lib\sre_compile.py", line 530, in compile
        groupindex, indexgroup
    OverflowError: signed integer is less than minimum
    >>>

From fredrik.johansson at gmail.com Wed Jan 7 21:57:33 2009 From: fredrik.johansson at gmail.com (Fredrik Johansson) Date: Wed, 7 Jan 2009 21:57:33 +0100 Subject: [Python-Dev] Mathematica In-Reply-To: References: Message-ID:
<3d0cebfb0901071257tef4940fp6dd9b125bc4107ff@mail.gmail.com> On Wed, Jan 7, 2009 at 9:48 PM, Raymond Hettinger wrote: > Does anyone here have access to Mathematica? > I would like to know what it returns for: > > In[1]:= Permutations({a, b, c}, {5}) > > Knowing this will help resolve a feature request > for itertools.permutations() and friends. I assume you mean with square brackets: Mathematica 6.0 for Linux x86 (32-bit) Copyright 1988-2008 Wolfram Research, Inc. In[1]:= Permutations[{a, b, c}, {5}] Out[1]= {} Fredrik From tutufan at gmail.com Wed Jan 7 22:31:33 2009 From: tutufan at gmail.com (Mike Coleman) Date: Wed, 7 Jan 2009 15:31:33 -0600 Subject: [Python-Dev] error in doc for fcntl module Message-ID: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> In the doc page for the fcntl module, the example below is given. This seems like an error, or at least very misleading, as the normal usage is to get the flags (F_GETFL), set or unset the bits you want to change, then set the flags (F_SETFL). A reader might think that the example below merely sets O_NDELAY, but it also stomps all of the other bits to zero. If someone can confirm my thinking, this ought to be changed. import struct, fcntl, os f = open(...) rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) From brett at python.org Wed Jan 7 23:35:21 2009 From: brett at python.org (Brett Cannon) Date: Wed, 7 Jan 2009 14:35:21 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: <4964FB13.3060600@egenix.com> References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> Message-ID: On Wed, Jan 7, 2009 at 10:57, M.-A. Lemburg wrote: [SNIP] > BTW: The _codecsmodule.c file is a 4 spaces indent file as well (just > like all Unicode support source files). Someone apparently has added > tabs when adding support for Py_buffers. > It looks like this formatting mix-up is just going to get worse for the next few years while the 2.x series is still being worked on. 
Should we just bite the bullet and start adding modelines for Vim and Emacs to .c/.h files that are written in the old 2.x style? For Vim I can then update the vimrc in Misc/Vim to have 4-space indent be the default for C files. -Brett From tjreedy at udel.edu Thu Jan 8 00:38:16 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 07 Jan 2009 18:38:16 -0500 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: <4964F3DE.9090909@egenix.com> Message-ID: Guido van Rossum wrote: > OK, ignore my previous comment. Sounds like the individual codecs > need to tighten their type checking though -- perhaps *that* can be > fixed in 3.0.1? I really don't see why any codec used to convert > between text and bytes should support its output type as input. > > --Guido > > On Wed, Jan 7, 2009 at 10:26 AM, M.-A. Lemburg wrote: >> On 2009-01-07 16:34, Guido van Rossum wrote: >>> Sounds like yet another remnant of the old philosophy, which indeed >>> supported encode and decode operations on both string types. :-( >> No, that's something I explicitly readded to Python 3k, since the >> codecs interface is independent of the input and output types (the >> codecs decide which combinations to support). My memory is that making decode = bytes -> str and encode = str -> bytes was considered until it was noticed that there are sensible same-type transforms that fit the encode/decode model; it was then decided that reusing that model would be better than adding a transcode module/model. The bug of Unicode de/encoders allowing wrong inputs and giving weird outputs confuses people and has come up on c.l.p, so I think fixing it soon would be good.
tjr From collinw at gmail.com Thu Jan 8 01:01:55 2009 From: collinw at gmail.com (Collin Winter) Date: Wed, 7 Jan 2009 16:01:55 -0800 Subject: [Python-Dev] Decoder functions accept str in py3k In-Reply-To: References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> Message-ID: <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> On Wed, Jan 7, 2009 at 2:35 PM, Brett Cannon wrote: > On Wed, Jan 7, 2009 at 10:57, M.-A. Lemburg wrote: > [SNIP] >> BTW: The _codecsmodule.c file is a 4 spaces indent file as well (just >> like all Unicode support source files). Someone apparently has added >> tabs when adding support for Py_buffers. >> > > It looks like this formatting mix-up is just going to get worse for > the next few years while the 2.x series is still being worked on. > Should we just bite the bullet and start adding modelines for Vim and > Emacs to .c/.h files that are written in the old 2.x style? For Vim I > can then update the vimrc in Misc/Vim to then have 4-space indent be > the default for C files. Or better yet, really bite the bullet and just reindent everything to spaces. Not everyone uses vim or emacs, nor do all tools understand their modelines. FYI, there are options to svn blame and git to skip whitespace-only changes. Just-spent-an-hour-fixing-screwed-up-indents-in-changes-to-Python/*.c-ly, Collin Winter From guido at python.org Thu Jan 8 01:36:20 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 7 Jan 2009 16:36:20 -0800 Subject: [Python-Dev] error in doc for fcntl module In-Reply-To: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> References: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> Message-ID: Well, my Linux man page says that the only flags supported are O_APPEND, O_ASYNC, O_DIRECT, O_NOATIME, and O_NONBLOCK; and all of those are typically off -- so I'm not sure that it's a mistake or needs correcting.
These APIs should only be used by people who know what they're doing anyway; the examples are meant to briefly show the call format. On Wed, Jan 7, 2009 at 1:31 PM, Mike Coleman wrote: > In the doc page for the fcntl module, the example below is given. > This seems like an error, or at least very misleading, as the normal > usage is to get the flags (F_GETFL), set or unset the bits you want to > change, then set the flags (F_SETFL). A reader might think that the > example below merely sets O_NDELAY, but it also stomps all of the > other bits to zero. > > If someone can confirm my thinking, this ought to be changed. > > import struct, fcntl, os > > f = open(...) > rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) -- --Guido van Rossum (home page: http://www.python.org/~guido/) From victor.stinner at haypocalc.com Thu Jan 8 02:23:55 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 8 Jan 2009 02:23:55 +0100 Subject: [Python-Dev] [Py3k] curses module and libncursesw library Message-ID: <200901080223.55104.victor.stinner@haypocalc.com> Hi, Python 2 and Python 3 try to link the Python _curses module to the libncursesw dynamic library, or fall back to libncurses (or another implementation). The problem with libncurses is that it doesn't support multibyte charsets like... utf-8. In the Python module, it's not possible to check whether we are using libncursesw or libncurses. Would it be possible to change the Python 3 configure script to always use libncursesw instead of falling back to an alternate non-Unicode library? That would mean no _curses module if libncursesw is missing. Related bug: http://bugs.python.org/issue4787 It looks like libncursesw is available on Linux, *BSD, Mac OS X. About (Open)Solaris, a libncurses package was created in September 2008, but no Unicode version yet. On Windows, there is a Cygwin port of libncurses, but I don't know if it contains the Unicode version.
-- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From daniel at stutzbachenterprises.com Thu Jan 8 04:28:48 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Wed, 7 Jan 2009 21:28:48 -0600 Subject: [Python-Dev] What's New in Python 2.6: no string exceptions Message-ID: After reading "What's New in Python 2.6" and then upgrading, I quickly noticed an omission: string exceptions are no longer supported and raise a TypeError. It seems like this should be mentioned in the "Porting to Python 2.6" section at minimum, or perhaps more prominently, since this change will break code in many small projects (e.g., code from Python 2.5's tutorial). Were any other previously-deprecated features removed for 2.6? Also, it might be nice if whatever tool tests the code in the tutorial would treat Deprecation warnings as hard errors, so new users don't learn features slated for possible removal. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From skip at pobox.com Thu Jan 8 05:30:11 2009 From: skip at pobox.com (skip at pobox.com) Date: Wed, 7 Jan 2009 22:30:11 -0600 Subject: [Python-Dev] Can someone explain the fast_block_end manipulation? Message-ID: <18789.33107.980231.702304@montanaro.dyndns.org> Everybody seems to be doing stuff with the virtual machine all of a sudden. I thought I would get in on the fun. I am generating functions from the byte code which pretty much just inlines the C code implementing each opcode. The idea is to generate a C function that looks like a small version of PyEval_EvalFrameEx but without the for loop and switch statement. Instead it just contains the C code implementing the opcodes used in that function. For example, this function

    >>> def f(a):
    ...     for i in range(a):
    ...         x = i*i

disassembles to this:

      2           0 SETUP_LOOP              30 (to 33)
                  3 LOAD_GLOBAL              0 (range)
                  6 LOAD_FAST                0 (a)
                  9 CALL_FUNCTION            1
                 12 GET_ITER
            >>   13 FOR_ITER                16 (to 32)
                 16 STORE_FAST               1 (i)

      3          19 LOAD_FAST                1 (i)
                 22 LOAD_FAST                1 (i)
                 25 BINARY_MULTIPLY
                 26 STORE_FAST               2 (x)
                 29 JUMP_ABSOLUTE           13
            >>   32 POP_BLOCK
            >>   33 LOAD_CONST               0 (None)
                 36 RETURN_VALUE

and compiles to this:

    #include "opcode_mini.h"

    PyObject *
    _PyEval_EvalMiniFrameEx(PyFrameObject *f, int throwflag)
    {
        static int minime = 1;
        static int jitting = 1;

        /* most of the stuff at the start of PyEval_EvalFrameEx */
        PyEval_EvalFrameEx_PROLOG();

        /* code length=37 */
        /* nlabels=3, offsets: 13, 32, 33, */
        oparg = 30;
        SETUP_LOOP_IMPL(oparg);            /* 0 */
        oparg = 0;
        LOAD_GLOBAL_IMPL(oparg, 0);        /* 3 */
        oparg = 0;
        LOAD_FAST_IMPL(oparg);             /* 6 */
        oparg = 1;
        CALL_FUNCTION_IMPL(oparg);         /* 9 */
        GET_ITER_IMPL();                   /* 12 */
    __L13:
        FOR_ITER_IMPL(__L32);
        oparg = 1;
        STORE_FAST_IMPL(oparg);            /* 16 */
        oparg = 1;
        LOAD_FAST_IMPL(oparg);             /* 19 */
        oparg = 1;
        LOAD_FAST_IMPL(oparg);             /* 22 */
        BINARY_MULTIPLY_IMPL();            /* 25 */
        oparg = 2;
        STORE_FAST_IMPL(oparg);            /* 26 */
        goto __L13;
    __L32:
        POP_BLOCK_IMPL();                  /* 32 */
    __L33:
        oparg = 0;
        LOAD_CONST_IMPL(oparg);            /* 33 */
        RETURN_VALUE_IMPL();               /* 36 */

        /* most of the stuff at the end of PyEval_EvalFrameEx */
        PyEval_EvalFrameEx_EPILOG();
    }

Besides eliminating opcode decoding I figure it might give the compiler lots of optimization opportunities. Time will tell though. I have just about everything implemented but I'm a bit stuck trying to figure out how to deal with the block manipulation code in PyEval_EvalFrameEx after the fast_block_end label. JUMP* opcodes in the interpreter turn into gotos in the generated code. It seems I will have to replace any JUMP instructions in the epilog with computed gotos. In particular, I am a little confused by this construct:

    if (b->b_type == SETUP_LOOP && why == WHY_CONTINUE) {
        /* For a continue inside a try block,
           don't pop the block for the loop. */
        PyFrame_BlockSetup(f, b->b_type, b->b_handler,
                           b->b_level);
        why = WHY_NOT;
        JUMPTO(PyLong_AS_LONG(retval));
        Py_DECREF(retval);
        break;
    }

The top of stack has been popped into retval. I think that value was maybe pushed here:

    if (b->b_type == SETUP_FINALLY) {
        if (why & (WHY_RETURN | WHY_CONTINUE))
            PUSH(retval);
        PUSH(PyLong_FromLong((long)why));
        why = WHY_NOT;
        JUMPTO(b->b_handler);
        break;
    }

but I'm confused. I don't see anyplace obvious where a value resembling a jump offset or jump target was pushed onto the stack. What's with that first JUMPTO in the SETUP_LOOP/WHY_CONTINUE code? Is the stack/block cleanup code documented anywhere? Wiki? Pointers to python-dev threads? I found this brief thread from last July: http://mail.python.org/pipermail/python-dev/2008-July/thread.html#81480 An svn annotate suggests that much of the fun in this code began with a checkin by Jeremy Hylton (r19260). It references an old SF patch (102989) but I can't locate that in the current issue tracker to read the discussion. Is there some way I can retrieve that? The obvious http://bugs.python.org/issue102989 didn't work for me. Thx, Skip From skip at pobox.com Thu Jan 8 05:32:31 2009 From: skip at pobox.com (skip at pobox.com) Date: Wed, 7 Jan 2009 22:32:31 -0600 Subject: [Python-Dev] Can someone explain the fast_block_end manipulation? Message-ID: <18789.33247.174385.860654@montanaro.dyndns.org> > I don't see anyplace obvious where a value resembling a jump offset or > jump target was pushed onto the stack. Duh. Found it about one minute after sending... CONTINUE_LOOP.
Skip From asmodai at in-nomine.org Thu Jan 8 07:35:55 2009 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Thu, 8 Jan 2009 07:35:55 +0100 Subject: [Python-Dev] [Py3k] curses module and libncursesw library In-Reply-To: <200901080223.55104.victor.stinner@haypocalc.com> References: <200901080223.55104.victor.stinner@haypocalc.com> Message-ID: <20090108063554.GI1009@nexus.in-nomine.org> -On [20090108 02:23], Victor Stinner (victor.stinner at haypocalc.com) wrote: >It looks like libncursesw is available on Linux, *BSD, Mac OS X. On FreeBSD I know it is for 7.x, but I am not sure about 6.x. -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Earth to earth, ashes to ashes, dust to dust... From theller at ctypes.org Thu Jan 8 09:31:45 2009 From: theller at ctypes.org (Thomas Heller) Date: Thu, 08 Jan 2009 09:31:45 +0100 Subject: [Python-Dev] a few strdup() questions... In-Reply-To: <49650103.6010206@v.loewis.de> References: <200901071230.35918.doomster@knuut.de> <49650103.6010206@v.loewis.de> Message-ID: >> BTW: there is another implementation (called my_strdup) in >> Modules/_ctypes/_ctypes_test.c, why not use the one in Python/strdup.c there? > > I guess that's historical, from the times when ctypes was still a > separate package. my_strdup is an exported function in _ctypes_test.pyd (on Windows), it is only used in the ctypes tests. Thomas From doomster at knuut.de Thu Jan 8 10:39:54 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Thu, 8 Jan 2009 10:39:54 +0100 Subject: [Python-Dev] a few strdup() questions... In-Reply-To: References: <200901071230.35918.doomster@knuut.de> Message-ID: <200901081039.54836.doomster@knuut.de> On Wednesday 07 January 2009 16:30:23 Daniel Stutzbach wrote: > On Wed, Jan 7, 2009 at 5:30 AM, Ulrich Eckhardt wrote: > > MS Windows CE doesn't provide strdup(), so where should I put it? 
I guess > > I should just compile in Python/strdup.c, right? > > I'm not an expert on Windows CE, but I believe it calls the function > "_strdup()": > > http://msdn.microsoft.com/en-us/library/ms861162.aspx Search with "Look in VC Include Directories" yields nothing. You are right though, the CE6 SDK I have does declare _strdup in stdlib.h and also provides an implementation to link with. Summary: redefine strdup and work around the broken search feature. thanks Uli From mal at egenix.com Thu Jan 8 10:48:53 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 08 Jan 2009 10:48:53 +0100 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> Message-ID: <4965CC05.1070105@egenix.com> On 2009-01-08 01:01, Collin Winter wrote: > On Wed, Jan 7, 2009 at 2:35 PM, Brett Cannon wrote: >> On Wed, Jan 7, 2009 at 10:57, M.-A. Lemburg wrote: >> [SNIP] >>> BTW: The _codecsmodule.c file is a 4 spaces indent file as well (just >>> like all Unicode support source files). Someone apparently has added >>> tabs when adding support for Py_buffers. >>> >> It looks like this formatting mix-up is just going to get worse for >> the next few years while the 2.x series is still being worked on. >> Should we just bite the bullet and start adding modelines for Vim and >> Emacs to .c/.h files that are written in the old 2.x style? For Vim I >> can then update the vimrc in Misc/Vim to then have 4-space indent be >> the default for C files. > > Or better yet, really bite the bullet and just reindent everything to > spaces. Not every one uses vim or emacs, nor do all tools understand > their modelines. FYI, there are options to svn blame and git to skip > whitespace-only changes. +1... 
and this should be done for both trunk and the 3.x branch in a single checkin to resync them. svn blame -x "-b" will do the trick for SVN. Perhaps there's even some .subversion/config option to set this globally. The question really is: How often do Python developers use svn blame ? If this is only done for a file or two every now and then, I don't think that adding the above option to the command would be much to ask for. The question to put up against this is: How often do you get irritated by lines not being correctly indented ? -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 08 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From python at rcn.com Thu Jan 8 10:52:23 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 8 Jan 2009 01:52:23 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> Message-ID: <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> From: "M.-A. Lemburg" > The question to put up against this is: How often do you get > irritated by lines not being correctly indented ? Basically never. 
Raymond From victor.stinner at haypocalc.com Thu Jan 8 11:27:58 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Thu, 8 Jan 2009 11:27:58 +0100 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <4965CC05.1070105@egenix.com> References: <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> Message-ID: <200901081127.58216.victor.stinner@haypocalc.com> On Thursday 08 January 2009 10:48:53, M.-A. Lemburg wrote: > svn blame -x "-b" will do the trick for SVN. Perhaps there's even > some .subversion/config option to set this globally. > > The question really is: How often do Python developers use svn blame ? I use "svn blame" to find a revision number, but then I read the commit. There are not only whitespace changes; sometimes a newline is inserted, or a function is just moved, or... > The question to put up against this is: How often do you get > irritated by lines not being correctly indented ? Regularly, when I work on patches. Some files in Modules/*.c mix spaces and tabs :-/ I would prefer spaces everywhere or tabs everywhere, but please don't mix both. -- I also hate trailing spaces.
My editor is configured to remove them, and so I have to use >svn diff --diff-cmd="/usr/bin/diff" -x "-ub"< to ignore any space change, which breaks some patches :-/ So if you choose to change the indentation, it would be nice to also run >sed -i "s/[ \t]\+$//g"< ;-) -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From kristjan at ccpgames.com Thu Jan 8 12:36:49 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 8 Jan 2009 11:36:49 +0000 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <4965CC05.1070105@egenix.com> References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D18845C1@exchis.ccp.ad.local> Oh dear. C code indented by spaces? I'll give up programming then. Just set your editor tab size to 4 and all is well. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of M.-A. Lemburg Sent: 8. janúar 2009 09:49 To: Collin Winter Cc: Antoine Pitrou; python-dev at python.org Subject: Re: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) > Or better yet, really bite the bullet and just reindent everything to > spaces. Not everyone uses vim or emacs, nor do all tools understand > their modelines. FYI, there are options to svn blame and git to skip > whitespace-only changes. +1... and this should be done for both trunk and the 3.x branch in a single checkin to resync them. From mal at egenix.com Thu Jan 8 13:19:02 2009 From: mal at egenix.com (M.-A.
Lemburg) Date: Thu, 08 Jan 2009 13:19:02 +0100 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D18845C1@exchis.ccp.ad.local> References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <930F189C8A437347B80DF2C156F7EC7F04D18845C1@exchis.ccp.ad.local> Message-ID: <4965EF36.4040701@egenix.com> On 2009-01-08 12:36, Kristján Valur Jónsson wrote: > Oh dear. C code indented by spaces? > I'll give up programming then. > Just set your editor tab size to 4 and all is well. I know this is flame bait, but TABs are 8 spaces in Python land :-) and most C files in Python that contain TABs and mix them with spaces rely on this. BTW: I don't blame anyone for the mixup - some editors simply go ahead and convert 8-space leading whitespace into TABs without the user knowing about this... after all, white on white looks all white in the end ;-) (there are even some steganographic systems out there, applying this scheme to embed data into text files). In any case, I think I need to remind people of PEP 7: Style Guide for C Code ... http://www.python.org/dev/peps/pep-0007/ It already says: "At some point, the whole codebase may be converted to use only 4-space indents." > K > > -----Original Message----- > From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of M.-A. Lemburg > Sent: 8. janúar 2009 09:49 > To: Collin Winter > Cc: Antoine Pitrou; python-dev at python.org > Subject: Re: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) >> Or better yet, really bite the bullet and just reindent everything to >> spaces. Not everyone uses vim or emacs, nor do all tools understand >> their modelines. FYI, there are options to svn blame and git to skip
FYI, there are options to svn blame and git to skip >> whitespace-only changes. > > +1... and this should be done for both trunk and the 3.x branch > in a single checkin to resync them. > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/mal%40egenix.com -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 08 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From hodgestar+pythondev at gmail.com Thu Jan 8 13:42:09 2009 From: hodgestar+pythondev at gmail.com (Simon Cross) Date: Thu, 8 Jan 2009 14:42:09 +0200 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: Message-ID: On Sat, Jan 3, 2009 at 11:22 PM, Luke Kenneth Casson Leighton wrote: > secondly, i want a python25.lib which i can use to cross-compile > modules for poor windows users _despite_ sticking to my principles and > keeping my integrity as a free software developer. If this eventually leads to being able to compile Python software for Windows under Wine (using for example, py2exe) it would make my life a lot easier. 
Schiavo Simon From cournape at gmail.com Thu Jan 8 14:11:28 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 8 Jan 2009 22:11:28 +0900 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: Message-ID: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> On Thu, Jan 8, 2009 at 9:42 PM, Simon Cross wrote: > On Sat, Jan 3, 2009 at 11:22 PM, Luke Kenneth Casson Leighton > wrote: >> secondly, i want a python25.lib which i can use to cross-compile >> modules for poor windows users _despite_ sticking to my principles and >> keeping my integrity as a free software developer. > > If this eventually leads to being able to compile Python software for > Windows under Wine (using for example, py2exe) it would make my life a > lot easier. You can already do that: just install windows python under wine. It works quite well, actually. You need mingw, though, of course - Visual Studio is far from being usable on wine. cheers, David From lkcl at lkcl.net Thu Jan 8 14:53:57 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 8 Jan 2009 13:53:57 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: Message-ID: On Thu, Jan 8, 2009 at 12:42 PM, Simon Cross wrote: > On Sat, Jan 3, 2009 at 11:22 PM, Luke Kenneth Casson Leighton > wrote: >> secondly, i want a python25.lib which i can use to cross-compile >> modules for poor windows users _despite_ sticking to my principles and >> keeping my integrity as a free software developer. > > If this eventually leads to being able to compile Python software for > Windows under Wine (using for example, py2exe) it would make my life a > lot easier. that looks like being an accidental side-effect, yes. 
where i'm up to so far:

* i'm using -I $(src_dir)/PC at the beginning of the includes, so that PC/pyconfig.h gets pulled in as a priority over-and-above the auto-generated pyconfig.h (yukkk - i know); this makes the job of building almost-exactly-like-the-visual-studio-build much easier.

* i'm manually compiling-linking the Modules/*.c and PC/*modules.c as i also pulled in PC/config.c and left out Modules/config.c - that got me even further

* as a result i've actually got a python.exe.so that.... damnit, it works! the winreg test actually passes for example!

the fly in the ointment i'm presently trying to track down: len([1,2]) returns 1L which of course screws up sre_parse.py at line 515 with "TypeError: __nonzero__ should return an int" because duh "if subpattern" is returning a Long not an Int. tracking this down further, it would appear that there's some lovely logic in PyInt_FromSsize_t() which i believe is what's getting called from PyInt_AsSsize_t() which is what's getting called from slot_sq_length() (i think) - and, although in this case this build is _definitely_ returning a Long type when it shouldn't, if the value is ever over LONG_MAX then the result will be that "if subpattern" will definitely fail. but... i mean... if ever anyone passes in over 2**31 items into sre_parse then they _deserve_ to have their code fail, but that's not the point.

anyway, i'm floundering around a bit and making a bit of a mess of the code, looking for where LONG_MAX is messing up. l.
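As an aside for readers on Python 3: the `__nonzero__` slot described above became `__bool__`, and `__len__` must still return a plain int — a wrong return type fails truth-testing in much the same way. A minimal sketch (assuming Python 3; the `Weird` class is purely illustrative):

```python
class Weird:
    def __len__(self):
        # Deliberately return the wrong type, mimicking a broken
        # sq_length slot like the one discussed above.
        return "two"

try:
    bool(Weird())          # truth-testing falls back to __len__
except TypeError as exc:
    caught = type(exc).__name__

print(caught)              # prints 'TypeError'
```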
From lkcl at lkcl.net Thu Jan 8 15:02:01 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 8 Jan 2009 14:02:01 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> References: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 1:11 PM, David Cournapeau wrote: > On Thu, Jan 8, 2009 at 9:42 PM, Simon Cross > wrote: >> On Sat, Jan 3, 2009 at 11:22 PM, Luke Kenneth Casson Leighton >> wrote: >>> secondly, i want a python25.lib which i can use to cross-compile >>> modules for poor windows users _despite_ sticking to my principles and >>> keeping my integrity as a free software developer. >> >> If this eventually leads to being able to compile Python software for >> Windows under Wine (using for example, py2exe) it would make my life a >> lot easier. > > You can already do that: just install windows python under wine. i tried that a few months ago - the builder requires the MS installer, which segfaulted on my installation of wine (i installed it using winetricks) which left me flummoxed because other people report successful use of MSI. i also don't want "just" the python.exe, i want the libpython25.a, i want the libpython25.lib, so as to be able to build libraries such as pywebkit-gtk for win32 (cross-compiled using winegcc of course) unpacking the python installer .exe (which was, again, created with a proprietary program) i found that all of the contents were name-mangled and so were useless: i wasn't about to work my way through nearly a hundred files, manually, when i can just as well get python compiling under wine once and then stand a good chance of being able to repeat the exercise in the future, also for python 2.6.
so, basically, i really don't want to use visual studio, i really don't want to install a proprietary MSI installer, i really don't want a proprietarily-built python25.exe, and i really don't want a proprietarily-packed installation. i'd just ... much rather be completely independent of proprietary software when it comes to building free software. .... onwards.... :) From cournape at gmail.com Thu Jan 8 15:05:54 2009 From: cournape at gmail.com (David Cournapeau) Date: Thu, 8 Jan 2009 23:05:54 +0900 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> Message-ID: <5b8d13220901080605l6ecc7224ve7d579612f09abd1@mail.gmail.com> On Thu, Jan 8, 2009 at 11:02 PM, Luke Kenneth Casson Leighton wrote: > On Thu, Jan 8, 2009 at 1:11 PM, David Cournapeau wrote: >> On Thu, Jan 8, 2009 at 9:42 PM, Simon Cross >> wrote: >>> On Sat, Jan 3, 2009 at 11:22 PM, Luke Kenneth Casson Leighton >>> wrote: >>>> secondly, i want a python25.lib which i can use to cross-compile >>>> modules for poor windows users _despite_ sticking to my principles and >>>> keeping my integrity as a free software developer. >>> >>> If this eventually leads to being able to compile Python software for >>> Windows under Wine (using for example, py2exe) it would make my life a >>> lot easier. >> >> You can already do that: just install windows python under wine. > > i tried that a few months ago - the builder requires the MS > installer, which segfaulted on my installation of wine (i installed it > using winetricks) which left me flummoxed because other people report > successful use of MSI. > Hm, I could definitely install python - I have python in wine ATM. 
wine python -c 'import sys; print sys.version' -> 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] IIRC, I could build numpy on it, which is far from a trivial package from a build POV :) I think it crashes on wine, though - which is why I did not pursue it so far. But I believe python itself at least is usable in wine, depending on what you are trying to do. David From lkcl at lkcl.net Thu Jan 8 15:57:03 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 8 Jan 2009 14:57:03 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: Message-ID: > anyway, i'm floundering around a bit and making a bit of a mess of the > code, looking for where LONG_MAX is messing up. fixed with this:

PyObject *
PyInt_FromSsize_t(Py_ssize_t ival)
{
    if ((long)ival >= (long)LONG_MIN && (long)ival <= (long)LONG_MAX) {
        return PyInt_FromLong((long)ival);
    }
    return _PyLong_FromSsize_t(ival);
}

raised as http://bugs.python.org/issue4880 next bug: distutils.sysconfig.get_config_var('srcdir') is returning None (!!)
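The shape of that C fix — take the int path whenever the value fits in the platform `long` range, and fall back to the long path only otherwise — can be sketched in Python. The helper name and the return tags are hypothetical, and `LONG_MIN`/`LONG_MAX` assume a 32-bit `long` as on the win32 build discussed here:

```python
LONG_MIN, LONG_MAX = -2**31, 2**31 - 1   # 32-bit C long, as on win32

def int_from_ssize_t(ival):
    # Values within the C long range come back as a plain int
    # (tagged "int" here); only genuinely out-of-range values
    # take the long path (tagged "long").
    if LONG_MIN <= ival <= LONG_MAX:
        return ("int", ival)
    return ("long", ival)

assert int_from_ssize_t(2) == ("int", 2)          # the len([1, 2]) case
assert int_from_ssize_t(2**40) == ("long", 2**40)
```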
the project's not complete, but can be regarded as successful so far. i think the best thing is being able to do "import _winreg" on a linux system. that absolutely tickles me silly :) been running a few tests - test_mmap.py is a hoot, esp. the Try opening a bad file descriptor... that causes a wine segfault. if anyone wants to play with this further, source is here: http://github.com/lkcl/pythonwine/tree/python_2.5.2_wine at some point - if i feel like taking this further, and if people offer some advice and hints on where to go (with e.g. setup.py) i'll continue. then once that's done i'll do python 2.6 as well. l. From ndbecker2 at gmail.com Thu Jan 8 17:00:52 2009 From: ndbecker2 at gmail.com (Neal Becker) Date: Thu, 08 Jan 2009 11:00:52 -0500 Subject: [Python-Dev] improvements for mmap Message-ID: I'd like to suggest some improvements for mmap 1) mmap assign to slice only accepts a string. This is unfortunate, because AFAIK a string can only be created by copying data, and this is wasteful for large data transfers. mmap should accept any object supporting the buffer protocol as well as string. 2) buffer (mmap_obj) gives a read-only buffer. There should be a way to make this read-write. 3) mmap_obj does not support weak refs. From steve at holdenweb.com Thu Jan 8 17:29:52 2009 From: steve at holdenweb.com (Steve Holden) Date: Thu, 08 Jan 2009 11:29:52 -0500 Subject: [Python-Dev] improvements for mmap In-Reply-To: References: Message-ID: Neal Becker wrote: > I'd like to suggest some improvements for mmap > > 1) mmap assign to slice only accepts a string. This is unfortunate, because AFAIK a string can only be created by copying data, and this is wasteful for large data transfers. mmap should accept any object supporting the buffer protocol as well as string. > > 2) buffer (mmap_obj) gives a read-only buffer. There should be a way to make this read-write. > > 3) mmap_obj does not support weak refs. > Can you add these to the tracker as a feature request, please?
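For reference, later Python versions did address Neal's first two points: `mmap` slice assignment accepts any bytes-like object, and a writable `memoryview` (the successor to `buffer`) gives zero-copy read-write access. A quick sketch, assuming Python 3 on a platform that supports anonymous mappings:

```python
import mmap

m = mmap.mmap(-1, 16)        # anonymous 16-byte mapping
m[0:5] = b"hello"            # slice assignment takes bytes-like objects (point 1)
assert m[0:5] == b"hello"

view = memoryview(m)         # writable, zero-copy view of the map (point 2)
view[5:6] = b"!"
assert m[0:6] == b"hello!"

view.release()               # release the exported buffer before closing
m.close()
```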
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From aahz at pythoncraft.com Thu Jan 8 17:33:45 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 8 Jan 2009 08:33:45 -0800 Subject: [Python-Dev] What's New in Python 2.6: no string exceptions In-Reply-To: References: Message-ID: <20090108163345.GA21056@panix.com> On Wed, Jan 07, 2009, Daniel Stutzbach wrote: > > After reading "What's New in Python 2.6" and then upgrading, I quickly > noticed an omission: string exceptions are no longer supported and raise a > TypeError. Please file a report on bugs.python.org so it doesn't get lost -- it's already Thursday with no response. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W. Kernighan From daniel at stutzbachenterprises.com Thu Jan 8 17:46:39 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Thu, 8 Jan 2009 10:46:39 -0600 Subject: [Python-Dev] What's New in Python 2.6: no string exceptions In-Reply-To: <20090108163345.GA21056@panix.com> References: <20090108163345.GA21056@panix.com> Message-ID: On Thu, Jan 8, 2009 at 10:33 AM, Aahz wrote: > On Wed, Jan 07, 2009, Daniel Stutzbach wrote: > > After reading "What's New in Python 2.6" and then upgrading, I quickly > > noticed an omission: string exceptions are no longer supported and raise > a > > TypeError. > > Please file a report on bugs.python.org so it doesn't get lost -- it's > already Thursday with no response. Benjamin Peterson sent me a private email stating that he added some text and checked it in as r68388. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tutufan at gmail.com Thu Jan 8 18:05:07 2009 From: tutufan at gmail.com (Mike Coleman) Date: Thu, 8 Jan 2009 11:05:07 -0600 Subject: [Python-Dev] error in doc for fcntl module In-Reply-To: References: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> Message-ID: <3c6c07c20901080905t7427cbb3k7e357617cc6d54a1@mail.gmail.com> One problem is that API wrappers like this sometimes include extra functionality. When I ran across this example, I wondered whether the Python interface had been enhanced to work like this

# set these three flags
rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY)
rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_APPEND)
rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NOATIME)

Something like this might be nice, but after staring at it for another minute, I realized that the Python interface itself was standard, and that it was the example itself that was confusing me. (I've been programming Unix/POSIX for over 20 years, so perhaps I simply outsmarted myself, or am an idiot. Still, I found it confusing.) One of the many virtues of Python is that it's oriented towards learning/teaching. It seems like it would be useful in this case to have an example that shows best practice (as in Stevens/Rago and other similar texts), rather than one that will merely usually work on present systems. If it makes any difference, I'd be happy to send a patch. Is there any reason not to change this? Mike On Wed, Jan 7, 2009 at 6:36 PM, Guido van Rossum wrote: > Well my Linux man page says that the only flags supported are > O_APPEND, O_ASYNC, O_DIRECT, O_NOATIME, and O_NONBLOCK; and all of > those are typically off -- so I'm not sure that it's a mistake or need > correcting. These APIs should only be used by people who know what > they're doing anyways; the examples are meant to briefly show the call > format. > > On Wed, Jan 7, 2009 at 1:31 PM, Mike Coleman wrote: >> In the doc page for the fcntl module, the example below is given.
>> This seems like an error, or at least very misleading, as the normal >> usage is to get the flags (F_GETFL), set or unset the bits you want to >> change, then set the flags (F_SETFL). A reader might think that the >> example below merely sets O_NDELAY, but it also stomps all of the >> other bits to zero. >> >> If someone can confirm my thinking, this ought to be changed. >> >> import struct, fcntl, os >> >> f = open(...) >> rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > From guido at python.org Thu Jan 8 18:36:19 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Jan 2009 09:36:19 -0800 Subject: [Python-Dev] error in doc for fcntl module In-Reply-To: <3c6c07c20901080905t7427cbb3k7e357617cc6d54a1@mail.gmail.com> References: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> <3c6c07c20901080905t7427cbb3k7e357617cc6d54a1@mail.gmail.com> Message-ID: Unless documented otherwise, the Python wrappers for system calls are as low-level as possible, sticking as close to the system call semantics as possible. I do think you may be reading too much into the whole thing. On Thu, Jan 8, 2009 at 9:05 AM, Mike Coleman wrote: > One problem is that API wrappers like this sometimes include extra > functionality. When I ran across this example, I wondered whether the > Python interface had been enhanced to work like this > > # set these three flags > rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) > rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_APPEND) > rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NOATIME) > > Something like this might be nice, but after staring at it for another > minute, I realized that the Python interface itself was standard, and > that it was the example itself that was confusing me. (I've been > programming Unix/POSIX for over 20 years, so perhaps I simply > outsmarted myself, or am an idiot. Still, I found it confusing.) 
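The read-modify-write idiom Mike describes — fetch the current flags with `F_GETFL`, OR in the bit you want, then write the result back with `F_SETFL` — looks like this in practice (a sketch assuming a POSIX system; a pipe stands in for the opened file):

```python
import fcntl
import os

r, w = os.pipe()

flags = fcntl.fcntl(r, fcntl.F_GETFL)                 # 1. read current flags
fcntl.fcntl(r, fcntl.F_SETFL, flags | os.O_NONBLOCK)  # 2. write back with one bit added

# Unlike a bare F_SETFL with a single constant, any other flag bits
# that were already set on the descriptor survive this call.
assert fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK

os.close(r)
os.close(w)
```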
> > One of the many virtues of Python is that it's oriented towards > learning/teaching. It seems like it would be useful in this case to > have an example that shows best practice (as in Stevens/Rago and other > similar texts), rather than one that will merely usually work on > present systems. > > If it makes any difference, I'd be happy to send a patch. Is there > any reason not to change this? > > Mike > > > > On Wed, Jan 7, 2009 at 6:36 PM, Guido van Rossum wrote: >> Well my Linux man page says that the only flags supported are >> O_APPEND, O_ASYNC, O_DIRECT, O_NOATIME, and O_NONBLOCK; and all of >> those are typically off -- so I'm not sure that it's a mistake or need >> correcting. These APIs should only be used by people who know what >> they're doing anyways; the examples are meant to briefly show the call >> format. >> >> On Wed, Jan 7, 2009 at 1:31 PM, Mike Coleman wrote: >>> In the doc page for the fcntl module, the example below is given. >>> This seems like an error, or at least very misleading, as the normal >>> usage is to get the flags (F_GETFL), set or unset the bits you want to >>> change, then set the flags (F_SETFL). A reader might think that the >>> example below merely sets O_NDELAY, but it also stomps all of the >>> other bits to zero. >>> >>> If someone can confirm my thinking, this ought to be changed. >>> >>> import struct, fcntl, os >>> >>> f = open(...) 
>>> rv = fcntl.fcntl(f, fcntl.F_SETFL, os.O_NDELAY) >> >> -- >> --Guido van Rossum (home page: http://www.python.org/~guido/) >> > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Thu Jan 8 19:29:39 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 10:29:39 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> Message-ID: On Thu, Jan 8, 2009 at 01:52, Raymond Hettinger wrote: > From: "M.-A. Lemburg" >> >> The question to put up against this is: How often do you get >> irritated by lines not being correctly indented ? > > Basically never. And of course I am the polar opposite: frequently enough that I want to see this fixed. -Brett From guido at python.org Thu Jan 8 19:41:49 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Jan 2009 10:41:49 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> Message-ID: On Thu, Jan 8, 2009 at 10:29 AM, Brett Cannon wrote: > On Thu, Jan 8, 2009 at 01:52, Raymond Hettinger wrote: >> From: "M.-A. Lemburg" >>> >>> The question to put up against this is: How often do you get >>> irritated by lines not being correctly indented ? >> >> Basically never. > > And of course I am the polar opposite: frequently enough that I want > to see this fixed. I'm in the middle -- I don't mind so much if some parts of a file are indented using a different style than other parts. 
But I am adamant that local misalignment is horrible. Since mixing tabs and spaces within one function is bound to lead to local misalignments (either for the folks who set their tabs at 4 or for the folks who set them at 8, as God intended), I want at least within each function the indentation to be all spaces or all tabs. (And yes, the convention of implementing 4-position indents using tabs followed by 4 spaces for odd indents is evil, as it looks horrible for folks whose tabs are set to 4.) Long term (sorry Kristján :-) I prefer 4 spaces per indent level, but not enough to reindent everything. svn blame may have a way to ignore whitespace changes, but it's still a pain to deal with. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Thu Jan 8 19:48:31 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 10:48:31 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> Message-ID: On Thu, Jan 8, 2009 at 10:41, Guido van Rossum wrote: > On Thu, Jan 8, 2009 at 10:29 AM, Brett Cannon wrote: >> On Thu, Jan 8, 2009 at 01:52, Raymond Hettinger wrote: >>> From: "M.-A. Lemburg" >>>> >>>> The question to put up against this is: How often do you get >>>> irritated by lines not being correctly indented ? >>> >>> Basically never. >> >> And of course I am the polar opposite: frequently enough that I want >> to see this fixed. > > I'm in the middle -- I don't mind so much if some parts of a file are > indented using a different style than other parts. But I am adamant > that local misalignment is horrible.
Since mixing tabs and spaces > within one function is bound to lead to local misalignments (either > for the folks who set their tabs at 4 or for the folks who set them at > 8, as God intended), I want at least within each function the > indentation to be all spaces or all tabs. (And yes, the convention of > implementing 4-position indents using tabs followed by 4 spaces for > odd indents is evil, as it looks horrible for folks whose tabs are set > to 4.) > Can we then all agree that a policy of re-indenting per function as changes are made to the code is acceptable but not required? -Brett From brett at python.org Thu Jan 8 20:06:53 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 11:06:53 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 Message-ID: My work rewriting import in pure Python code has reached beta. Basically the code is semantically complete and as backwards-compatible as I can make it short of widespread testing or running on a Windows box. There are still some tweaks here and there I want to make and an API to expose, but __import__ works as expected when run as the import implementation for all unit tests. Knowing how waiting for perfection leads to never finishing, I would like to start figuring out what it will take to get the code added to the standard library of 3.1 with hopes of getting the bootstrapping stuff done so that the C implementation of import can go away in 3.1 as well. I see basically three things that need to be decided upfront. One, does anyone have issues if I check in importlib? We have typically said code has to have been selected as best-of-breed by the community first, so I realize I am asking for a waiver on this one. Two, what should the final name be? I originally went with importlib since this code was developed outside of the trunk, but I can see some people suggesting using the imp name. 
That's fine although that does lead to the question of what to do with the current imp. It could be renamed _imp, but then that means what is currently named _importlib would have to be renamed to something else as well. Maybe imp._bootstrap? Plus I always viewed imp as the place where really low-level, C-based stuff lived. Otherwise importlib can slowly subsume the stuff in imp that is still useful. Three, there are still some structural changes to the code that I want to make. I can hold off on checking in the code until these changes are made, but as I said earlier, I know better than to wait forever for perfection. And because I know people will ask: no, I do not plan to backport all the code to 2.7. I want this to be a carrot to people to switch to 3.x. But I will backport the import_module function I wrote to 2.7 so people do have that oft-requested feature since it is a really simple bit of Python code. -Brett From brett at python.org Thu Jan 8 20:25:07 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 11:25:07 -0800 Subject: [Python-Dev] Is raising SystemError during relative import the best solution? Message-ID: So it turns out that if you try to do a relative import where a parent is not loaded, it raises a SystemError. This has been in there since Guido added package support back in the day. But this seems more like an ImportError than a SystemError to me. My guess is that the original purpose was to signify someone specified some relative import name without the proper stuff to make the name resolve to what it should be. But that to me is still an ImportError as the name came out wrong, not that the system did something incorrectly. So I would like to propose to remove the SystemError and make it an ImportError. Anyone object? 
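For what it's worth, this is how the proposal eventually played out: in modern Python 3 a relative import with no known parent package raises ImportError, not SystemError. A quick check, assuming Python 3 run as a top-level script (so there is no parent package to resolve against):

```python
# At top level __package__ is empty, so the relative import cannot
# be resolved and the lookup fails with ImportError.
try:
    from . import definitely_not_there
except ImportError as exc:
    error_name = type(exc).__name__

print(error_name)   # prints 'ImportError', not 'SystemError'
```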
-Brett From p.f.moore at gmail.com Thu Jan 8 20:26:39 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 8 Jan 2009 19:26:39 +0000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: Message-ID: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> 2009/1/8 Brett Cannon : > My work rewriting import in pure Python code has reached beta. > Basically the code is semantically complete and as > backwards-compatible as I can make it short of widespread testing or > running on a Windows box. I should have done this earlier, sorry. A quick test on Windows XP, using the released Python 3.0 installer and a hacked regrtest.bat (which is basically regrtest.sh converted to Windows bat file syntax) gives: >regrtest.bat \Apps\Python30\python.exe Traceback (most recent call last): File "_importlib.py", line 836, in _import_module loader = self._search_meta_path(name, path) File "_importlib.py", line 751, in _search_meta_path raise ImportError("No module named %s" % name) ImportError: No module named test During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 1, in File "_importlib.py", line 1047, in __call__ self._import_full_module(name) File "_importlib.py", line 887, in _import_full_module self._import_module(current_name, path_list) File "_importlib.py", line 840, in _import_module loader = self._search_std_path(name, path) File "_importlib.py", line 798, in _search_std_path importer = self._sys_path_importer(entry) File "_importlib.py", line 766, in _sys_path_importer return self.default_path_hook(path_entry) File "_importlib.py", line 245, in chained_fs_path_hook absolute_path = _path_absolute(path_entry) File "_importlib.py", line 112, in _path_absolute return _os._getfullpathname(path) WindowsError: [Error 2] The system cannot find the file specified: '' Looks like ntpath._getfullpathname doesn't like an empty string as an argument. 
The following patch seems to fix this: --- _importlib.py.orig 2009-01-03 19:50:22.121422900 +0000 +++ _importlib.py 2009-01-08 19:23:06.218750000 +0000 @@ -109,6 +109,8 @@ def _path_absolute(path): """Replacement for os.path.abspath.""" try: + if path == '': + return _os._getfullpathname(_os._getcwd()) return _os._getfullpathname(path) except AttributeError: if path.startswith('/'): I then get the following output: >regrtest.bat \Apps\Python30\python.exe test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_SimpleHTTPServer test___all__ test___future__ test__locale test__locale skipped -- cannot import name RADIXCHAR test_abc test_abstract_numbers test_array test_ast test_asynchat test_asyncore test_atexit test_audioop test_augassign test_base64 test_bigaddrspace test_bigmem test_binascii test test_binascii failed -- Traceback (most recent call last): File "C:\Apps\Python30\lib\test\test_binascii.py", line 177, in test_no_binary_strings self.assertRaises(TypeError, f, "test") AssertionError: TypeError not raised by crc32 test_binhex test_binop test_bisect test_bool test_bufio test_bytes test_bz2 test_calendar test_call test_capi test_cfgparser test_cgi test_charmapcodec test_class test_cmath test_cmd test_cmd_line test_cmd_line_script test_code test_codeccallbacks test_codecencodings_cn test_codecencodings_hk test_codecencodings_jp test_codecencodings_kr test_codecencodings_tw test_codecmaps_cn test_codecmaps_cn skipped -- Use of the `urlfetch' resource not enabled test_codecmaps_hk test_codecmaps_hk skipped -- Use of the `urlfetch' resource not enabled test_codecmaps_jp test_codecmaps_jp skipped -- Use of the `urlfetch' resource not enabled test_codecmaps_kr test_codecmaps_kr skipped -- Use of the `urlfetch' resource not enabled test_codecmaps_tw test_codecmaps_tw skipped -- Use of the `urlfetch' resource not enabled test_codecs test_codeop test_coding test_collections test_colorsys test_compare 
test_compile test_complex test_contains test_contextlib test_copy test_copyreg test_cprofile test_crypt test_crypt skipped -- No module named crypt test_csv test_ctypes test_curses test_curses skipped -- No module named _curses test_datetime test_dbm test_dbm_dumb test_dbm_gnu test_dbm_gnu skipped -- No module named _gdbm test_dbm_ndbm test_dbm_ndbm skipped -- No module named _dbm test_decimal test_decorators test_defaultdict test_deque test_descr test_descrtut test_dictcomps test_dictviews test_difflib test_dis test_docxmlrpc test_dummy_thread test_dummy_threading test_email test_enumerate test_eof test_epoll test_epoll skipped -- test works only on Linux 2.6 test_errno test_exception_variations test_extcall test_fcntl test_fcntl skipped -- No module named fcntl test_file test_filecmp test_fileinput test_fileio test_float test_fnmatch test_fork1 test_fork1 skipped -- os.fork not defined -- skipping test_fork1 test_format test_fractions test_frozen test_ftplib test_funcattrs test_functools test_future test_future3 test_future4 test_future5 test_gc test_generators test_genericpath test_genexps test_getargs2 test_getopt test_gettext test_glob test_global test_grp test_grp skipped -- No module named grp test_gzip test_hash test_hashlib test_heapq test_hmac test_htmlparser test_http_cookiejar test_http_cookies test_httplib test_httpservers test_imaplib test_imp test_importhooks test_index test_inspect test_int test_int_literal test_io Testing large file ops skipped on win32. It requires 2147483648 bytes and a long time. Use 'regrtest.py -u largefile test_io' to run it. 
test_ioctl test_ioctl skipped -- No fcntl or termios module test_isinstance test_iter test_iterlen test_itertools test_json test_keywordonlyarg test_kqueue test_kqueue skipped -- test works only on BSD test_largefile test_largefile skipped -- test requires 2500000000 bytes and a long time to run test_list test_listcomps test_locale test_logging test_long test_longexp test_macpath test_mailbox test_marshal test_math test_memoryio test_memoryview test_metaclass test_mimetypes test_minidom test_mmap test_module test_modulefinder test_multibytecodec test_multibytecodec_support test_multiprocessing test_multiprocessing skipped -- No module named test.test_support test_mutants test_netrc test_nis test_nis skipped -- No module named nis test_normalization test_normalization skipped -- Use of the `urlfetch' resource not enabled test_ntpath test_openpty test_openpty skipped -- No openpty() available. test_operator test_optparse test_os test_ossaudiodev test_ossaudiodev skipped -- Use of the `audio' resource not enabled test_parser Expecting 's_push: parser stack overflow' in next line s_push: parser stack overflow test_peepholer test_pep247 test_pep277 test_pep292 test_pep3131 test_pep352 test_pickle test_pickletools test_pipes test_pipes skipped -- pipes module only works on posix test_pkgimport test_pkgutil test_platform test_plistlib test_poll test_poll skipped -- select.poll not defined -- skipping test_poll test_popen test_poplib test_posix test_posix skipped -- posix is not available test_posixpath test_pow test_pprint test_print test_profile test_profilehooks test_property test_pstats test_pty test_pty skipped -- No module named fcntl test_pwd test_pwd skipped -- No module named pwd test_pyclbr test_pyexpat test_queue test_quopri test_raise test_random test_range test_re test_reprlib test_resource test_resource skipped -- No module named resource test_richcmp test_robotparser test_runpy test_sax test_scope test_select test_set test_setcomps test_shelve test_shutil 
test_signal test_signal skipped -- Can't test signal on win32 test_site test_slice test_smtplib test_socket test_socketserver test_socketserver skipped -- Use of the `network' resource not enabled test_sort test_sqlite test_ssl test_startfile test_strftime test_string test_stringprep test_strlit test_strptime test_struct test_structmembers test_structseq test_subprocess a DOS box should flash briefly ... . this bit of output is from a test of stdout in a different process ... test_sundry test_super test_symtable test_syntax test_sys test_syslog test_syslog skipped -- No module named syslog test_tarfile test_tcl test_telnetlib test_tempfile test_textwrap test_thread test_threaded_import test_threadedtempfile test_threading test_threading_local test_threadsignals test_threadsignals skipped -- Can't test signal on win32 test_time test_timeout test_timeout skipped -- Use of the `network' resource not enabled test_tokenize test_trace test_traceback test_tuple test_typechecks test_ucn test_unary test_unicode test_unicode_file test_unicodedata test_univnewlines test_unpack test_unpack_ex test_urllib test_urllib2 test_urllib2_localnet test_urllib2net test_urllib2net skipped -- Use of the `network' resource not enabled test_urllibnet test_urllibnet skipped -- Use of the `network' resource not enabled test_urlparse test_userdict test_userlist test_userstring test_uu test_uuid WARNING: uuid.getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._ifconfig_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. WARNING: uuid._unixdll_getnode is unreliable on many platforms. It is disabled until the code and/or test can be fixed properly. 
test_wait3 test_wait3 skipped -- os.fork not defined -- skipping test_wait3 test_wait4 test_wait4 skipped -- os.fork not defined -- skipping test_wait4 test_warnings test_wave test_weakref test_weakset test_winreg test_winsound test_winsound skipped -- Use of the `audio' resource not enabled test_with test_wsgiref test_xdrlib test_xml_etree test_xml_etree_c test_xmlrpc test_xmlrpc_net test_xmlrpc_net skipped -- Use of the `network' resource not enabled test_zipfile test_zipfile64 test_zipfile64 skipped -- test requires loads of disk-space bytes and a long time to run test_zipimport test_zlib 272 tests OK. 1 test failed: test_binascii 40 tests skipped: test__locale test_codecmaps_cn test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr test_codecmaps_tw test_crypt test_curses test_dbm_gnu test_dbm_ndbm test_epoll test_fcntl test_fork1 test_grp test_ioctl test_kqueue test_largefile test_multiprocessing test_nis test_normalization test_openpty test_ossaudiodev test_pipes test_poll test_posix test_pty test_pwd test_resource test_signal test_socketserver test_syslog test_threadsignals test_timeout test_urllib2net test_urllibnet test_wait3 test_wait4 test_winsound test_xmlrpc_net test_zipfile64 2 skips unexpected on win32: test_dbm_ndbm test_multiprocessing Hope this helps, Paul From guido at python.org Thu Jan 8 20:33:57 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 8 Jan 2009 11:33:57 -0800 Subject: [Python-Dev] Is raising SystemError during relative import the best solution? In-Reply-To: References: Message-ID: On Thu, Jan 8, 2009 at 11:25 AM, Brett Cannon wrote: > So it turns out that if you try to do a relative import where a parent > is not loaded, it raises a SystemError. This has been in there since > Guido added package support back in the day. But this seems more like > an ImportError than a SystemError to me. 
My guess is that the original > purpose was to signify someone specified some relative import name > without the proper stuff to make the name resolve to what it should > be. But that to me is still an ImportError as the name came out wrong, > not that the system did something incorrectly. > > So I would like to propose to remove the SystemError and make it an > ImportError. Anyone object? Hm. The SystemError is because this is a logical impossibility -- how could you be doing an import (relative or otherwise) from P.M when P is not loaded? It could only happen if somebody has been removing stuff selectively from sys.modules. Why don't you want this to be a SystemError? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Thu Jan 8 20:50:54 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 11:50:54 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> References: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 11:26, Paul Moore wrote: > 2009/1/8 Brett Cannon : >> My work rewriting import in pure Python code has reached beta. >> Basically the code is semantically complete and as >> backwards-compatible as I can make it short of widespread testing or >> running on a Windows box. > > I should have done this earlier, sorry. 
A quick test on Windows XP, > using the released Python 3.0 installer and a hacked regrtest.bat > (which is basically regrtest.sh converted to Windows bat file syntax) > gives: > >>regrtest.bat \Apps\Python30\python.exe > Traceback (most recent call last): > File "_importlib.py", line 836, in _import_module > loader = self._search_meta_path(name, path) > File "_importlib.py", line 751, in _search_meta_path > raise ImportError("No module named %s" % name) > ImportError: No module named test > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "", line 1, in > File "_importlib.py", line 1047, in __call__ > self._import_full_module(name) > File "_importlib.py", line 887, in _import_full_module > self._import_module(current_name, path_list) > File "_importlib.py", line 840, in _import_module > loader = self._search_std_path(name, path) > File "_importlib.py", line 798, in _search_std_path > importer = self._sys_path_importer(entry) > File "_importlib.py", line 766, in _sys_path_importer > return self.default_path_hook(path_entry) > File "_importlib.py", line 245, in chained_fs_path_hook > absolute_path = _path_absolute(path_entry) > File "_importlib.py", line 112, in _path_absolute > return _os._getfullpathname(path) > WindowsError: [Error 2] The system cannot find the file specified: '' > > Looks like ntpath._getfullpathname doesn't like an empty string as an > argument. The following patch seems to fix this: > > --- _importlib.py.orig 2009-01-03 19:50:22.121422900 +0000 > +++ _importlib.py 2009-01-08 19:23:06.218750000 +0000 > @@ -109,6 +109,8 @@ > def _path_absolute(path): > """Replacement for os.path.abspath.""" > try: > + if path == '': > + return _os._getfullpathname(_os._getcwd()) > return _os._getfullpathname(path) > except AttributeError: > if path.startswith('/'): > Thanks, Paul! I changed it to _os.getcwd() since that's what nt exposes. 
-Brett From brett at python.org Thu Jan 8 21:03:10 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 12:03:10 -0800 Subject: [Python-Dev] Is raising SystemError during relative import the best solution? In-Reply-To: References: Message-ID: On Thu, Jan 8, 2009 at 11:33, Guido van Rossum wrote: > On Thu, Jan 8, 2009 at 11:25 AM, Brett Cannon wrote: >> So it turns out that if you try to do a relative import where a parent >> is not loaded, it raises a SystemError. This has been in there since >> Guido added package support back in the day. But this seems more like >> an ImportError than a SystemError to me. My guess is that the original >> purpose was to signify someone specified some relative import name >> without the proper stuff to make the name resolve to what it should >> be. But that to me is still an ImportError as the name came out wrong, >> not that the system did something incorrectly. >> >> So I would like to propose to remove the SystemError and make it an >> ImportError. Anyone object? > > Hm. The SystemError is because this is a logical impossibility -- how > could you be doing an import (relative or otherwise) from P.M when P > is not loaded? It could only happen if somebody has been removing > stuff selectively from sys.modules. Why don't you want this to be a > SystemError? > Doesn't fit my mental model as nicely as raising ImportError. And as an aside this also makes testing a little bit more of a pain, but that is not the reason I brought this up. But the way you phrased it makes sense for me not to care enough to press this any farther. -Brett From ncoghlan at gmail.com Thu Jan 8 21:35:25 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 09 Jan 2009 06:35:25 +1000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: Message-ID: <4966638D.3010703@gmail.com> Brett Cannon wrote: > One, does anyone have issues if I check in importlib? 
We have > typically said code has to have been selected as best-of-breed by the > community first, so I realize I am asking for a waiver on this one. That rule has never really applied to things that are part of the interpreter itself though (how could it?). My main question would be how it relates to the existing import machinery emulation in pkgutil. If adding importlib lets us largely drop that emulation (instead replacing it with invocations of importlib), then that seems like a big win to me. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Jan 8 21:43:57 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 09 Jan 2009 06:43:57 +1000 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: References: <4964F3DE.9090909@egenix.com> <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> Message-ID: <4966658D.8060206@gmail.com> Brett Cannon wrote: > Can we then all agree that a policy of re-indenting per function as > changes are made to the code is acceptable but not required? Such a rule would certainly make *my* life a lot easier - the reason I find the tabs annoying is because I have my editor set to switch everything to 4 space indents by default, and I have to fiddle with it to get it to keep the tabs when I'm editing functions/files that previously used tabs for indenting. Even if we do adopt such a rule, C patches posted to the tracker should still try to avoid including pure whitespace changes though - leaving the whitespace changes in the patch tends to lead to patches that look like "remove function body, add different function body" when only a couple of lines have actually had significant changes. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From brett at python.org Thu Jan 8 21:52:44 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 12:52:44 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <4966658D.8060206@gmail.com> References: <4964FB13.3060600@egenix.com> <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> <4966658D.8060206@gmail.com> Message-ID: On Thu, Jan 8, 2009 at 12:43, Nick Coghlan wrote: > Brett Cannon wrote: >> Can we then all agree that a policy of re-indenting per function as >> changes are made to the code is acceptable but not required? > > Such a rule would certainly make *my* life a lot easier - the reason I > find the tabs annoying is because I have my editor set to switch > everything to 4 space indents by default, and I have to fiddle with it > to get it to keep the tabs when I'm editing functions/files that > previously used tabs for indenting. > > Even if we do adopt such a rule, C patches posted to the tracker should > still try to avoid including pure whitespace changes though - leaving > the whitespace changes in the patch tends to lead to patches that look > like "remove function body, add different function body" when only a > couple of lines have actually had significant changes. > That's fine with me. Correcting whitespace can be considered a committer's job. -Brett From p.f.moore at gmail.com Thu Jan 8 21:57:29 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 8 Jan 2009 20:57:29 +0000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> Message-ID: <79990c6b0901081257q4c949913s91dc7f874783e876@mail.gmail.com> 2009/1/8 Brett Cannon : > Thanks, Paul! 
I changed it to _os.getcwd() since that's what nt exposes. Ta. I wasn't sure _os.getcwd() returned a full pathname. The only difference between the importlib results and the normal ones seems to be that with importlib, test_multiprocessing is skipped, whereas with the normal import, it fails. The importlib result is test_multiprocessing skipped -- No module named test.test_support where the normal result is test_multiprocessing Traceback (most recent call last): File "", line 1, in File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main Traceback (most recent call last): File "", line 1, in File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main prepare(preparation_data) prepare(preparation_data) File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare file, path_name, etc = imp.find_module(main_name, dirs) ImportError: No module named File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare file, path_name, etc = imp.find_module(main_name, dirs) ImportError: No module named Traceback (most recent call last): File "", line 1, in File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main prepare(preparation_data) File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare file, path_name, etc = imp.find_module(main_name, dirs) ImportError: No module named Traceback (most recent call last): File "", line 1, in File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main prepare(preparation_data) File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare file, path_name, etc = imp.find_module(main_name, dirs) ImportError: No module named Traceback (most recent call last): File "", line 1, in File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main prepare(preparation_data) File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare file, path_name, etc = imp.find_module(main_name, dirs) 
ImportError: No module named test test_multiprocessing crashed -- : My command line was \Apps\Python30\python.exe -c "import sys; sys.argv = ['', 'test_pkg', 'test_pydoc', 'test_shlex', 'test_pep263', 'test_distutils', 'test_lib2to3', 'test_pep3120', 'test_import']; from test.regrtest import main; main(exclude=True)" Paul. From brett at python.org Thu Jan 8 22:00:07 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 13:00:07 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <4966638D.3010703@gmail.com> References: <4966638D.3010703@gmail.com> Message-ID: On Thu, Jan 8, 2009 at 12:35, Nick Coghlan wrote: > Brett Cannon wrote: >> One, does anyone have issues if I check in importlib? We have >> typically said code has to have been selected as best-of-breed by the >> community first, so I realize I am asking for a waiver on this one. > > That rule has never really applied to things that are part of the > interpreter itself though (how could it?). > Well, it's not part of the interpreter yet. That can be viewed as a separate step. > My main question would be how it relates to the existing import > machinery emulation in pkgutil. If adding importlib lets us largely drop > that emulation (instead replacing it with invocations of importlib), > then that seems like a big win to me. You mean stuff like pkgutil.ImpImporter? importlib will be fully modular with meta_path importers for everything short of sys.modules (and even that could be done if people care, but I would rather keep sys.modules stuff on the fast path). So there will be a meta_path importer that handles sys.path/sys.path_hooks/sys.path_importer_cache. That work is part of the "importlib is semantically done, but there are some things I want to improve" todo list. If you are more talking about something else I am not sure what you are after. Every module will have a proper loader with importlib.
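Brett's picture of a fully modular import system, meta_path importers in front and a loader attached to every module, can be illustrated with the PEP 302 find_module/load_module protocol in use at the time. This is a hedged toy, not importlib's actual code: `DictImporter` and `toy_mod` are made-up names, and the source comes from an in-memory table rather than from sys.path:

```python
import sys
import types

class DictImporter:
    """Toy PEP 302 meta_path importer: serves modules from an
    in-memory table (hypothetical name, not importlib's code)."""

    def __init__(self, source_by_name):
        self.source_by_name = source_by_name

    def find_module(self, fullname, path=None):
        # Return a loader (here: ourselves), or None to let the
        # next entry on sys.meta_path have a try.
        return self if fullname in self.source_by_name else None

    def load_module(self, fullname):
        # PEP 302 requires reusing an existing sys.modules entry.
        if fullname in sys.modules:
            return sys.modules[fullname]
        module = types.ModuleType(fullname)
        module.__loader__ = self  # every module gets a proper loader
        sys.modules[fullname] = module
        exec(self.source_by_name[fullname], module.__dict__)
        return module

finder = DictImporter({"toy_mod": "ANSWER = 42"})
loader = finder.find_module("toy_mod")
module = loader.load_module("toy_mod")
print(module.ANSWER)  # -> 42
```

A real meta_path importer for sys.path entries would additionally consult sys.path_hooks and cache the per-entry importer in sys.path_importer_cache, which is the division of labor Brett describes.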
-Brett From brett at python.org Thu Jan 8 22:02:28 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 13:02:28 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <79990c6b0901081257q4c949913s91dc7f874783e876@mail.gmail.com> References: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> <79990c6b0901081257q4c949913s91dc7f874783e876@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 12:57, Paul Moore wrote: > 2009/1/8 Brett Cannon : >> Thanks, Paul! I changed it to _os.getcwd() since that's what nt exposes. > > Ta. I wasn't sure _os.getcwd() returned a full pathname. > > The only difference between the importlib results and the normal ones > seems to be that with importlib, test_multiprocessing is skipped, > whereas with the normal import, it fails. The importlib result is > > test_multiprocessing skipped -- No module named test.test_support > Well, that should fail since test.test_support doesn't exist in Python 3.0. > where the normal result is > > test_multiprocessing > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = 
imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > test test_multiprocessing crashed -- : > > My command line was > > \Apps\Python30\python.exe -c "import sys; sys.argv = ['', > 'test_pkg', 'test_pydoc', 'test_shlex', 'test_pep263', > 'test_distutils', 'test_lib2to3', 'test_pep3120', 'test_import']; from > test.regrtest import main; main(exclude=True)" > This looks like test_multiprocessing tries to snag some detail about what module __main__ is based on the first value of sys.argv. -Brett From martin at v.loewis.de Thu Jan 8 22:07:03 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 08 Jan 2009 22:07:03 +0100 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: References: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> Message-ID: <49666AF7.9010804@v.loewis.de> > i'd just ... much rather be completely independent of proprietary > software when it comes to building free software. I guess my question is then: why do you want to use Windows in the first place?
Regards, Martin From martin at v.loewis.de Thu Jan 8 22:10:20 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 08 Jan 2009 22:10:20 +0100 Subject: [Python-Dev] error in doc for fcntl module In-Reply-To: <3c6c07c20901080905t7427cbb3k7e357617cc6d54a1@mail.gmail.com> References: <3c6c07c20901071331x3ebf14f9u3abd8eab7e736f12@mail.gmail.com> <3c6c07c20901080905t7427cbb3k7e357617cc6d54a1@mail.gmail.com> Message-ID: <49666BBC.80309@v.loewis.de> > Is there any reason not to change this? Apart from the effort it makes to talk about it, and to review and apply the patch? No. Regards, Martin P.S. You really do need to trust that the system calls get forwarded by Python to the system as-is, with no additional trickery. If there is additional trickery, it has a different name. From jnoller at gmail.com Thu Jan 8 22:12:33 2009 From: jnoller at gmail.com (Jesse Noller) Date: Thu, 8 Jan 2009 16:12:33 -0500 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <79990c6b0901081257q4c949913s91dc7f874783e876@mail.gmail.com> References: <79990c6b0901081126u2d6154c4qa12550402461f045@mail.gmail.com> <79990c6b0901081257q4c949913s91dc7f874783e876@mail.gmail.com> Message-ID: <4222a8490901081312i681be9fbqdce3167c56c5d8f9@mail.gmail.com> On Thu, Jan 8, 2009 at 3:57 PM, Paul Moore wrote: > 2009/1/8 Brett Cannon : >> Thanks, Paul! I changed it to _os.getcwd() since that's what nt exposes. > > Ta. I wasn't sure _os.getcwd() returned a full pathname. > > The only difference between the importlib results and the normal ones > seems to be that with importlib, test_multiprocessing is skipped, > whereas with the normal import, it fails. The importlib result is > Zuh? > test_multiprocessing skipped -- No module named test.test_support > This isn't occurring on the build bots. 
> where the normal result is > > test_multiprocessing > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > Traceback (most recent call last): > File "", line 1, in > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 342, in main > prepare(preparation_data) > File "C:\Apps\Python30\lib\multiprocessing\forking.py", line 451, in prepare > file, path_name, etc = imp.find_module(main_name, dirs) > ImportError: No module named > test test_multiprocessing crashed -- : > > My command line was > > \Apps\Python30\python.exe -c "import sys; sys.argv = ['', > 'test_pkg', 'test_pydoc', 'test_shlex', 'test_pep263', > 'test_distutils', 'test_lib2to3', 'test_pep3120', 'test_import']; from > test.regrtest import main; 
main(exclude=True)" This shouldn't be happening (obviously) and doesn't seem to be occurring on the buildbots. From solipsis at pitrou.net Thu Jan 8 22:14:12 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 8 Jan 2009 21:14:12 +0000 (UTC) Subject: [Python-Dev] Getting importlib into the standard library for 3.1 References: Message-ID: Brett Cannon python.org> writes: > > One, does anyone have issues if I check in importlib? We have > typically said code has to have been selected as best-of-breed by the > community first, so I realize I am asking for a waiver on this one. Have you tried to assess its interaction with setuptools? (somebody has done a patch to port setuptools to 3.x, see http://mail.python.org/pipermail/distutils-sig/2009-January/010659.html) I'm not saying that it's a showstopper btw, just trying to suggest experimentation possibilities. Regards Antoine. From ncoghlan at gmail.com Thu Jan 8 22:21:06 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 09 Jan 2009 07:21:06 +1000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: <4966638D.3010703@gmail.com> Message-ID: <49666E42.8050802@gmail.com> Brett Cannon wrote: > On Thu, Jan 8, 2009 at 12:35, Nick Coghlan wrote: >> Brett Cannon wrote: >>> One, does anyone have issues if I check in importlib? We have >>> typically said code has to have been selected as best-of-breed by the >>> community first, so I realize I am asking for a waiver on this one. >> That rule has never really applied to things that are part of the >> interpreter itself though (how could it?). > > Well, it's not part of the interpreter yet. That can be viewed as a > separate step. True, but what you're doing here can be viewed as the continuation of the original implementation plan for PEP 302 - it was always intended that every module would eventually get a __loader__ attribute, and that's one of the things an import system based on your importlib would provide. 
>> My main question would be how it relates to the existing import >> machinery emulation in pkgutil. If adding importlib lets us largely drop >> that emulation (instead replacing it with invocations of importlib), >> then that seems like a big win to me. > > You mean stuff like pkgutil.ImpImporter? importlib will be fully > modular with meta_path importers for everything short of sys.modules > (and even that could be done if people care, but I would rather keep > sys.modules stuff on the fast path). So there will be a meta_path > importer that handles sys.path/sys.path_hooks/sys.path_importer_cache. > That work is part of the "importlib is semantically done, but there > are some things I want to improve" todo list. If you are more talking > about something else I am not sure what you are after. Every module > will have a proper loader with importlib. I'm talking about the fact that imp.get_loader doesn't exist, hence the existence of pkgutil.get_loader and its supporting machinery. My question is whether or not it is possible to replace the current emulation code in pkgutil with appropriate imports from importlib and thus get rid of the current semantic differences that exist between the real import machinery and the pkgutil emulation (mainly in the area of non-PEP 302 module loaders, such as the special case for the Windows directory information). Upgrading the pkgutil interface to match the *actual* semantics of the import system instead of only approximating them would be a decent win in its own right, even if there turn out to be other issues that keep us from switching to importlib as the sole import mechanism. Cheers, Nick.
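The replacement Nick is after reduces, at its core, to querying the real finders instead of re-implementing the C import logic by hand. A minimal sketch (`get_loader_sketch` and `FakeFinder` are hypothetical names; the real pkgutil.get_loader of this era also checks sys.modules and carries a path-hook emulation for plain sys.path entries):

```python
def get_loader_sketch(fullname, finders):
    """Ask each PEP 302 finder for a loader; the first hit wins.
    Hypothetical helper, not the actual pkgutil/importlib API."""
    for finder in finders:
        loader = finder.find_module(fullname)
        if loader is not None:
            return loader
    return None

class FakeFinder:
    """Stand-in for an entry on sys.meta_path."""
    def find_module(self, fullname, path=None):
        if fullname == "spam":
            return "loader-for-spam"
        return None

print(get_loader_sketch("spam", [FakeFinder()]))  # -> loader-for-spam
print(get_loader_sketch("eggs", [FakeFinder()]))  # -> None
```

Because such a helper asks the same finders the import statement itself would use, it cannot drift out of sync with the import system the way a hand-written emulation can.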
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From brett at python.org Thu Jan 8 22:24:36 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 13:24:36 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: Message-ID: On Thu, Jan 8, 2009 at 13:14, Antoine Pitrou wrote: > Brett Cannon python.org> writes: >> >> One, does anyone have issues if I check in importlib? We have >> typically said code has to have been selected as best-of-breed by the >> community first, so I realize I am asking for a waiver on this one. > > Have you tried to assess its interaction with setuptools? > (somebody has done a patch to port setuptools to 3.x, see > http://mail.python.org/pipermail/distutils-sig/2009-January/010659.html) > > I'm not saying that it's a showstopper btw, just trying to suggest > experimentation possibilities. Beyond Python's standard library, nope; I am not a setuptools user. -Brett From r.schwebel at pengutronix.de Thu Jan 8 22:20:15 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:15 +0100 Subject: [Python-Dev] [patch 8/8] hand --host and --build over to libffi References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212008.091012788@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-libff-config_args.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:09 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:09 +0100 Subject: [Python-Dev] [patch 2/8] add _FOR_BUILD infrastructure References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.385357263@pengutronix.de> An embedded and charset-unspecified text was scrubbed... 
Name: Python-3.0rc2-for-build.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:14 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:14 +0100 Subject: [Python-Dev] [patch 7/8] make setup.py cross compilation aware References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.972660690@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-setup-py.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:13 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:13 +0100 Subject: [Python-Dev] [patch 6/8] fix lchflags test References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.856344450@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-lchflags.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:08 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:08 +0100 Subject: [Python-Dev] [patch 1/8] distutils need to care about cross compiling References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.269226909@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-so-from-env.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:12 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:12 +0100 Subject: [Python-Dev] [patch 5/8] fix chflags test References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.738705620@pengutronix.de> An embedded and charset-unspecified text was scrubbed... 
Name: Python-3.0rc2-chflags.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:11 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:11 +0100 Subject: [Python-Dev] [patch 4/8] configure.in fixes References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.621194189@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-cross-configure-in.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:10 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:10 +0100 Subject: [Python-Dev] [patch 3/8] add readme References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.502709469@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-cross-readme.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:07 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:07 +0100 Subject: [Python-Dev] [patch 0/8] [RFC] cross compiling python 3.0 Message-ID: <20090108212007.094443645@pengutronix.de> Embedded people have cross compiled python for quite some time now, with more or less success. These activities have taken place in various embedded build systems, such as PTXdist, OpenEmbedded and others. I suppose instead of wasting the time over and over again, without proper review by the Python core developers, I would like to find out if it is possible to get cross compilation support integrated in the upstream tree. This patch series reflects what we currently have in PTXdist. Please see it as an RFC. It is probably not perfect yet, but I would like to see some feedback from you Python guys out there. Do you see issues with these patches? Would it be possible in general to get something similar to this series into the Python mainline? Robert -- Pengutronix e.K. | Dipl.-Ing. 
Robert Schwebel | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | From solipsis at pitrou.net Thu Jan 8 22:48:58 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 8 Jan 2009 21:48:58 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5Bpatch_1/8=5D_distutils_need_to_care_abo?= =?utf-8?q?ut_cross=09compiling?= References: <20090108212007.094443645@pengutronix.de> <20090108212007.269226909@pengutronix.de> Message-ID: Robert Schwebel pengutronix.de> writes: > > If cross compiling it must be possible to overwrite the so_ext from the > outside. Thanks for those patches, but please post them to the issue tracker instead (http://bugs.python.org/). If each patch is for a distinct purpose, then open separate issues, otherwise please merge the patches into a single one. Antoine. From brett at python.org Thu Jan 8 22:50:21 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 13:50:21 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <49666E42.8050802@gmail.com> References: <4966638D.3010703@gmail.com> <49666E42.8050802@gmail.com> Message-ID: On Thu, Jan 8, 2009 at 13:21, Nick Coghlan wrote: > Brett Cannon wrote: >> On Thu, Jan 8, 2009 at 12:35, Nick Coghlan wrote: >>> Brett Cannon wrote: >>>> One, does anyone have issues if I check in importlib? We have >>>> typically said code has to have been selected as best-of-breed by the >>>> community first, so I realize I am asking for a waiver on this one. >>> That rule has never really applied to things that are part of the >>> interpreter itself though (how could it?). >> >> Well, it's not part of the interpreter yet. That can be viewed as a >> separate step. 
> > True, but what you're doing here can be viewed as the continuation of > the original implementation plan for PEP 302 - it was always intended > that every module would eventually get a __loader__ attribute, and > that's one of the things an import system based on your importlib would > provide. > True. I am just trying to be diplomatic and not force importlib down anyone's throats. =) >>> My main question would be how it relates to the existing import >>> machinery emulation in pkgutil. If adding importlib lets us largely drop >>> that emulation (instead replacing it with invocations of importlib), >>> then that seems like a big win to me. >> >> You mean stuff like pkgutil.ImpImporter? importlib will be fully >> modular with meta_path importers for everything short of sys.modules >> (and even that could be done if people care, but I would rather keep >> sys.modules stuff on the fast path). So there will be a meta_path >> importer that handles sys.path/sys.path_hooks/sys.path_importer_cache. >> That work is part of the "importlib is semantically done, but there >> are some things I want to improve" todo list. If you are more talking >> about something else I am not sure what you are after. Every module >> will have a proper loader with importlib. > > I'm talking about the fact that imp.get_loader doesn't exist, hence the > existence of pkgutil.get_loader and its supporting machinery. > Ah, OK. > My question is whether or not it is possible to replace the current > emulation code in pkgutil with appropriate imports from importlib and > thus get rid of the current semantic differences that exist between the > real import machinery and the pkgutil emulation (mainly in the area of > non-PEP 302 module loaders, such as the special case for the Windows > directory information). > Looking at pkgutil.get_loader it would not be hard to expose the same thing in importlib.
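As a rough sketch of the PEP 302 protocol under discussion, here is a minimal finder/loader that gives its module the __loader__ attribute; the class and module names are illustrative assumptions, not importlib's actual API:

```python
import sys
import types

class ExampleLoader:
    """Minimal sketch of a PEP 302 finder/loader for one made-up module.

    'ExampleLoader' and 'example_virtual_module' are invented names used
    only to illustrate the protocol.
    """

    name = 'example_virtual_module'

    def find_module(self, fullname, path=None):
        # A finder claims a module by returning a loader (here, itself)
        # and declines by returning None.
        return self if fullname == self.name else None

    def load_module(self, fullname):
        # Honour the PEP 302 contract: reuse any cached module in
        # sys.modules, and set __loader__ on the module it creates.
        if fullname in sys.modules:
            return sys.modules[fullname]
        mod = types.ModuleType(fullname)
        mod.__loader__ = self  # the attribute every module should end up with
        sys.modules[fullname] = mod
        return mod
```

A finder of this shape would normally be placed on sys.meta_path, after which a plain import statement for that module name routes through it.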
Since one of the motivating factors in this is to redo the import machinery from a PEP 302 standpoint, the need for pkgutil should quickly go away or easily shift to importlib as needed. It's just a matter of exposing an API. -Brett From p.f.moore at gmail.com Thu Jan 8 23:40:35 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 8 Jan 2009 22:40:35 +0000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: <49666E42.8050802@gmail.com> References: <4966638D.3010703@gmail.com> <49666E42.8050802@gmail.com> Message-ID: <79990c6b0901081440y1cc8ae19t9fc6b0eddb5a6be@mail.gmail.com> 2009/1/8 Nick Coghlan : >> Well, it's not part of the interpreter yet. That can be viewed as a >> separate step. > > True, but what you're doing here can be viewed as the continuation of > the original implementation plan for PEP 302 - it was always intended > that every module would eventually get a __loader__ attribute, and > that's one of the things an import system based on your importlib would > provide. FWIW, this is certainly the ultimate direction *I* intended PEP 302 to take. At the time the key deliverable (which was provided by Just) was the zip importer - but we intended the mechanism to be applicable for everything, and ultimately to be *used* for everything. Sadly, neither Just nor I ever took that extra step - and I'm extremely happy that Brett has now done so. Paul. From tseaver at palladion.com Fri Jan 9 00:11:21 2009 From: tseaver at palladion.com (Tres Seaver) Date: Thu, 08 Jan 2009 18:11:21 -0500 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: <49666AF7.9010804@v.loewis.de> References: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> <49666AF7.9010804@v.loewis.de> Message-ID: <49668819.6000404@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Martin v. Löwis wrote: >> i'd just ... much rather be completely independent of proprietary >> software when it comes to building free software.
> > I guess my question is then: why do you want to use Windows in the > first place? My guess is that Luke wants to cross-compile bdist-win distributions for the benefit of tool-deprived Windows users. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD4DBQFJZogZ+gerLs4ltQ4RAoJoAJjw/vCaCo5yTtErbvhx1pndac/kAJ9ttT+d qtLscKp1Imf2pRFtKE+Wsg== =JqqK -----END PGP SIGNATURE----- From aahz at pythoncraft.com Fri Jan 9 02:30:25 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 8 Jan 2009 17:30:25 -0800 Subject: [Python-Dev] OSCON 2009: Call For Participation Message-ID: <20090109013025.GA25001@panix.com> The O'Reilly Open Source Convention has opened up the Call For Participation -- deadline for proposals is Tuesday Feb 3. OSCON will be held July 20-24 in San Jose, California. For more information, see http://conferences.oreilly.com/oscon http://en.oreilly.com/oscon2009/public/cfp/57 -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W. 
Kernighan From benjamin at python.org Fri Jan 9 02:31:12 2009 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 8 Jan 2009 19:31:12 -0600 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: References: <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> <4966658D.8060206@gmail.com> Message-ID: <1afaf6160901081731x39042c73t70f60e6e6daa3c0c@mail.gmail.com> On Thu, Jan 8, 2009 at 2:52 PM, Brett Cannon wrote: > On Thu, Jan 8, 2009 at 12:43, Nick Coghlan wrote: >> >> Even if we do adopt such a rule, C patches posted to the tracker should >> still try to avoid including pure whitespace changes though - leaving >> the whitespace changes in the patch tends to lead to patches that look >> like "remove function body, add different function body" when only a >> couple of lines have actually had significant changes. >> > > That's fine with me. Correcting whitespace can be considered a committer's job. Maybe a rule could be added to Tools/scripts/patchcheck.py? 
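For illustration, one possible shape for such a rule; this is a hypothetical sketch with a made-up function name, not the actual patchcheck.py code:

```python
def report_whitespace_issues(path):
    """Sketch of a whitespace rule patchcheck.py could apply to C files.

    Flags trailing whitespace and a space immediately followed by a tab
    inside the indentation. Purely illustrative.
    """
    problems = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            text = line.rstrip('\n')
            # Anything left after stripping trailing whitespace differs?
            if text != text.rstrip():
                problems.append((lineno, 'trailing whitespace'))
            # Leading whitespace of the line.
            indent = text[:len(text) - len(text.lstrip())]
            if ' \t' in indent:
                problems.append((lineno, 'space before tab in indentation'))
    return problems
```

A committer could run this over the .c and .h files touched by a patch and report the (line, problem) pairs before committing.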
-- Regards, Benjamin From brett at python.org Fri Jan 9 02:39:51 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 17:39:51 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <1afaf6160901081731x39042c73t70f60e6e6daa3c0c@mail.gmail.com> References: <43aa6ff70901071601h6b73868cl10ce23312f29af84@mail.gmail.com> <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> <4966658D.8060206@gmail.com> <1afaf6160901081731x39042c73t70f60e6e6daa3c0c@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 17:31, Benjamin Peterson wrote: > On Thu, Jan 8, 2009 at 2:52 PM, Brett Cannon wrote: >> On Thu, Jan 8, 2009 at 12:43, Nick Coghlan wrote: >>> >>> Even if we do adopt such a rule, C patches posted to the tracker should >>> still try to avoid including pure whitespace changes though - leaving >>> the whitespace changes in the patch tends to lead to patches that look >>> like "remove function body, add different function body" when only a >>> couple of lines have actually had significant changes. >>> >> >> That's fine with me. Correcting whitespace can be considered a committer's job. > > Maybe a rule could be added to Tools/scripts/patchcheck.py? To do what? Re-indent automatically? Or notify the person that there seems to be a need to re-indent some code? 
-Brett From benjamin at python.org Fri Jan 9 02:42:32 2009 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 8 Jan 2009 19:42:32 -0600 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: References: <4965CC05.1070105@egenix.com> <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> <4966658D.8060206@gmail.com> <1afaf6160901081731x39042c73t70f60e6e6daa3c0c@mail.gmail.com> Message-ID: <1afaf6160901081742w326ec641h7fc34051a7a4f80f@mail.gmail.com> On Thu, Jan 8, 2009 at 7:39 PM, Brett Cannon wrote: > On Thu, Jan 8, 2009 at 17:31, Benjamin Peterson wrote: >> On Thu, Jan 8, 2009 at 2:52 PM, Brett Cannon wrote: >>> On Thu, Jan 8, 2009 at 12:43, Nick Coghlan wrote: >>>> >>>> Even if we do adopt such a rule, C patches posted to the tracker should >>>> still try to avoid including pure whitespace changes though - leaving >>>> the whitespace changes in the patch tends to lead to patches that look >>>> like "remove function body, add different function body" when only a >>>> couple of lines have actually had significant changes. >>>> >>> >>> That's fine with me. Correcting whitespace can be considered a committer's job. >> >> Maybe a rule could be added to Tools/scripts/patchcheck.py? > > To do what? Re-indent automatically? Or notify the person that there > seems to be a need to re-indent some code? I was thinking about notifying the person that their indentation was wrong or they had trailing whitespace. Fixing it is bonus. :) -- Regards, Benjamin From skip at pobox.com Fri Jan 9 05:53:53 2009 From: skip at pobox.com (skip at pobox.com) Date: Thu, 8 Jan 2009 22:53:53 -0600 Subject: [Python-Dev] Needless assert in ceval.c? Message-ID: <18790.55393.379302.979733@montanaro.dyndns.org> I realize assert() is compiled out except in debug builds, but the assert in the while loop following the fast_block_end label in ceval.c seems misleading. 
It looks like it should be hoisted out of the loop and only checked before entering the loop. There are no jumps into the loop. why is not assigned WHY_YIELD within the loop. If you assert before the loop once I think that will be sufficient and more clearly state the intent.

    fast_block_end:
        assert(why != WHY_YIELD);  /* move the assert here */
        while (why != WHY_NOT && f->f_iblock > 0) {
            PyTryBlock *b = PyFrame_BlockPop(f);
            /* get rid of the assert here */
            if (b->b_type == SETUP_LOOP && why == WHY_CONTINUE) {
                ...
            }
            ...
        }
        /* unwind stack */

Reported to the tracker: http://bugs.python.org/issue4888

Skip From brett at python.org Fri Jan 9 07:13:08 2009 From: brett at python.org (Brett Cannon) Date: Thu, 8 Jan 2009 22:13:08 -0800 Subject: [Python-Dev] Fixing incorrect indentations in C files (Decoder functions accept str in py3k) In-Reply-To: <1afaf6160901081742w326ec641h7fc34051a7a4f80f@mail.gmail.com> References: <7A5B31D35B0345DF8B97A52BBBF4642E@RaymondLaptop1> <4966658D.8060206@gmail.com> <1afaf6160901081731x39042c73t70f60e6e6daa3c0c@mail.gmail.com> <1afaf6160901081742w326ec641h7fc34051a7a4f80f@mail.gmail.com> Message-ID: On Thu, Jan 8, 2009 at 17:42, Benjamin Peterson wrote: > On Thu, Jan 8, 2009 at 7:39 PM, Brett Cannon wrote: >> On Thu, Jan 8, 2009 at 17:31, Benjamin Peterson wrote: >>> On Thu, Jan 8, 2009 at 2:52 PM, Brett Cannon wrote: >>>> On Thu, Jan 8, 2009 at 12:43, Nick Coghlan wrote: >>>>> >>>>> Even if we do adopt such a rule, C patches posted to the tracker should >>>>> still try to avoid including pure whitespace changes though - leaving >>>>> the whitespace changes in the patch tends to lead to patches that look >>>>> like "remove function body, add different function body" when only a >>>>> couple of lines have actually had significant changes. >>>>> >>>> >>>> That's fine with me. Correcting whitespace can be considered a committer's job. >>> >>> Maybe a rule could be added to Tools/scripts/patchcheck.py? >> >> To do what? Re-indent automatically?
Or notify the person that there >> seems to be a need to re-indent some code? > > I was thinking about notifying the person that their indentation was > wrong or they had trailing whitespace. Fixing it is bonus. :) Supporting C and header files was a plan of mine from the beginning. We will see when I get to it. =) -Brett From v+python at g.nevcal.com Fri Jan 9 09:07:05 2009 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 09 Jan 2009 00:07:05 -0800 Subject: [Python-Dev] http://bugs.python.org/issue3628 Message-ID: <496705A9.5000709@g.nevcal.com> I'm getting an error similar to that in http://bugs.python.org/issue3628 when I try to run python2.6 and cherrypy 3.1.1. I'm too new to see any connection between the symptom and the cure described in the above issue... I'd guess that somehow threads imply an extra parameter? It also seems that the SetDaemon call simply does what the replacement code does, so I don't understand how the fix fixes anything, much less how it fixes a parameter count in a seemingly unrelated function. In any case, the issue is against 3.0, where it claims to be fixed. I don't know enough about the tracker to find if it was fixed in 2.6 concurrently, but the symptom appears there. I tried hacking all the references I could find to XXX.SetDaemon(True) to XXX.daemon = True but it didn't seem to help. So can you help? -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From r.schwebel at pengutronix.de Thu Jan 8 22:20:07 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:07 +0100 Subject: [Python-Dev] [patch 0/8] [RFC] cross compiling python 3.0 Message-ID: <20090108212007.094443645@pengutronix.de> Embedded people have cross compiled python for quite some time now, with more or less success. 
These activities have taken place in various embedded build systems, such as PTXdist, OpenEmbedded and others. Instead of wasting this effort over and over again, without proper review by the Python core developers, I would like to find out if it is possible to get cross compilation support integrated into the upstream tree. This patch series reflects what we currently have in PTXdist. Please see it as an RFC. It is probably not perfect yet, but I would like to see some feedback from you Python guys out there. Do you see issues with these patches? Would it be possible in general to get something similar to this series into the Python mainline? Robert -- Pengutronix e.K. | Dipl.-Ing. Robert Schwebel | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | From r.schwebel at pengutronix.de Thu Jan 8 22:20:08 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:08 +0100 Subject: [Python-Dev] [patch 1/8] distutils need to care about cross compiling References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.269226909@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-so-from-env.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:09 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:09 +0100 Subject: [Python-Dev] [patch 2/8] add _FOR_BUILD infrastructure References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.385357263@pengutronix.de> An embedded and charset-unspecified text was scrubbed...
Name: Python-3.0rc2-for-build.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:10 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:10 +0100 Subject: [Python-Dev] [patch 3/8] add readme References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.502709469@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-cross-readme.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:11 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:11 +0100 Subject: [Python-Dev] [patch 4/8] configure.in fixes References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.621194189@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-cross-configure-in.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:12 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:12 +0100 Subject: [Python-Dev] [patch 5/8] fix chflags test References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.738705620@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-chflags.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:13 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:13 +0100 Subject: [Python-Dev] [patch 6/8] fix lchflags test References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.856344450@pengutronix.de> An embedded and charset-unspecified text was scrubbed... 
Name: Python-3.0rc2-lchflags.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:14 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:14 +0100 Subject: [Python-Dev] [patch 7/8] make setup.py cross compilation aware References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212007.972660690@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-setup-py.diff URL: From r.schwebel at pengutronix.de Thu Jan 8 22:20:15 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Thu, 08 Jan 2009 22:20:15 +0100 Subject: [Python-Dev] [patch 8/8] hand --host and --build over to libffi References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090108212008.091012788@pengutronix.de> An embedded and charset-unspecified text was scrubbed... Name: Python-3.0rc2-libff-config_args.diff URL: From r.schwebel at pengutronix.de Fri Jan 9 09:24:47 2009 From: r.schwebel at pengutronix.de (Robert Schwebel) Date: Fri, 9 Jan 2009 09:24:47 +0100 Subject: [Python-Dev] [patch 0/8] [RFC] cross compiling python 3.0 In-Reply-To: <20090108212007.094443645@pengutronix.de> References: <20090108212007.094443645@pengutronix.de> Message-ID: <20090109082446.GJ1947@pengutronix.de> Hi Antoine, [sorry for the double post, the mails didn't show up in the archive and my procmail had missing slash at the end of the rule...] > Thanks for those patches, but please post them to the issue tracker instead > (http://bugs.python.org/). If each patch is for a distinct purpose, then open > separate issues, otherwise please merge the patches into a single one. Yup, will do. I suspect that some of the design decisions need discussions; should that also take place in the issue tracker, or here on the mailing list? rsc -- Pengutronix e.K. | | Industrial Linux Solutions | http://www.pengutronix.de/ | Peiner Str. 
6-8, 31137 Hildesheim, Germany | Phone: +49-5121-206917-0 | Amtsgericht Hildesheim, HRA 2686 | Fax: +49-5121-206917-5555 | From ziade.tarek at gmail.com Fri Jan 9 09:30:02 2009 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 9 Jan 2009 09:30:02 +0100 Subject: [Python-Dev] [patch 0/8] [RFC] cross compiling python 3.0 In-Reply-To: <20090109082446.GJ1947@pengutronix.de> References: <20090108212007.094443645@pengutronix.de> <20090109082446.GJ1947@pengutronix.de> Message-ID: <94bdd2610901090030j7b6bcb30jfe49eddbdf7c6ca6@mail.gmail.com> On Fri, Jan 9, 2009 at 9:24 AM, Robert Schwebel wrote: > > Yup, will do. > > I suspect that some of the design decisions need discussions; should > that also take place in the issue tracker, or here on the mailing list? For the distutils part, you can use the distutils mailing list if you wish, http://www.python.org/community/sigs/current/distutils-sig/ Regards Tarek From amauryfa at gmail.com Fri Jan 9 10:41:35 2009 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Fri, 9 Jan 2009 10:41:35 +0100 Subject: [Python-Dev] http://bugs.python.org/issue3628 In-Reply-To: <496705A9.5000709@g.nevcal.com> References: <496705A9.5000709@g.nevcal.com> Message-ID: Hello, On Fri, Jan 9, 2009 at 09:07, Glenn Linderman wrote: > I'm getting an error similar to that in http://bugs.python.org/issue3628 > when I try to run python2.6 and cherrypy 3.1.1. Please use the issue tracker for this. For help with python, you should ask on the comp.lang.python newsgroup. Be prepared to provide the complete error output: I fail to see how a crash in IDLE 3.0b3 would be similar to some problem with cherrypy! 
-- Amaury Forgeot d'Arc From rasky at develer.com Fri Jan 9 10:48:56 2009 From: rasky at develer.com (Giovanni Bajo) Date: Fri, 9 Jan 2009 09:48:56 +0000 (UTC) Subject: [Python-Dev] I would like an svn account References: <200812310155.40206.victor.stinner@haypocalc.com> <85b5c3130901030020q50ba925p7a1bb40abf06b640@mail.gmail.com> <200901031652.56991.victor.stinner@haypocalc.com> <200901031713.05985.doomster@knuut.de> <495F93F4.6080007@v.loewis.de> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <87vdsvv1ri.fsf@xemacs.org> Message-ID: On Sun, 04 Jan 2009 18:50:09 +0900, Stephen J. Turnbull wrote: > "Martin v. Löwis" writes: > > > If "switching to a modern DVCS" means that users now need to start > > compiling their VCS before they can check out Python, > > It doesn't mean that. All of the DVCS contenders have Windows and Mac > OS installers (usually from 3rd parties, but working closely with the > core). I'll notice that git-win32 is totally bad for any serious Windows developers. At least 4 months ago which is the last time I tried it. You'll have a very hard time persuading the experienced Windows developers in this list that git-win32 is a good thing to use. -- Giovanni Bajo Develer S.r.l. http://www.develer.com From lkcl at lkcl.net Fri Jan 9 11:51:13 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 9 Jan 2009 10:51:13 +0000 Subject: [Python-Dev] compiling python2.5 on linux under wine In-Reply-To: <49666AF7.9010804@v.loewis.de> References: <5b8d13220901080511x2fe96845l8ad86e70908f0a3e@mail.gmail.com> <49666AF7.9010804@v.loewis.de> Message-ID: On Thu, Jan 8, 2009 at 9:07 PM, "Martin v. Löwis" wrote: >> i'd just ... much rather be completely independent of proprietary >> software when it comes to building free software. > > I guess my question is then: why do you want to use Windows in the > first place?
ha ha :) the same question was asked when i started the nt domains reverse-engineering for samba, in 1996. the answer is: i don't. but there are a lot of users and developers who feel that they don't have a choice. or haven't been given one. so if it's possible for me, as one of the "under 1% of computer users i.e. linux" to compile stuff that will work on the "over 95% of computers used by everyone else i.e. windows" _and_ i get to stick to free software principles, that's gotta be good.

take pywebkit-gtk as an example. the first-level (and some of the second-level) dependencies for pywebkit-gtk are roughly as follows:

* libstdc++
* cairo, pango, gdk, fontconfig, gtk
* libxml2 (which is dodgy)
* libxslt1 (which is so dodgy and dependent on incompatible versions of libxml2 it can't be compiled on win32)
* libicu38
* libcurl
* libssl
* webkit
* python2.5
* python-gobject
* python-gtk

that's a *big* xxxxing list that comes in at a whopping 40mb of _binaries_. webkit itself comes in at 10mb alone. libicu38 fails _miserably_ to cross-compile with mingw32. i was damn lucky to have beaten it into submission: it took two days and i couldn't run any of the tests, but actually managed to get at least some .libs, .dlls and .a's out of the mess. libxslt1 and libxml2 have compile errors in mutually incompatible versions on win32, plus, unfortunately, the versions that _do_ compile correctly (really old versions like libxslt-1.12 + libxml2-18 or something) are not the ones that can be used on webkit! i had to get the source code for gcc (4.4) because when linking webkit against the MSVC-compiled libicu38 gcc actually segfaulted (!). and that was tracked down to exception handling across process / thread boundaries in libstdc++-6 which had only literally been fixed/patched a few days before i started the monster-compile-process. i tried hunting down python-gobject and python-gtk for win32, but there is a dependency needed before you get to that: python25.lib.
as i mentioned previously i tried hunting down a .lib for python25 but of course that would be useless unless i also have a libtool-compiled .a so there wasn't any point. so, all the hard work that i did cross-compiling up webkit for win32 was completely wasted because python itself could not be compiled on linux for a win32 platform. hence my interest in making sure that it can be. _then_ i can go back and revisit the monster compile process and finally come up with the goods, on win32, on the gobject-based DOM-model manipulation stuff i've added to pywebkit-gtk. i've got linux covered, i've got macosx covered. win32 is the last one. l. From ondrej at certik.cz Fri Jan 9 16:17:14 2009 From: ondrej at certik.cz (Ondrej Certik) Date: Fri, 9 Jan 2009 07:17:14 -0800 Subject: [Python-Dev] I would like an svn account In-Reply-To: References: <200812310155.40206.victor.stinner@haypocalc.com> <495FE410.9060409@v.loewis.de> <495FEB07.3090604@v.loewis.de> <495FF0D7.5000501@v.loewis.de> <87vdsvv1ri.fsf@xemacs.org> Message-ID: <85b5c3130901090717l74b1bc16r8005861acb0d956c@mail.gmail.com> On Fri, Jan 9, 2009 at 1:48 AM, Giovanni Bajo wrote: > On Sun, 04 Jan 2009 18:50:09 +0900, Stephen J. Turnbull wrote: > >> "Martin v. Löwis" writes: >> >> > If "switching to a modern DVCS" means that users now need to start >> > compiling their VCS before they can check out Python, >> >> It doesn't mean that. All of the DVCS contenders have Windows and Mac >> OS installers (usually from 3rd parties, but working closely with the >> core). > > I'll notice that git-win32 is totally bad for any serious Windows > developers. At least 4 months ago which is the last time I tried it. > You'll have a very hard time persauding the experienced Windows > developers in this list that git-win32 is a good thing to use. Do you mean this one: http://code.google.com/p/msysgit/ or is git-win32 something else?
Ondrej From status at bugs.python.org Fri Jan 9 18:06:45 2009 From: status at bugs.python.org (Python tracker) Date: Fri, 9 Jan 2009 18:06:45 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20090109170645.9337578530@psf.upfronthosting.co.za>

ACTIVITY SUMMARY (01/02/09 - 01/09/09)
Python tracker at http://bugs.python.org/

To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message.

2304 open (+65) / 14418 closed (+24) / 16722 total (+89)
Open issues with patches: 797
Average duration of open issues: 698 days.
Median duration of open issues: 1 days.

Open Issues Breakdown
  open     2279 (+65)
  pending    25 ( +0)

Issues Created Or Reopened (90)
_______________________________

wsgiref package totally broken 01/03/09 http://bugs.python.org/issue4718 reopened pitrou patch
invalid reST markup in several documents 01/02/09 CLOSED http://bugs.python.org/issue4811 created gagenellina patch
Junk in the decimals namespace 01/02/09 CLOSED http://bugs.python.org/issue4812 created rhettinger
On OS-X the directories searched by setup.py for Tk are in the w 01/02/09 http://bugs.python.org/issue4813 created MLModel
ftplib does not honour "timeout" parameter for active data conne 01/03/09 http://bugs.python.org/issue4814 created giampaolo.rodola patch
idle 3.1a1 utf8 01/03/09 http://bugs.python.org/issue4815 created geon patch, needs review
Patch of itertools.{combinations,permutations} for empty combina 01/03/09 CLOSED http://bugs.python.org/issue4816 created TFinley patch
PyOS_GetLastModificationTime is unused 01/03/09 CLOSED http://bugs.python.org/issue4817 created eckhardt patch
Patch for thread-support in md5module.c 01/03/09 http://bugs.python.org/issue4818 created ebfe patch
Misc/cheatsheet needs updating 01/03/09 http://bugs.python.org/issue4819 created marketdickinson
ctypes.util.find_library incorrectly documented 01/03/09 http://bugs.python.org/issue4820 created beazley
Patches for thread-support in built-in SHA modules 01/03/09 http://bugs.python.org/issue4821 created ebfe patch
Fix indentation in memoryobject.c 01/03/09 CLOSED http://bugs.python.org/issue4822 created pitrou patch
idle height and place 01/03/09 CLOSED http://bugs.python.org/issue4823 created geon
test_cmd_line failure on Mac OS X for py3k 01/03/09 CLOSED http://bugs.python.org/issue4824 created skip.montanaro
TypeError with complex.real() and complex.imag() 01/04/09 CLOSED http://bugs.python.org/issue4825 created MagnetoHydroDynamics
exec() docstring bug about file objects 01/04/09 CLOSED http://bugs.python.org/issue4826 created xverify
optparse: Callback example 1 is confusing 01/04/09 http://bugs.python.org/issue4827 created jkankiewicz
patch suggestion for webbrowser 01/04/09 http://bugs.python.org/issue4828 created Yinon patch
confusing error for file("foo", "w++") 01/04/09 http://bugs.python.org/issue4829 created eckhardt patch
regrtest.py -u largefile test_io fails on OS X 10.5.6 01/04/09 CLOSED http://bugs.python.org/issue4830 created ironsmith
exec() behavior - revisited 01/04/09 http://bugs.python.org/issue4831 created beazley
idle filename extension 01/04/09 http://bugs.python.org/issue4832 created geon
Explicit directories for zipfiles 01/04/09 http://bugs.python.org/issue4833 created schuppenies
Trouble configuring with icc on Mac OS X 10.5 01/04/09 http://bugs.python.org/issue4834 created skip.montanaro
SIZEOF_SOCKET_T not defined 01/04/09 http://bugs.python.org/issue4835 created skip.montanaro
Idle Hangs on exit Button 01/04/09 CLOSED http://bugs.python.org/issue4836 created skillybob
Omits MACHINE and DEBUG when building tix8.4.3 01/04/09 http://bugs.python.org/issue4837 created ocean-city patch
md_state is not released 01/04/09 http://bugs.python.org/issue4838 created loewis
Reminder: Please Respond to Manas's Invitation 01/05/09 CLOSED http://bugs.python.org/issue4839 created gravitywarrior1
Compile dbm in Ubuntu 01/05/09 CLOSED http://bugs.python.org/issue4840 created cwhan
io's close() not handling errors correctly 01/05/09 http://bugs.python.org/issue4841 created eckhardt patch
int('3L') still valid in Python 3.0 01/05/09 http://bugs.python.org/issue4842 created marketdickinson patch
make distutils use shutil 01/05/09 http://bugs.python.org/issue4843 created tarek
ZipFile doesn't range check in _EndRecData() 01/05/09 http://bugs.python.org/issue4844 created ymgve patch
warnings system and inspect module disagree about 01/05/09 CLOSED http://bugs.python.org/issue4845 created exarkun
Py_UNICODE_ISSPACE causes linker error 01/05/09 CLOSED http://bugs.python.org/issue4846 created ishimoto 26backport
csv fails when file is opened in binary mode 01/05/09 http://bugs.python.org/issue4847 created jaywalker patch
MacPython build script uses Carbon and MacOS modules slated for 01/05/09 http://bugs.python.org/issue4848 created janssen
instantiating and populating xml.dom.minidom.Element is cumberso 01/05/09 http://bugs.python.org/issue4849 created exarkun
Change type and add _Py_ prefix to COUNT_ALLOCS variables 01/05/09 CLOSED http://bugs.python.org/issue4850 created belopolsky patch
xml.dom.minidom.Element.cloneNode fails with AttributeError 01/05/09 http://bugs.python.org/issue4851 created exarkun
Cleanup old stuff from pythread.h 01/05/09 http://bugs.python.org/issue4852 created amaury.forgeotdarc patch, patch, needs review
I/O operation on closed socket: improve the error message 01/06/09 http://bugs.python.org/issue4853 created haypo patch
gnu_get_libc_version() returns bad number on Ubuntu 64 bits 01/06/09 CLOSED http://bugs.python.org/issue4854 created smartini
Popen(..., shell=True,...) should allow simple access to the com 01/06/09 CLOSED http://bugs.python.org/issue4855 created wolfy
Remove checks for win NT 01/06/09 http://bugs.python.org/issue4856 created eckhardt patch
syntax: no unpacking in augassign 01/06/09 http://bugs.python.org/issue4857 created jura05
Deprecation of MD5 01/06/09 CLOSED http://bugs.python.org/issue4858 created ebfe
pwd, spwd, grp functions vulnerable to denial of service 01/06/09 http://bugs.python.org/issue4859 created baikie patch
js_output wrong for cookies with " characters 01/06/09 http://bugs.python.org/issue4860 created noufal patch
fix problems with ctypes.util.find_library 01/06/09 http://bugs.python.org/issue4861 created doko patch, patch
utf-16 BOM is not skipped after seek(0) 01/07/09 http://bugs.python.org/issue4862 created amaury.forgeotdarc patch
deprecate/delete distutils.mwerkscompiler... 01/07/09 http://bugs.python.org/issue4863 created skip.montanaro
test_msvc9compiler fails on VC6 01/07/09 CLOSED http://bugs.python.org/issue4864 created ocean-city patch, easy
system wide site-packages dir not used on Mac OS X 01/07/09 http://bugs.python.org/issue4865 created kapet patch
Code to remove in parsetok?
01/07/09 CLOSED http://bugs.python.org/issue4866 created ganderson crash in ctypes when passing a string to a function without defi 01/07/09 CLOSED http://bugs.python.org/issue4867 created jice Faster utf-8 decoding 01/08/09 http://bugs.python.org/issue4868 reopened pitrou patch random.expovariate(0.0) 01/07/09 CLOSED http://bugs.python.org/issue4869 created kbriggs ssl module is missing SSL_OP_NO_SSLv2 01/07/09 http://bugs.python.org/issue4870 created giampaolo.rodola zipfile can't decrypt 01/07/09 http://bugs.python.org/issue4871 created gladed Python will not co-exist with MFC (memory leak) 01/07/09 http://bugs.python.org/issue4872 created nqiang Refcount error and file descriptor leaks in pwd, grp modules 01/07/09 http://bugs.python.org/issue4873 created baikie patch decoding functions in _codecs module accept str arguments 01/07/09 http://bugs.python.org/issue4874 created pitrou patch find_library can return directories instead of files 01/08/09 http://bugs.python.org/issue4875 created rfk Incorrect detection of module as local 01/08/09 CLOSED http://bugs.python.org/issue4876 created loewis xml.parsers.expat ParseFile() causes segmentation fault when pas 01/08/09 http://bugs.python.org/issue4877 created showard patch post installer script's message is not shown to user with bdist_ 01/08/09 http://bugs.python.org/issue4878 created rantanen Allow buffering for HTTPResponse 01/08/09 http://bugs.python.org/issue4879 created krisvale patch, patch PyInt_FromSsize_t LONG_MIN and LONG_MAX typecasts needed 01/08/09 http://bugs.python.org/issue4880 created lkcl Python's timezon handling: daylight saving option 01/08/09 http://bugs.python.org/issue4881 created earendili510 Behavior of backreferences to named groups in regular expression 01/08/09 http://bugs.python.org/issue4882 created aresnick Compiling python 2.5.2 under Wine on linux. 
01/08/09 http://bugs.python.org/issue4883 created lkcl Work around gethostbyaddr_r bug 01/08/09 http://bugs.python.org/issue4884 created jyasskin patch mmap enhancement request 01/08/09 http://bugs.python.org/issue4885 created ndbecker test/regrtest.py contains error on __import__ 01/08/09 http://bugs.python.org/issue4886 created msyang environment inspection and manipulation API is buggy, inconsiste 01/08/09 http://bugs.python.org/issue4887 created exarkun misplaced (or misleading) assert in ceval.c 01/09/09 http://bugs.python.org/issue4888 created skip.montanaro patch difflib 01/09/09 http://bugs.python.org/issue4889 created pratik.potnis handling empty text search pattern in tkinter 01/09/09 http://bugs.python.org/issue4890 created mkiever patch formatwarning function signature change breaks code 01/09/09 http://bugs.python.org/issue4891 created v+python Sending Connection-objects over multiprocessing connections fail 01/09/09 http://bugs.python.org/issue4892 created gsson Use separate thread support code under MS Windows CE 01/09/09 http://bugs.python.org/issue4893 created eckhardt patch documentaion doesn't include parameter in urllib.request.HTTPRed 01/09/09 http://bugs.python.org/issue4894 created mroman Missing strdup() under MS Windows CE 01/09/09 http://bugs.python.org/issue4895 created eckhardt patch Faster why variable manipulation in ceval.c 01/09/09 http://bugs.python.org/issue4896 created skip.montanaro PyIter_Next documentation inconsistent with implementation 01/09/09 http://bugs.python.org/issue4897 created garcia {context,unified}_diff add spurious trailing whitespace if fromf 01/09/09 http://bugs.python.org/issue4898 created dato patch doctest should support fixtures 01/09/09 http://bugs.python.org/issue4899 created dalloliogm Issues Now Closed (49) ______________________ Add PendingDeprecationWarning for % formatting 245 days http://bugs.python.org/issue2772 rhettinger patch, patch IDLE 3.0a5 cannot handle UTF-8 236 days 
http://bugs.python.org/issue2827 loewis Remove module level functions in _tkinter that depend on TkappOb 136 days http://bugs.python.org/issue3638 haypo patch UNICODE macro in cPickle conflicts with Windows define 88 days http://bugs.python.org/issue4051 loewis patch Use WCHAR variant of OutputDebugString 86 days http://bugs.python.org/issue4075 loewis patch set timestamp in gzip stream 59 days http://bugs.python.org/issue4272 pitrou patch make the storage of the password optional in .pypirc (using the 46 days http://bugs.python.org/issue4394 tarek patch slicing of memoryviews when itemsize != 1 is wrong 27 days http://bugs.python.org/issue4580 pitrou patch Document PyModule_Create() 26 days http://bugs.python.org/issue4614 georg.brandl patch, needs review de-duping function in itertools 24 days http://bugs.python.org/issue4615 rhettinger Patch to make zlib-objects better support threads 9 days http://bugs.python.org/issue4738 haypo patch ftplib.retrlines('LIST') hangs at the end of listing (SocketIO.c 6 days http://bugs.python.org/issue4791 haypo patch Glossary incorrectly describes a decorator as "merely syntactic 2 days http://bugs.python.org/issue4793 tjreedy garbage collector blocks and takes worst-case linear time wrt nu 6 days http://bugs.python.org/issue4794 darrenr Decimal to receive from_float method 4 days http://bugs.python.org/issue4796 rhettinger needs review doc issue for threading module (name/daemon properties) 0 days http://bugs.python.org/issue4808 georg.brandl 2.5.4 release missing from python.org/downloads 0 days http://bugs.python.org/issue4809 benjamin.peterson invalid reST markup in several documents 2 days http://bugs.python.org/issue4811 georg.brandl patch Junk in the decimals namespace 1 days http://bugs.python.org/issue4812 marketdickinson Patch of itertools.{combinations,permutations} for empty combina 5 days http://bugs.python.org/issue4816 rhettinger patch PyOS_GetLastModificationTime is unused 0 days http://bugs.python.org/issue4817 
loewis patch Fix indentation in memoryobject.c 0 days http://bugs.python.org/issue4822 pitrou patch idle height and place 0 days http://bugs.python.org/issue4823 gpolo test_cmd_line failure on Mac OS X for py3k 1 days http://bugs.python.org/issue4824 skip.montanaro TypeError with complex.real() and complex.imag() 0 days http://bugs.python.org/issue4825 georg.brandl exec() docstring bug about file objects 0 days http://bugs.python.org/issue4826 benjamin.peterson regrtest.py -u largefile test_io fails on OS X 10.5.6 0 days http://bugs.python.org/issue4830 pitrou Idle Hangs on exit Button 3 days http://bugs.python.org/issue4836 skillybob Reminder: Please Respond to Manas's Invitation 0 days http://bugs.python.org/issue4839 benjamin.peterson Compile dbm in Ubuntu 1 days http://bugs.python.org/issue4840 gpolo warnings system and inspect module disagree about 0 days http://bugs.python.org/issue4845 exarkun Py_UNICODE_ISSPACE causes linker error 0 days http://bugs.python.org/issue4846 eckhardt 26backport Change type and add _Py_ prefix to COUNT_ALLOCS variables 2 days http://bugs.python.org/issue4850 loewis patch gnu_get_libc_version() returns bad number on Ubuntu 64 bits 0 days http://bugs.python.org/issue4854 benjamin.peterson Popen(..., shell=True,...) should allow simple access to the com 0 days http://bugs.python.org/issue4855 georg.brandl Deprecation of MD5 1 days http://bugs.python.org/issue4858 gvanrossum test_msvc9compiler fails on VC6 0 days http://bugs.python.org/issue4864 ocean-city patch, easy Code to remove in parsetok? 
0 days http://bugs.python.org/issue4866 amaury.forgeotdarc crash in ctypes when passing a string to a function without defi 1 days http://bugs.python.org/issue4867 theller random.expovariate(0.0) 0 days http://bugs.python.org/issue4869 rhettinger Incorrect detection of module as local 1 days http://bugs.python.org/issue4876 benjamin.peterson 'import Tkinter' causes windows missing-DLL popup 1927 days http://bugs.python.org/issue814654 gpolo term.h present but cannot be compiled 1923 days http://bugs.python.org/issue816929 amaury.forgeotdarc Module to dynamically generate structseq objects 1652 days http://bugs.python.org/issue980098 rhettinger patch broken pyc files 1367 days http://bugs.python.org/issue1180193 pitrou patch Shift+Backspace exhibits odd behavior 974 days http://bugs.python.org/issue1482122 gpolo Python 2.5b2 fails to build on Solaris 10 (GCC Compiler) 893 days http://bugs.python.org/issue1529269 jprante Tcl/Tk auto-expanding window 744 days http://bugs.python.org/issue1622010 gpolo distutils sdist does not exclude SVN/CVS files on Windows 627 days http://bugs.python.org/issue1702551 tarek Top Issues Most Discussed (10) ______________________________ 29 Faster opcode dispatch on gcc 14 days open http://bugs.python.org/issue4753 22 idle 3.1a1 utf8 7 days open http://bugs.python.org/issue4815 22 Patch for better thread support in hashlib 14 days pending http://bugs.python.org/issue4751 21 Patch of itertools.{combinations,permutations} for empty combin 5 days closed http://bugs.python.org/issue4816 18 Remove module level functions in _tkinter that depend on TkappO 136 days closed http://bugs.python.org/issue3638 15 Change type and add _Py_ prefix to COUNT_ALLOCS variables 2 days closed http://bugs.python.org/issue4850 14 ftplib.retrlines('LIST') hangs at the end of listing (SocketIO. 
6 days closed http://bugs.python.org/issue4791 13 Thread Safe Py_AddPendingCall 60 days open http://bugs.python.org/issue4293 12 Faster utf-8 decoding 2 days open http://bugs.python.org/issue4868 11 Deprecation of MD5 1 days closed http://bugs.python.org/issue4858 From fumanchu at aminus.org Fri Jan 9 18:23:34 2009 From: fumanchu at aminus.org (Robert Brewer) Date: Fri, 9 Jan 2009 09:23:34 -0800 Subject: [Python-Dev] http://bugs.python.org/issue3628 In-Reply-To: <496705A9.5000709@g.nevcal.com> References: <496705A9.5000709@g.nevcal.com> Message-ID: Glenn Linderman wrote: > I'm getting an error similar to that in > http://bugs.python.org/issue3628 > when I try to run python2.6 and cherrypy 3.1.1. > > I'm too new to see any connection between the symptom and the cure > described in the above issue... I'd guess that somehow threads imply an > extra parameter? > > It also seems that the SetDaemon call simply does what the replacement > code does, so I don't understand how the fix fixes anything, much less > how it fixes a parameter count in a seemingly unrelated function. > > In any case, the issue is against 3.0, where it claims to be fixed. I > don't know enough about the tracker to find if it was fixed in 2.6 > concurrently, but the symptom appears there. > > I tried hacking all the references I could find to XXX.SetDaemon(True) > to XXX.daemon = True but it didn't seem to help. Fixed in http://www.cherrypy.org/changeset/2096. Robert Brewer fumanchu at aminus.org From benjamin at python.org Fri Jan 9 18:50:18 2009 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 9 Jan 2009 11:50:18 -0600 Subject: [Python-Dev] new pep index Message-ID: <1afaf6160901090950i79bca86frce03168143f29345@mail.gmail.com> This is just a note that the PEP index (PEP 0) is now automatically generated, so you need not bother to update any more. 
-- Regards, Benjamin From brett at python.org Fri Jan 9 20:05:04 2009 From: brett at python.org (Brett Cannon) Date: Fri, 9 Jan 2009 11:05:04 -0800 Subject: [Python-Dev] new pep index In-Reply-To: <1afaf6160901090950i79bca86frce03168143f29345@mail.gmail.com> References: <1afaf6160901090950i79bca86frce03168143f29345@mail.gmail.com> Message-ID: On Fri, Jan 9, 2009 at 09:50, Benjamin Peterson wrote: > This is just a note that the PEP index (PEP 0) is now automatically > generated, so you need not bother to update any more. Thanks for getting this done! -Brett From v+python at g.nevcal.com Fri Jan 9 22:34:36 2009 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 09 Jan 2009 13:34:36 -0800 Subject: [Python-Dev] exec documentation Message-ID: <4967C2EC.8060108@g.nevcal.com> in 2.6 and before execfile is listed in builtin functions, and is not marked deprecated, and exec is in the simple statements, and is not marked deprecated. in 3.0 execfile is not listed in builtin functions, exec is. exec is not listed in simple statements. I guess this is an intended 3.0 change, but is this the proper way to document it? What I was really trying to figure out is how I could specify the encoding of a file to be execfile'd in 2.6... but didn't find it so thought I'd try 3.0 to see if it would assume UTF-8, but had forgotten execfile doesn't exist in 3.0 (if I knew it; I'm new here). -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. 
-- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From barry at python.org Fri Jan 9 23:23:39 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 9 Jan 2009 17:23:39 -0500 Subject: [Python-Dev] Meet your next release manager: Benjamin Peterson Message-ID: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Now that Python 2.6 and 3.0 are in maintenance mode, it's time to start thinking about Python 2.7 and 3.1. While I've enjoyed my redux service as your release manager for 2.6 and 3.0, I believe it's time to get some new blood in here. To that end, I'm happy to say that Benjamin Peterson will be the release manager for Python 2.7 and 3.1. I will be mentoring him through the process, but it'll be his ball of snake wax. Please join me in helping him make the 2.7 and 3.1 releases as great as 2.6 and 3.0! I will continue to RM 2.6 and 3.0, and I want to start planning for a 3.0.1 release this month. Cheers Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWfObXEjvBPtnXfVAQI8IgQAqIcJf5SogGu7uYVU7esbZ7osXmYhy0Nx m2hr1r+1/ohzfTlty0VyfwbKsFjoGAjn9X5feMNpFQ/5Kwv3JO3s217rrqCgTeeH CPhefuQAMeZ7lZs/hg/uzK48L2r/KdFMCD0Xuj7ewqT0xbtopR2P9OgLiwj8p8H8 //OgcOxFAeE= =t1tg -----END PGP SIGNATURE----- From tjreedy at udel.edu Sat Jan 10 00:40:08 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 09 Jan 2009 18:40:08 -0500 Subject: [Python-Dev] exec documentation In-Reply-To: <4967C2EC.8060108@g.nevcal.com> References: <4967C2EC.8060108@g.nevcal.com> Message-ID: Glenn Linderman wrote: > in 2.6 and before execfile is listed in builtin functions, and is not > marked deprecated, and exec is in the simple statements, and is not > marked deprecated. Because they are not going away in 2.7. > in 3.0 execfile is not listed in builtin functions, exec is. exec is > not listed in simple statements. All as appropriate. 
> I guess this is an intended 3.0 change, but is this the proper way to > document it? This is really a python-list/c.l.p question: Anyway... What's new 3.0: "exec() is no longer a keyword; it remains as a function."..."Removed execfile(). Instead of execfile(fn) use exec(open(fn).read()). " ...Yes. > What I was really trying to figure out is how I could specify the > encoding of a file to be execfile'd in 2.6... but didn't find it so > thought I'd try 3.0 to see if it would assume UTF-8, but had forgotten > execfile doesn't exist in 3.0 (if I knew it; I'm new here). Ditto - how to use current 3.0, not how to develop 3.0.1/3.1. Anyway, specify encoding in the open function. tjr From v+python at g.nevcal.com Sat Jan 10 01:26:46 2009 From: v+python at g.nevcal.com (Glenn Linderman) Date: Fri, 09 Jan 2009 16:26:46 -0800 Subject: [Python-Dev] exec documentation In-Reply-To: References: <4967C2EC.8060108@g.nevcal.com> Message-ID: <4967EB46.7080706@g.nevcal.com> On approximately 1/9/2009 3:40 PM, came the following characters from the keyboard of Terry Reedy: > Glenn Linderman wrote: >> in 2.6 and before execfile is listed in builtin functions, and is not >> marked deprecated, and exec is in the simple statements, and is not >> marked deprecated. > > Because they are not going away in 2.7. Ah, that's the missing piece! I keep thinking 2.5, 2.6, 3.0, and forgetting that someone might make a 2.7 :) I bet I wasn't the first one to be confused by this, nor am I likely to be the last. >> in 3.0 execfile is not listed in builtin functions, exec is. exec is >> not listed in simple statements. > > All as appropriate. Sure, given a 2.7 >> I guess this is an intended 3.0 change, but is this the proper way to >> document it? > > This is really a python-list/c.l.p question: Anyway... What's new 3.0: > "exec() is no longer a keyword; it remains as a function."..."Removed > execfile(). Instead of execfile(fn) use exec(open(fn).read()). " ...Yes. 
> >> What I was really trying to figure out is how I could specify the >> encoding of a file to be execfile'd in 2.6... but didn't find it so >> thought I'd try 3.0 to see if it would assume UTF-8, but had forgotten >> execfile doesn't exist in 3.0 (if I knew it; I'm new here). > > Ditto - how to use current 3.0, not how to develop 3.0.1/3.1. Anyway, > specify encoding in the open function. execfile( "file.py" ) Where is the open function? I have it working under 3.0, not sure how to specify the encoding for 2.6, though, and this question is now off-topic for Python-Dev. -- Glenn -- http://nevcal.com/ =========================== A protocol is complete when there is nothing left to remove. -- Stuart Cheshire, Apple Computer, regarding Zero Configuration Networking From ncoghlan at gmail.com Sat Jan 10 01:48:59 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 10 Jan 2009 10:48:59 +1000 Subject: [Python-Dev] exec documentation In-Reply-To: <4967EB46.7080706@g.nevcal.com> References: <4967C2EC.8060108@g.nevcal.com> <4967EB46.7080706@g.nevcal.com> Message-ID: <4967F07B.7000201@gmail.com> Glenn Linderman wrote: > On approximately 1/9/2009 3:40 PM, came the following characters from > the keyboard of Terry Reedy: >> Glenn Linderman wrote: >>> in 2.6 and before execfile is listed in builtin functions, and is not >>> marked deprecated, and exec is in the simple statements, and is not >>> marked deprecated. >> >> Because they are not going away in 2.7. > > > Ah, that's the missing piece! I keep thinking 2.5, 2.6, 3.0, and > forgetting that someone might make a 2.7 :) I bet I wasn't the first > one to be confused by this, nor am I likely to be the last. Not might - there *is* going to be a 2.7 (that will probably come out at the same time as 3.1) and we're already working on it: http://docs.python.org/dev/whatsnew/2.7.html Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From aahz at pythoncraft.com Sat Jan 10 02:21:52 2009 From: aahz at pythoncraft.com (Aahz) Date: Fri, 9 Jan 2009 17:21:52 -0800 Subject: [Python-Dev] Meet your next release manager: Benjamin Peterson In-Reply-To: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> References: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> Message-ID: <20090110012152.GC14392@panix.com> On Fri, Jan 09, 2009, Barry Warsaw wrote: > > To that end, I'm happy to say that Benjamin Peterson will be the release > manager for Python 2.7 and 3.1. I will be mentoring him through the > process, but it'll be his ball of snake wax. Please join me in helping > him make the 2.7 and 3.1 releases as great as 2.6 and 3.0! Great news! How many cases of beer did you feed him before he agreed? -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W. Kernighan From brett at python.org Sat Jan 10 05:37:45 2009 From: brett at python.org (Brett Cannon) Date: Fri, 9 Jan 2009 20:37:45 -0800 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: Message-ID: OK, since no one has really said anything, I am going to assume no one has issues with importlib in terms of me checking it in or choosing a name for it (I like importlib more than imp so I will probably stick with that). So I will do some file renaming and reorganization, get the code set up to be run by regrtest, and then check the code in! I am going to set PyCon as a hard deadline such that no matter how much more file churn I have left I will still check it into py3k by then (along with importlib.import_module() into 2.7). 
On Thu, Jan 8, 2009 at 11:06, Brett Cannon wrote: > My work rewriting import in pure Python code has reached beta. > Basically the code is semantically complete and as > backwards-compatible as I can make it short of widespread testing or > running on a Windows box. There are still some tweaks here and there I > want to make and an API to expose, but __import__ works as expected > when run as the import implementation for all unit tests. > > Knowing how waiting for perfection leads to never finishing, I would > like to start figuring out what it will take to get the code added to > the standard library of 3.1 with hopes of getting the bootstrapping > stuff done so that the C implementation of import can go away in 3.1 > as well. I see basically three things that need to be decided upfront. > > One, does anyone have issues if I check in importlib? We have > typically said code has to have been selected as best-of-breed by the > community first, so I realize I am asking for a waiver on this one. > > Two, what should the final name be? I originally went with importlib > since this code was developed outside of the trunk, but I can see some > people suggesting using the imp name. That's fine although that does > lead to the question of what to do with the current imp. It could be > renamed _imp, but then that means what is currently named _importlib > would have to be renamed to something else as well. Maybe > imp._bootstrap? Plus I always viewed imp as the place where really > low-level, C-based stuff lived. Otherwise importlib can slowly subsume > the stuff in imp that is still useful. > > Three, there are still some structural changes to the code that I want > to make. I can hold off on checking in the code until these changes > are made, but as I said earlier, I know better than to wait forever > for perfection. > > And because I know people will ask: no, I do not plan to backport all > the code to 2.7. I want this to be a carrot to people to switch to > 3.x. 
But I will backport the import_module function I wrote to 2.7 so
> people do have that oft-requested feature since it is a really simple
> bit of Python code.
>
> -Brett
>

From kristjan at ccpgames.com Sat Jan 10 12:29:47 2009
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Sat, 10 Jan 2009 11:29:47 +0000
Subject: [Python-Dev] Py_END_ALLOW_THREADS and GetLastError()
Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local>

Currently on Windows, Py_END_ALLOW_THREADS can have the side effect of
resetting the Windows error code returned by GetLastError(). There are a
number of cases, particularly in posixmodule, with a pattern like:

    Py_BEGIN_ALLOW_THREADS
    result = FindNextFile(hFindFile, &FileData);
    Py_END_ALLOW_THREADS
    /* FindNextFile sets error to ERROR_NO_MORE_FILES if
       it got to the end of the directory. */
    if (!result && GetLastError() != ERROR_NO_MORE_FILES) {

That doesn't work. (This particular site is where I noticed the problem,
running the testsuite in a debug build.)

Now, the thread switch macro does take care to preserve "errno", but not
the Windows system error. This is easy to add, but it requires that
windows.h be included by ceval.c and pystate.c. The alternative fix is to
find all these cases and manually preserve the error state, or query it
right after the function call if needed.

Any preferences?

Kristján
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fuzzyman at voidspace.org.uk Sat Jan 10 12:38:58 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sat, 10 Jan 2009 11:38:58 +0000 Subject: [Python-Dev] Getting importlib into the standard library for 3.1 In-Reply-To: References: Message-ID: <496888D2.1010301@voidspace.org.uk> Brett Cannon wrote: > OK, since no one has really said anything, I am going to assume no one > has issues with importlib in terms of me checking it in or choosing a > name for it (I like importlib more than imp so I will probably stick > with that). > > So I will do some file renaming and reorganization, get the code set > up to be run by regrtest, and then check the code in! I am going to > set PyCon as a hard deadline such that no matter how much more file > churn I have left I will still check it into py3k by then (along with > importlib.import_module() into 2.7). > +1 :-) Michael > On Thu, Jan 8, 2009 at 11:06, Brett Cannon wrote: > >> My work rewriting import in pure Python code has reached beta. >> Basically the code is semantically complete and as >> backwards-compatible as I can make it short of widespread testing or >> running on a Windows box. There are still some tweaks here and there I >> want to make and an API to expose, but __import__ works as expected >> when run as the import implementation for all unit tests. >> >> Knowing how waiting for perfection leads to never finishing, I would >> like to start figuring out what it will take to get the code added to >> the standard library of 3.1 with hopes of getting the bootstrapping >> stuff done so that the C implementation of import can go away in 3.1 >> as well. I see basically three things that need to be decided upfront. >> >> One, does anyone have issues if I check in importlib? We have >> typically said code has to have been selected as best-of-breed by the >> community first, so I realize I am asking for a waiver on this one. >> >> Two, what should the final name be? 
I originally went with importlib >> since this code was developed outside of the trunk, but I can see some >> people suggesting using the imp name. That's fine although that does >> lead to the question of what to do with the current imp. It could be >> renamed _imp, but then that means what is currently named _importlib >> would have to be renamed to something else as well. Maybe >> imp._bootstrap? Plus I always viewed imp as the place where really >> low-level, C-based stuff lived. Otherwise importlib can slowly subsume >> the stuff in imp that is still useful. >> >> Three, there are still some structural changes to the code that I want >> to make. I can hold off on checking in the code until these changes >> are made, but as I said earlier, I know better than to wait forever >> for perfection. >> >> And because I know people will ask: no, I do not plan to backport all >> the code to 2.7. I want this to be a carrot to people to switch to >> 3.x. But I will backport the import_module function I wrote to 2.7 so >> people do have that oft-requested feature since it is a really simple >> bit of Python code. 
>> >> -Brett
>> >>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk
>

--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog

From ocean-city at m2.ccsnet.ne.jp Sat Jan 10 12:41:34 2009
From: ocean-city at m2.ccsnet.ne.jp (Hirokazu Yamamoto)
Date: Sat, 10 Jan 2009 20:41:34 +0900
Subject: [Python-Dev] Py_END_ALLOW_THREADS and GetLastError()
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local>
Message-ID: <4968896E.1060108@m2.ccsnet.ne.jp>

Kristján Valur Jónsson wrote:
> Currently on Windows, Py_END_ALLOW_THREADS can have the side effect of
> resetting the windows error code returned by GetLastError().
>
> There is a number of cases, particularly in posixmodule, with a pattern
> like:
>
> Py_BEGIN_ALLOW_THREADS
> result = FindNextFile(hFindFile, &FileData);
> Py_END_ALLOW_THREADS
> /* FindNextFile sets error to ERROR_NO_MORE_FILES if
> it got to the end of the directory. */
> if (!result && GetLastError() != ERROR_NO_MORE_FILES) {
>
> That doesn't work. (This particular site is where I noticed the
> problem, running the testsuite in a debug build).
>
> Now, the thread switch macro does take care to preserve "errno", but not
> the windows system error. This is easy to add, but it requires that
> windows.h be included by ceval.c and pystate.c
>
> The alternative fix is to find all these cases and manually preserve the
> error state, or query it right after the function call if needed.
>
> Any preferences?
Please see http://bugs.python.org/issue4906 :-)

From doomster at knuut.de Sat Jan 10 13:40:10 2009
From: doomster at knuut.de (Ulrich Eckhardt)
Date: Sat, 10 Jan 2009 13:40:10 +0100
Subject: [Python-Dev] Py_END_ALLOW_THREADS and GetLastError()
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local>
Message-ID: <200901101340.11079.doomster@knuut.de>

On Saturday 10 January 2009 12:29:47 Kristján Valur Jónsson wrote:
> Currently on Windows, Py_END_ALLOW_THREADS can have the side effect of
> resetting the windows error code returned by GetLastError(). There is a
> number of cases, particularly in posixmodule, with a pattern like:
> Py_BEGIN_ALLOW_THREADS
> result = FindNextFile(hFindFile, &FileData);
> Py_END_ALLOW_THREADS
> /* FindNextFile sets error to ERROR_NO_MORE_FILES if
> it got to the end of the directory. */
> if (!result && GetLastError() != ERROR_NO_MORE_FILES) {
>
> That doesn't work. (This particular site is where I noticed the problem,
> running the testsuite in a debug build). Now, the thread switch macro does
> take care to preserve "errno", but not the windows system error. This is
> easy to add, but it requires that windows.h be included by ceval.c and
> pystate.c The alternative fix is to find all these cases and manually
> preserve the error state, or query it right after the function call if
> needed. Any preferences?

Well, that's what you get for using globals, and exactly that is the reason
why their use is generally discouraged. My preference would be to fix all
cases where there is an intervening call after the call that set errno, so
that they first preserve that state. As a short-term fix, I would add a
workaround to Py_END_ALLOW_THREADS though, both for errno and win32's
GetLastError().

Generally, I would discourage non-local errno use.
My motivation is that MS Windows CE simply doesn't have errno and MS Windows in general often uses different ways to signal errors, so not using it would restrict the conditionally compiled code further. Especially in the math module, I have no idea yet how to port that in a way that is still at least remotely clean and maintainable because errno is used everywhere there. Translating the errno value to a Python error directly after the failed call would help immensely. just my two embedded cents Uli From martin at v.loewis.de Sat Jan 10 15:11:16 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 10 Jan 2009 15:11:16 +0100 Subject: [Python-Dev] Py_END_ALLOW_THREADS and GetLastError() In-Reply-To: <200901101340.11079.doomster@knuut.de> References: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local> <200901101340.11079.doomster@knuut.de> Message-ID: <4968AC84.5020407@v.loewis.de> > Well, that's what you get for using globals Please do take a look at the issue at hand before pointing fingers. First, GetLastError() isn't a really a global (and neither is errno); they are both thread-local. Next, there is really no choice to use or not use errno - if you want to find out what the error is that has occurred, you *have* to look at errno. Finally, in the case of Py_END_ALLOW_THREADS, errno/GetLastError is typically read right after the system call. However, you can't raise the Python exception before Py_END_ALLOW_THREADS (which you seem to suggest as a solution), since we must not call Python APIs without holding the GIL. > Generally, I would discourage non-local errno use. My motivation is that MS > Windows CE simply doesn't have errno and MS Windows in general often uses > different ways to signal errors, so not using it would restrict the > conditionally compiled code further. That sounds like an unrelated issue to the one at hand. 
Regards, Martin From doomster at knuut.de Sat Jan 10 16:49:56 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sat, 10 Jan 2009 16:49:56 +0100 Subject: [Python-Dev] Py_END_ALLOW_THREADS and GetLastError() In-Reply-To: <4968AC84.5020407@v.loewis.de> References: <930F189C8A437347B80DF2C156F7EC7F04D78821E6@exchis.ccp.ad.local> <200901101340.11079.doomster@knuut.de> <4968AC84.5020407@v.loewis.de> Message-ID: <200901101649.56961.doomster@knuut.de> On Saturday 10 January 2009 15:11:16 Martin v. L?wis wrote: > > Well, that's what you get for using globals > > Please do take a look at the issue at hand before pointing fingers. I'm really sorry if this sounded like I was accusing someone, that was not my intention. Uli From barry at python.org Sun Jan 11 00:08:40 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 10 Jan 2009 18:08:40 -0500 Subject: [Python-Dev] Meet your next release manager: Benjamin Peterson In-Reply-To: <20090110012152.GC14392@panix.com> References: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> <20090110012152.GC14392@panix.com> Message-ID: <9D91D763-05B8-4D7C-87F1-C717FD00CEC1@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 9, 2009, at 8:21 PM, Aahz wrote: > On Fri, Jan 09, 2009, Barry Warsaw wrote: >> >> To that end, I'm happy to say that Benjamin Peterson will be the >> release >> manager for Python 2.7 and 3.1. I will be mentoring him through the >> process, but it'll be his ball of snake wax. Please join me in >> helping >> him make the 2.7 and 3.1 releases as great as 2.6 and 3.0! > > Great news! How many cases of beer did you feed him before he agreed? Anthony and I have just had so much fun whitewashing the last few releases, we just couldn't in good conscious keep it to ourselves! 
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSWkqeHEjvBPtnXfVAQLOVgP7BKxYMplrBEu71AqwrhdBcYdFNV+936/7 WUUm4FXwkB2sTnfuoEkQH/N195PDUIAi2MAE6sbfkVRyl/qrVnAEsa7bQ9Ss8a6J wu9ouCDQzAqqlyXzxjPUKOdJTpteALE6IhYBMDUDIJDt4BKX/CBXctFCZpEoE50Z lQVY28m21Xg= =WkhB -----END PGP SIGNATURE----- From aahz at pythoncraft.com Sun Jan 11 00:59:17 2009 From: aahz at pythoncraft.com (Aahz) Date: Sat, 10 Jan 2009 15:59:17 -0800 Subject: [Python-Dev] Meet your next release manager: Benjamin Peterson In-Reply-To: <9D91D763-05B8-4D7C-87F1-C717FD00CEC1@python.org> References: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> <20090110012152.GC14392@panix.com> <9D91D763-05B8-4D7C-87F1-C717FD00CEC1@python.org> Message-ID: <20090110235917.GA13874@panix.com> On Sat, Jan 10, 2009, Barry Warsaw wrote: > On Jan 9, 2009, at 8:21 PM, Aahz wrote: >> On Fri, Jan 09, 2009, Barry Warsaw wrote: >>> >>> To that end, I'm happy to say that Benjamin Peterson will be the >>> release manager for Python 2.7 and 3.1. I will be mentoring him >>> through the process, but it'll be his ball of snake wax. Please >>> join me in helping him make the 2.7 and 3.1 releases as great as 2.6 >>> and 3.0! >> >> Great news! How many cases of beer did you feed him before he >> agreed? > > Anthony and I have just had so much fun whitewashing the last few > releases, we just couldn't in good conscious keep it to ourselves! ^^^^^^^^^ That proves my point, I think. ;-) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ f u cn rd ths, u cn gt a gd jb n nx prgrmmng. 
From steve at holdenweb.com Sun Jan 11 03:47:11 2009 From: steve at holdenweb.com (Steve Holden) Date: Sat, 10 Jan 2009 21:47:11 -0500 Subject: [Python-Dev] Meet your next release manager: Benjamin Peterson In-Reply-To: <20090110235917.GA13874@panix.com> References: <41BFC5BD-80BF-4F6D-941B-8EDFCDF30767@python.org> <20090110012152.GC14392@panix.com> <9D91D763-05B8-4D7C-87F1-C717FD00CEC1@python.org> <20090110235917.GA13874@panix.com> Message-ID: Aahz wrote: > On Sat, Jan 10, 2009, Barry Warsaw wrote: >> On Jan 9, 2009, at 8:21 PM, Aahz wrote: >>> On Fri, Jan 09, 2009, Barry Warsaw wrote: >>>> To that end, I'm happy to say that Benjamin Peterson will be the >>>> release manager for Python 2.7 and 3.1. I will be mentoring him >>>> through the process, but it'll be his ball of snake wax. Please >>>> join me in helping him make the 2.7 and 3.1 releases as great as 2.6 >>>> and 3.0! >>> Great news! How many cases of beer did you feed him before he >>> agreed? >> Anthony and I have just had so much fun whitewashing the last few >> releases, we just couldn't in good conscious keep it to ourselves! > ^^^^^^^^^ > That proves my point, I think. ;-) No, that demonstrates that while he's happy to share the release management around Barry has been keeping the beer all to himself. 
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From fiorix at gmail.com Sun Jan 11 17:02:17 2009 From: fiorix at gmail.com (Alexandre Fiori) Date: Sun, 11 Jan 2009 14:02:17 -0200 Subject: [Python-Dev] operator.itemgetter with a callback method Message-ID: <8ce799460901110802i2f76d6c3y8f43147196b9f748@mail.gmail.com> hello i was thinking about a possible improvement for the itemgetter the documentation page shows simple examples like sorting a dictionary by its integer values, like this: >>> inventory = [('apple', 3), ('banana', 2), ('pear', 5), ('orange', 1)] >>> getcount = itemgetter(1) >>> map(getcount, inventory) [3, 2, 5, 1] >>> sorted(inventory, key=getcount) [('orange', 1), ('banana', 2), ('apple', 3), ('pear', 5)] let's suppose i have dictionary where items are lists (instead of integers), and i want to sort it by the size of each list: >>> friends = { ... 'alex': ['bob', 'jane'], ... 'mary': ['steve', 'linda', 'foo bar'], ... 'john': ['max'] ... } >>> sorted(friends.items(), key=itemgetter(1)) [('alex', ['bob', 'jane']), ('john', ['max']), ('mary', ['steve', 'linda', 'foo bar'])] that doesn't work since itemgetter(1) will return a list, and that's not useful for sorting. 
i didn't look at the code, but i suppose itemgetter is something like this: class itemgetter: def __init__(self, index): self.index = index def __call__(self, item): return item[self.index] in order for that sort (and possibly a lot of other things) to work properly, we could add a callback method for itemgetter, like this: class itemgetter: def __init__(self, index, callback=None): self.index = index self.callback = callback def __call__(self, item): return self.callback and self.callback(item[self.index]) or item[self.index] so, we could easily sort by the amount of data in each list, like this: >>> sorted(friends.items(), key=itemgetter(1, callback=len)) [('john', ['max']), ('alex', ['bob', 'jane']), ('foo', ['bar', 'steve', 'linda'])] what do you guys think about it? please correct me if i'm wrong. -- Ship ahoy! Hast seen the White Whale? - Melville's Captain Ahab From ggpolo at gmail.com Sun Jan 11 17:12:05 2009 From: ggpolo at gmail.com (Guilherme Polo) Date: Sun, 11 Jan 2009 14:12:05 -0200 Subject: [Python-Dev] operator.itemgetter with a callback method In-Reply-To: <8ce799460901110802i2f76d6c3y8f43147196b9f748@mail.gmail.com> References: <8ce799460901110802i2f76d6c3y8f43147196b9f748@mail.gmail.com> Message-ID: On Sun, Jan 11, 2009 at 2:02 PM, Alexandre Fiori wrote: > > hello > > i was thinking about a possible improvement for the itemgetter > the documentation page shows simple examples like sorting a dictionary by > its integer values Hi, Sorry for starting like this but ideas are supposed to be emailed to the python-ideas mailing list. > . > .
> > in order for that sort (and possibly a lot of other things) to work > properly, we could add > a callback method for itemgetter, like this: > > class itemgetter: > def __init__(self, index, callback=None): > self.index = index > self.callback = callback > > def __call__(self, item): > return self.callback and self.callback(item[self.index]) or > item[self.index] > > so, we could easly sort by the amount of data in each list, like this: > >>>> sorted(friends.items(), key=itemgetter(1, callback=len)) > [('john', ['max']), ('alex', ['bob', 'jane']), ('foo', ['bar', 'steve', > 'linda'])] > > > what do you guys think about it? please correct me if i'm wrong. > > You are not forced to use itemgetter as a key in sorted, you can provide your own key method, like this: def x(item): return len(item[1]) sorted(friends.items(), key=x) Also, your idea ruins the name "itemgetter" since it is no longer a itemgetter. -- -- Guilherme H. Polo Goncalves From fiorix at gmail.com Sun Jan 11 17:21:26 2009 From: fiorix at gmail.com (Alexandre Fiori) Date: Sun, 11 Jan 2009 14:21:26 -0200 Subject: [Python-Dev] operator.itemgetter with a callback method In-Reply-To: References: <8ce799460901110802i2f76d6c3y8f43147196b9f748@mail.gmail.com> Message-ID: <8ce799460901110821j5489c232p970af47daa131aba@mail.gmail.com> thanks! On Sun, Jan 11, 2009 at 2:12 PM, Guilherme Polo wrote: > On Sun, Jan 11, 2009 at 2:02 PM, Alexandre Fiori wrote: > > > > hello > > > > i was thinking about a possible improvement for the itemgetter > > the documentation page shows simple examples like sorting a dictionary by > > its integer values > > Hi, > > Sorry for starting like this but ideas are supposed to be emailed to > the python-ideas maillist. > > > . > > . 
> > > > in order for that sort (and possibly a lot of other things) to work > > properly, we could add > > a callback method for itemgetter, like this: > > > > class itemgetter: > > def __init__(self, index, callback=None): > > self.index = index > > self.callback = callback > > > > def __call__(self, item): > > return self.callback and self.callback(item[self.index]) or > > item[self.index] > > > > so, we could easly sort by the amount of data in each list, like this: > > > >>>> sorted(friends.items(), key=itemgetter(1, callback=len)) > > [('john', ['max']), ('alex', ['bob', 'jane']), ('foo', ['bar', 'steve', > > 'linda'])] > > > > > > what do you guys think about it? please correct me if i'm wrong. > > > > > > You are not forced to use itemgetter as a key in sorted, you can > provide your own key method, like this: > > def x(item): > return len(item[1]) > > sorted(friends.items(), key=x) > > Also, your idea ruins the name "itemgetter" since it is no longer a > itemgetter. > > -- > -- Guilherme H. Polo Goncalves > -- Ship ahoy! Hast seen the While Whale? - Melville's Captain Ahab -------------- next part -------------- An HTML attachment was scrubbed... URL: From doomster at knuut.de Sun Jan 11 18:13:02 2009 From: doomster at knuut.de (Ulrich Eckhardt) Date: Sun, 11 Jan 2009 18:13:02 +0100 Subject: [Python-Dev] How should I handle unsupported features? Message-ID: <200901111813.02663.doomster@knuut.de> Hi! Porting to MS Windows CE, I find that e.g. signals or environment vars are not supported. How should I handle that? In particular, I'm talking about PyOS_getsig() and PyOS_setsig(). Should I just #ifdef them out completely or should I implement them by setting an error? Which error should I set? 
cheers Uli From dickinsm at gmail.com Sun Jan 11 18:49:05 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Sun, 11 Jan 2009 17:49:05 +0000 Subject: [Python-Dev] __long__ method still exists in Python 3.x Message-ID: <5c6f2a5d0901110949i631b59e6rc058f3a054d6dd13@mail.gmail.com> I noticed that the builtin numeric types (int, float, complex) all still have a __long__ method in 3.x. Shouldn't this have disappeared as part of the int/long unification? Is there any reason not to remove this (by setting the nb_long entry to 0 in all three cases)? Mark From martin at v.loewis.de Sun Jan 11 18:53:13 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 11 Jan 2009 18:53:13 +0100 Subject: [Python-Dev] How should I handle unsupported features? In-Reply-To: <200901111813.02663.doomster@knuut.de> References: <200901111813.02663.doomster@knuut.de> Message-ID: <496A3209.8090502@v.loewis.de> > Porting to MS Windows CE, I find that e.g. signals or environment vars are not > supported. How should I handle that? So that scripts that try to make use of these features operate in a reasonable way. > In particular, I'm talking about > PyOS_getsig() and PyOS_setsig(). Should I just #ifdef them out completely or > should I implement them by setting an error? Which error should I set? My proposal would be to actually implement signal handlers for CE. Try to minimize the amount of code change that you need to perform (*). Not sure what exactly that means, but probably, you need to provide at least the symbolic constants mandated by C. E.g. define NSIG, SIG_IGN, SIG_DFL, SIGINT, and perhaps a few others the same way the VS9 CRT defines them, then implement PyOS_setsig so that it operates on an array of NSIG function pointers. None of the signal handlers will ever be called - which essentially means that the signals just don't arise.
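Concretely, the whole stub could be as small as this untested sketch (the NSIG and SIG_* values below are placeholders and should mirror the VS9 CRT definitions on a real port):

```c
/* Sketch of signal stubs for a platform without <signal.h>.  The
   constants are placeholders; a real CE port should copy the VS9 CRT's. */
#define NSIG 23
typedef void (*PyOS_sighandler_t)(int);
#define SIG_DFL ((PyOS_sighandler_t)0)
#define SIG_IGN ((PyOS_sighandler_t)1)

static PyOS_sighandler_t handlers[NSIG];  /* all SIG_DFL to start with */

PyOS_sighandler_t PyOS_getsig(int sig)
{
    if (sig < 0 || sig >= NSIG)
        return SIG_DFL;
    return handlers[sig];
}

PyOS_sighandler_t PyOS_setsig(int sig, PyOS_sighandler_t handler)
{
    PyOS_sighandler_t old;
    if (sig < 0 || sig >= NSIG)
        return SIG_DFL;
    old = handlers[sig];
    /* Recorded but never invoked: the signals simply never arise. */
    handlers[sig] = handler;
    return old;
}

/* An example handler a script might install; it is never called. */
static void example_handler(int sig) { (void)sig; }
```

Scripts that install handlers would then work unchanged; the handlers just never fire.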
Alternatively, if you regret the storage for the signal handlers, you might a) make some or all of the signals not supported; signal(2) is defined to return SIG_ERR in that case, and set errno to EINVAL. Not sure what will break if Python can't even successfully set a SIGINT handler. b) cheat on setsig, not actually recording the signal handler. Not sure whether any code relies on getsig(k, setsig(k, f)) == f Regards, Martin (*) This is a general advise. If some feature is not supported on a minority platform, it would be a pity if a lot of code needs to be written to work around. From martin at v.loewis.de Sun Jan 11 19:40:47 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 11 Jan 2009 19:40:47 +0100 Subject: [Python-Dev] __long__ method still exists in Python 3.x In-Reply-To: <5c6f2a5d0901110949i631b59e6rc058f3a054d6dd13@mail.gmail.com> References: <5c6f2a5d0901110949i631b59e6rc058f3a054d6dd13@mail.gmail.com> Message-ID: <496A3D2F.1030206@v.loewis.de> > I noticed that the builtin numeric types (int, float, complex) all still > have a __long__ method in 3.x. Shouldn't this have disappeared as > part of the int/long unification? Is there any reason not to remove this > (by setting the nb_long entry to 0 in all three cases)? There are, apparently, still callers of the nb_long slot, so I would be cautious. Regards, Martin From benjamin at python.org Sun Jan 11 20:43:41 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 11 Jan 2009 13:43:41 -0600 Subject: [Python-Dev] __long__ method still exists in Python 3.x In-Reply-To: <496A3D2F.1030206@v.loewis.de> References: <5c6f2a5d0901110949i631b59e6rc058f3a054d6dd13@mail.gmail.com> <496A3D2F.1030206@v.loewis.de> Message-ID: <1afaf6160901111143t41bf456bi5b23409c654c0e9c@mail.gmail.com> On Sun, Jan 11, 2009 at 12:40 PM, "Martin v. L?wis" wrote: >> I noticed that the builtin numeric types (int, float, complex) all still >> have a __long__ method in 3.x. 
Shouldn't this have disappeared as >> part of the int/long unification? Is there any reason not to remove this >> (by setting the nb_long entry to 0 in all three cases)? > > There are, apparently, still callers of the nb_long slot, so I would be > cautious. We should remove all usage of it and rename it to nb_reserved. -- Regards, Benjamin From dickinsm at gmail.com Mon Jan 12 10:20:33 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Mon, 12 Jan 2009 09:20:33 +0000 Subject: [Python-Dev] __long__ method still exists in Python 3.x In-Reply-To: <1afaf6160901111143t41bf456bi5b23409c654c0e9c@mail.gmail.com> References: <5c6f2a5d0901110949i631b59e6rc058f3a054d6dd13@mail.gmail.com> <496A3D2F.1030206@v.loewis.de> <1afaf6160901111143t41bf456bi5b23409c654c0e9c@mail.gmail.com> Message-ID: <5c6f2a5d0901120120r2fa21276p51b639261cb4d0ce@mail.gmail.com> On Sun, Jan 11, 2009 at 7:43 PM, Benjamin Peterson wrote: > On Sun, Jan 11, 2009 at 12:40 PM, "Martin v. L?wis" wrote: >> There are, apparently, still callers of the nb_long slot, so I would be >> cautious. > > We should remove all usage of it and rename it to nb_reserved. I see uses of nb_long in Object/abstract.c and Modules/_struct.c, but no others in the core. I think the first can be removed, and the second changed to nb_int. Patch at http://bugs.python.org/issue4910 Thanks, Mark From steve at holdenweb.com Tue Jan 13 08:38:54 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 13 Jan 2009 02:38:54 -0500 Subject: [Python-Dev] Python 3.0 Porting information Message-ID: <496C450E.8000607@holdenweb.com> I think we need to make sure that Google becomes better aware of the solutions to the problem expressed in http://mcjeff.blogspot.com/2009/01/python-30-porting-efforts.html While the author didn't, in my opinion, exercise due Google diligence there are some valid points in the post. Unfortunately the direct descendants of http://wiki.python.org/moin/PortingToPy3k don't seem to be receiving much attention (i.e. 
they remain a framework rather than solid information). Do we need to recruit community support to get this stuff moving? Experience suggests that "if we build it" they will not come unless and until they are led by the nose. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From martin at v.loewis.de Tue Jan 13 09:29:17 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 13 Jan 2009 09:29:17 +0100 Subject: [Python-Dev] [Pydotorg] Python 3.0 Porting information In-Reply-To: <496C450E.8000607@holdenweb.com> References: <496C450E.8000607@holdenweb.com> Message-ID: <496C50DD.1020400@v.loewis.de> > Do we need to recruit community support > to get this stuff moving? Experience suggests that "if we build it" they > will not come unless and until they are led by the nose. There is http://pypi.python.org/pypi?:action=browse&c=533 (Programming Language :: Python :: 3) It currently lists some 20 packages. It might be worthwhile to blog about that, explaining that if your package isn't listed there even though it works with Python 3, then you should classify it correctly right away. There is also http://wiki.python.org/moin/Early2to3Migrations although I'm not sure how useful this is (if the "upstream" package supports 3.0, and is listed in PyPI, then the PyPI listing is better. This page might help to collect "forking ports", i.e. where the porter is not the author). Regards, Martin From steve at holdenweb.com Tue Jan 13 12:37:24 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 13 Jan 2009 06:37:24 -0500 Subject: [Python-Dev] [Pydotorg] Python 3.0 Porting information In-Reply-To: <496C50DD.1020400@v.loewis.de> References: <496C450E.8000607@holdenweb.com> <496C50DD.1020400@v.loewis.de> Message-ID: Martin v. L?wis wrote: >> Do we need to recruit community support >> to get this stuff moving? 
Experience suggests that "if we build it" they >> will not come unless and until they are led by the nose. > > There is > > http://pypi.python.org/pypi?:action=browse&c=533 > (Programming Language :: Python :: 3) > > It currently lists some 20 packages. It might be worthwhile to blog > about that, explaining that if your package isn't listed there even > though it works with Python 3, then you should classify it correctly > right away. > > There is also > > http://wiki.python.org/moin/Early2to3Migrations > > although I'm not sure how useful this is (if the "upstream" package > supports 3.0, and is listed in PyPI, then the PyPI listing is better. > This page might help to collect "forking ports", i.e. where the porter > is not the author). > Thanks, Martin. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From kristjan at ccpgames.com Tue Jan 13 15:09:25 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Tue, 13 Jan 2009 14:09:25 +0000 Subject: [Python-Dev] testsuite with tmp/@test Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882739@exchis.ccp.ad.local> By accident i had a dir called @test in my PCBuild directory when I was running the testsuite. This caused test_support to define TESTFN as tmp/@test. This again caused a number of tests to fail. One issue I have already covered in http://bugs.python.org/issue4927 Another issue is test_import, which doesn't like importing with such a filename. But a lot of tests fail because of incorrect path name delimiters and such. Shouldn't we try to make this work as well as possible even with a temp file that is in a subdirectory? And, oh, I'm using Windows, which aggravates the issue with the slash/backslash confusion. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Tue Jan 13 19:24:09 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 13 Jan 2009 18:24:09 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Add_Py=5Foff=5Ft_and_related_APIs=3F?= Message-ID: Hello, Python currently has an API to deal with size_t-like numbers (Py_ssize_t, PyNumber_AsSsize_t), but it doesn't have similar facilities for off_t. Is it ok to add the following: * a Py_off_t type which is typedef'd to either Py_LONG_LONG (Windows) or off_t (others) * three C API functions: PyNumber_AsOff_t, PyLong_AsOff_t, PyLong_FromOff_t * an additional type code for PyArg_ParseTuple and friends; I suggest 'N' since 'n' currently means Py_ssize_t, and a Py_off_t should always be at least as wide as a Py_ssize_t ? (the motivation is systems where Py_ssize_t is 32-bits wide, but large file support makes off_t 64 bits wide) Thanks Antoine. From brett at python.org Tue Jan 13 19:30:11 2009 From: brett at python.org (Brett Cannon) Date: Tue, 13 Jan 2009 10:30:11 -0800 Subject: [Python-Dev] testsuite with tmp/@test In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882739@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882739@exchis.ccp.ad.local> Message-ID: On Tue, Jan 13, 2009 at 06:09, Kristján Valur Jónsson wrote: > By accident i had a dir called @test in my PCBuild directory when I was > running the testsuite. > > This caused the test_support to define TESTFN as tmp/@test. > > > > This again caused a number of tests to fail. One issue I have already > covered in http://bugs.python.org/issue4927 > > Another issue is test_import which doesn't like importing with filename. > > But a lot of tests fail because of incorrect path name delimeters and such. > Shouldn't we try to make this > > work as well as possible even with a temp file that is in a subdirectory? > Yes. Use of TESTFN should assume it is a file but not know exactly where that file is kept.
-Brett From martin at v.loewis.de Tue Jan 13 20:07:32 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 13 Jan 2009 20:07:32 +0100 Subject: [Python-Dev] Add Py_off_t and related APIs? In-Reply-To: References: Message-ID: <496CE674.2040702@v.loewis.de> > Is it ok to add the following: > * a Py_off_t type which is typedef'd to either Py_LONG_LONG (Windows) or off_t > (others) > * three C API functions: PyNumber_AsOff_t, PyLong_AsOff_t, PyLong_FromOff_t > * an additional type code for PyArg_ParseTuple and friends; I suggest 'N' since > 'n' currently means Py_ssize_t, and a Py_off_t should always be at least as wide > as a Py_ssize_t > ? -1. How many functions actually require that type? > (the motivation is systems where Py_ssize_t is 32-bits wide, but large file > support makes off_t 64 bits wide) For argument parsing, you should use "long long" if SIZEOF_OFF_T is 8 and long long is supported, and then assign to off_t as appropriate. Regards, Martin From solipsis at pitrou.net Tue Jan 13 20:17:26 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 13 Jan 2009 19:17:26 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Add_Py=5Foff=5Ft_and_related_APIs=3F?= References: <496CE674.2040702@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > -1. How many functions actually require that type? Functions in the IO lib. I can't tell you how many, let's say a dozen. > > (the motivation is systems where Py_ssize_t is 32-bits wide, but large file > > support makes off_t 64 bits wide) > > For argument parsing, you should use "long long" if SIZEOF_OFF_T is 8 > and long long is supported, and then assign to off_t as appropriate. It's wrong, because floats would be accepted as argument to the seek() method. Hence the need for (at least) PyNumber_AsOff_t. (of course the IO lib can have its own private implementation of PyNumber_AsOff_t. But then why not make it benefit everyone?) 
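The O&-style converter under discussion might look roughly like this. It is only a sketch: MockObject stands in for PyObject so the example is self-contained, and the real version would go through PyNumber_Index with proper overflow checking -- the point is just the contract (refuse floats, return 1 on success and 0 on failure) and that the target is 64 bits wide even where Py_ssize_t is not:

```c
#include <stdint.h>

/* Sketch: a 64-bit offset type even where Py_ssize_t is 32 bits. */
typedef int64_t Py_off_t_;

/* MockObject stands in for PyObject, purely to keep the sketch
   self-contained. */
typedef struct {
    int is_float;      /* would be PyFloat_Check(obj) in real code */
    int64_t ival;      /* would come from PyLong_AsLongLong & friends */
} MockObject;

/* Converter with the PyArg_ParseTuple "O&" contract: fill in the target
   and return 1 on success; return 0 (after setting an error) on failure.
   Floats are refused outright, so seek(1.5) raises instead of being
   silently truncated. */
static int off_t_converter(const MockObject *obj, void *addr)
{
    if (obj->is_float)
        return 0;      /* would PyErr_SetString(PyExc_TypeError, ...) */
    *(Py_off_t_ *)addr = (Py_off_t_)obj->ival;
    return 1;
}
```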
From guido at python.org Tue Jan 13 20:40:03 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Jan 2009 11:40:03 -0800 Subject: [Python-Dev] Python History blog started Message-ID: Now that Python has turned 19, I've started blogging about Python's history. I hope to keep it up at one article per week. So far, I've written an intro to the series (http://neopythonic.blogspot.com/2009/01/history-of-python-introduction.html) and posted two chapters to the history blog itself (http://python-history.blogspot.com/). Comments are welcome. While I have a whole series of material ready that started as a draft HOPL paper that never got published before, eventually I will run out of material. Therefore I'd like to invite others to write up *their* recollections. If you were one of the folks who helped create the newsgroup, or attended an early Python workshop or conference, I would particularly like to hear from you. (Do you recall "Sproing the bunny?" Do you own a T-shirt printed by Steve Majewsky? Then by all means get in touch!) If you would like to contribute on an ongoing basis, I'd be happy to add you to the blog as an author, so you can post your own articles. I also would like to hear from people who contributed more recently -- it is no longer the case that I know all that's going on in the community. Let's make this a group project! -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Tue Jan 13 21:21:39 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Jan 2009 06:21:39 +1000 Subject: [Python-Dev] Add Py_off_t and related APIs? In-Reply-To: References: <496CE674.2040702@v.loewis.de> Message-ID: <496CF7D3.809@gmail.com> Antoine Pitrou wrote: > Martin v. L?wis v.loewis.de> writes: >> -1. How many functions actually require that type? > > Functions in the IO lib. I can't tell you how many, let's say a dozen. 
> >>> (the motivation is systems where Py_ssize_t is 32-bits wide, but large file >>> support makes off_t 64 bits wide) >> For argument parsing, you should use "long long" if SIZEOF_OFF_T is 8 >> and long long is supported, and then assign to off_t as appropriate. > > It's wrong, because floats would be accepted as argument to the seek() method. > Hence the need for (at least) PyNumber_AsOff_t. > (of course the IO lib can have its own private implementation of > PyNumber_AsOff_t. But then why not make it benefit everyone?) If the IO lib file support is solid, file access code will generally be using that API and not need to worry about the vagaries of off_t themselves*. I'd say start with the private version in IO lib, and then if there is demand consider moving it to abstract.h (but, like Martin, I don't expect such demand to ever develop). Cheers, Nick. * As another step is made on the road to version independent C extensions... -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From martin at v.loewis.de Tue Jan 13 21:33:28 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Tue, 13 Jan 2009 21:33:28 +0100 Subject: [Python-Dev] Add Py_off_t and related APIs? In-Reply-To: References: <496CE674.2040702@v.loewis.de> Message-ID: <496CFA98.7030800@v.loewis.de> >> For argument parsing, you should use "long long" if SIZEOF_OFF_T is 8 >> and long long is supported, and then assign to off_t as appropriate. > > It's wrong, because floats would be accepted as argument to the seek() method. I see. > Hence the need for (at least) PyNumber_AsOff_t. > (of course the IO lib can have its own private implementation of > PyNumber_AsOff_t. But then why not make it benefit everyone?) I would do this through a converter function (O&), but yes, making it private to the io library sounds about right. Who else would benefit from it? 
If we start with that, we end up with ParseTuple formats for uid_t, gid_t, pid_t, and the other dozen integral types that POSIX has invented. Regards, Martin From solipsis at pitrou.net Tue Jan 13 21:53:02 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 13 Jan 2009 20:53:02 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Add_Py=5Foff=5Ft_and_related_APIs=3F?= References: <496CE674.2040702@v.loewis.de> <496CF7D3.809@gmail.com> Message-ID: Nick Coghlan gmail.com> writes: > > I'd say start with the private version in IO lib, and then if there is > demand consider moving it to abstract.h (but, like Martin, I don't > expect such demand to ever develop). Ok, I'm gonna do this. Thanks Antoine. From victor.stinner at haypocalc.com Tue Jan 13 22:47:52 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 13 Jan 2009 22:47:52 +0100 Subject: [Python-Dev] Add Py_off_t and related APIs? In-Reply-To: <496CFA98.7030800@v.loewis.de> References: <496CFA98.7030800@v.loewis.de> Message-ID: <200901132247.52176.victor.stinner@haypocalc.com> Le Tuesday 13 January 2009 21:33:28 Martin v. L?wis, vous avez ?crit?: > I would do this through a converter function (O&), but yes, > making it private to the io library sounds about right. Who > else would benefit from it? On Linux, mmap() prototype is: void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset); mmapmodule.c uses "Py_ssize_t" type and _GetMapSize() private function to convert the long integer to the Py_ssize_t type. -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From victor.stinner at haypocalc.com Tue Jan 13 23:24:52 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Tue, 13 Jan 2009 23:24:52 +0100 Subject: [Python-Dev] Add Py_off_t and related APIs? 
In-Reply-To: <200901132247.52176.victor.stinner@haypocalc.com> References: <496CFA98.7030800@v.loewis.de> <200901132247.52176.victor.stinner@haypocalc.com> Message-ID: <200901132324.52805.victor.stinner@haypocalc.com> Le Tuesday 13 January 2009 22:47:52 Victor Stinner, vous avez ?crit?: > Le Tuesday 13 January 2009 21:33:28 Martin v. L?wis, vous avez ?crit?: > > I would do this through a converter function (O&), but yes, > > making it private to the io library sounds about right. Who > > else would benefit from it? > > On Linux, mmap() prototype is (...) A more complete answer... Current usage of off_t types in Python: (1) mmap.mmap(): [use Py_ssize_t] Use "Py_ssize_t i = PyNumber_AsSsize_t(o, PyExc_OverflowError)". (2) posix.lseek(), posix.ftruncate(), fcnt.lockf(): [off_t, struct flock for fcntl.lockf()] Use PyInt_AsLong(posobj), or "PyLong_Check(posobj)? PyLong_AsLongLong(posobj) : PyInt_AsLong(posobj)" if HAVE_LARGEFILE_SUPPORT. (3) posix.stat(): [struct win32_stat/struct stat] use PyInt_FromLong(st->st_size), or PyLong_FromLongLong((PY_LONG_LONG)st->st_size) if HAVE_LARGEFILE_SUPPORT --- Using >>find /usr/include -name "*.h"|xargs grep -H off_t<< I found: - file operations: * lseek(), truncate(), fseeko(), ftello() * file size in the *stat() structure * (fcntl) lockf(), fadvice(), fallocate() * pread(), pwrite() * async I/O like aio_read() - dirent structure used by readir() - sendfile() - mmap() Some libraries define their own offset type: - [gzip] z_off_t used by gzseek(), gztell() defined as a long - [mysql] my_off_t is unsigned long, or ulonglong size if sizeof(off_t) > 4, os_off_t is off_t - etc. -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From daniel at stutzbachenterprises.com Tue Jan 13 23:42:55 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 13 Jan 2009 16:42:55 -0600 Subject: [Python-Dev] Add Py_off_t and related APIs? 
In-Reply-To: <496CFA98.7030800@v.loewis.de>
References: <496CE674.2040702@v.loewis.de> <496CFA98.7030800@v.loewis.de>
Message-ID:

On Tue, Jan 13, 2009 at 2:33 PM, "Martin v. Löwis" wrote:

> If we start with that, we end up with ParseTuple formats for
> uid_t, gid_t, pid_t, and the other dozen integral types that
> POSIX has invented.
>

Perhaps it would be useful to provide generic support for integer types
that might have different widths on different platforms? e.g.:

uid_t uid = PyNumber_AS_INT_BY_SIZE(number_ob, uid_t);

That way, the core does not need to know about every blah_t type used by
POSIX and extension modules, while offering convenient conversion
functions nonetheless.

-- 
Daniel Stutzbach, Ph.D.
President, Stutzbach Enterprises, LLC
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From martin at v.loewis.de  Tue Jan 13 23:52:45 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Tue, 13 Jan 2009 23:52:45 +0100
Subject: [Python-Dev] Add Py_off_t and related APIs?
In-Reply-To: 
References: <496CE674.2040702@v.loewis.de> <496CFA98.7030800@v.loewis.de>
Message-ID: <496D1B3D.1040801@v.loewis.de>

> Perhaps it would be useful to provide generic support for integer types
> that might have different widths on different platforms? e.g.:
> 
> uid_t uid = PyNumber_AS_INT_BY_SIZE(number_ob, uid_t);
> 
> That way, the core does not need to know about every blah_t type used by
> POSIX and extension modules, while offering convenient conversion
> functions nonetheless.

I don't think that this would be that useful. What might help is support
for parsing arbitrary-sized integers in PyArg_ParseTuple, as this should
typically be the path through which you get the value into the (say) uid
variable.
However, it's still then fairly tricky: is uid_t a signed type or an unsigned type; if unsigned, can I still have negative values (which, for uid_t, I often can), does the platform have uid_t in the first place, and, if not, what other type should I use? and so on. Regards, Martin From dinov at microsoft.com Wed Jan 14 05:24:45 2009 From: dinov at microsoft.com (Dino Viehland) Date: Tue, 13 Jan 2009 20:24:45 -0800 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? Message-ID: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> We had a bug reported that effectively boils down to we?re not swallowing exceptions when list calls __len__ (http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598). We can obviously make the change to catch exceptions here in IronPython even if it seems like a bad idea to me ? But CPython seems to catch not only normal exceptions, but also SystemExit. It seems like there?s been a move away from this so I thought I?d mention it here. I tested it on 2.6.1 and 3.0. import sys class A(object): def __iter__(self): return iter(range(10)) def __len__(self): try: print('exiting') sys.exit(1) except Exception as e: print('can I catch it?', e) list(A()) which prints: exiting [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] From guido at python.org Wed Jan 14 06:21:10 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 13 Jan 2009 21:21:10 -0800 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? In-Reply-To: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> Message-ID: There seems to be an unconditional PyErr_Clear() in _PyObject_LengthHint(). I think that could and should be much more careful; it probably should only ignore AttributeErrors (though there may be unittests to the contrary). 
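[Editor's note] The semantics Guido is describing can be sketched in Python — a hedged illustration of the intended behaviour, not CPython's actual C implementation. Only the error that means "this object has no usable length" is swallowed; everything else, including SystemExit, propagates to the caller:

```python
def length_hint(obj, default=8):
    """Return len(obj) if available, else a default guess.
    Only TypeError (no usable length) is swallowed; exceptions
    like SystemExit or KeyboardInterrupt propagate unchanged."""
    try:
        return len(obj)
    except TypeError:
        return default
```

(At the C level the relevant error is an AttributeError from the missing __len__ slot; at the Python level len() reports that case as TypeError, which is what the sketch catches.)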
On Tue, Jan 13, 2009 at 8:24 PM, Dino Viehland wrote: > We had a bug reported that effectively boils down to we're not swallowing exceptions when list calls __len__ (http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598). > > We can obviously make the change to catch exceptions here in IronPython even if it seems like a bad idea to me ? But CPython seems to catch not only normal exceptions, but also SystemExit. It seems like there's been a move away from this so I thought I'd mention it here. I tested it on 2.6.1 and 3.0. > > import sys > class A(object): > def __iter__(self): return iter(range(10)) > def __len__(self): > try: > print('exiting') > sys.exit(1) > except Exception as e: > print('can I catch it?', e) > > list(A()) > > which prints: > > exiting > [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Wed Jan 14 12:12:05 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 14 Jan 2009 21:12:05 +1000 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? In-Reply-To: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> Message-ID: <496DC885.5070508@gmail.com> Dino Viehland wrote: > We had a bug reported that effectively boils down to we?re not > swallowing exceptions when list calls __len__ > (http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598). > > > We can obviously make the change to catch exceptions here in > IronPython even if it seems like a bad idea to me ? 
But CPython
> seems to catch not only normal exceptions, but also SystemExit. It
> seems like there's been a move away from this so I thought I'd
> mention it here. I tested it on 2.6.1 and 3.0.

I'd agree that CPython appears to be the one misbehaving here, rather
than IronPython.

Opening a new issue at bugs.python.org would be the best way forward.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncoghlan at gmail.com   |   Brisbane, Australia
---------------------------------------------------------------

From kristjan at ccpgames.com  Wed Jan 14 12:23:46 2009
From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=)
Date: Wed, 14 Jan 2009 11:23:46 +0000
Subject: [Python-Dev] socket.create_connection slow
Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local>

Greetings.
I spent the morning trying to find out why the disabled tests in
test_xmlrpc.py ran so slowly on my Vista box.
After much digging, I found that it boiled down to
socket.create_connection() trying to connect to ("localhost", port).
You see, it does a getaddrinfo() and then tries to connect using all the
various addresses it finds until it succeeds.
On Vista, it will return an AF_INET6 entry before the AF_INET one and
try connecting to that. This connect() attempt fails after approximately
one second, after which we proceed to do an immediately successful
connect() call to the AF_INET address.

Now, I did fix this in test_xmlrpc.py by just specifying the loopback
address, but I wonder if this might not be a problem in general?
I can think of two things to make this better:
1) Make sure that AF_INET addresses are tried first in
socket.create_connection()
2) Have the SocketServer create a listening socket for each address
family by default.
Any thoughts?
K
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Wed Jan 14 12:34:01 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 14 Jan 2009 11:34:01 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?socket=2Ecreate=5Fconnection_slow?= References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> Message-ID: Kristj?n Valur J?nsson ccpgames.com> writes: > > On Vista, it will return an AF_INET6 entry before the > AF_INET one and try connection to that. ?This connect() attemt fails after > approximately one second, after which we proceed to do an immediately > successful connect() call to the AF_INET address. > > Now, I did fix this in test_xmlrpc.py by just speficying the > loopback address, but I wonder if this might not be a problem in general? Is the fix ok? What if the user wanted to connect to an XMLRPC server using IPv6? > 2)????? > Have the SocketServer create a listening socket for > each address family by default. I don't see how that fixes the root of the problem, not all XMLRPC servers are implemented using SocketServer. From kmtracey at gmail.com Wed Jan 14 13:11:04 2009 From: kmtracey at gmail.com (Karen Tracey) Date: Wed, 14 Jan 2009 07:11:04 -0500 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? In-Reply-To: <496DC885.5070508@gmail.com> References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> <496DC885.5070508@gmail.com> Message-ID: On Wed, Jan 14, 2009 at 6:12 AM, Nick Coghlan wrote: > Dino Viehland wrote: > > We had a bug reported that effectively boils down to we're not > > swallowing exceptions when list calls __len__ > > ( > http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598 > ). > > > > > > We can obviously make the change to catch exceptions here in > > IronPython even if it seems like a bad idea to me ? But CPython > > seems to catch not only normal exceptions, but also SystemExit. 
It > > seems like there's been a move away from this so I thought I'd > > mention it here. I tested it on 2.6.1 and 3.0. > > I'd agree that CPython appears to be the one misbehaving here, rather > than IronPython. > > Opening a new issue at bugs.python.org would be the best way forward. > There is already a bug for this, I believe: http://bugs.python.org/issue1242657 Karen -------------- next part -------------- An HTML attachment was scrubbed... URL: From victor.stinner at haypocalc.com Wed Jan 14 13:46:18 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 14 Jan 2009 13:46:18 +0100 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> Message-ID: <200901141346.18407.victor.stinner@haypocalc.com> Hi, Le Wednesday 14 January 2009 12:23:46 Kristj?n Valur J?nsson, vous avez ?crit?: > socket.create_connection() trying to connect to ("localhost", port) > (...) > return an AF_INET6 entry before the AF_INET one and try connection > to that. This connect() attemt fails after approximately one second, > after which we proceed to do an immediately successful connect() call > to the AF_INET address. This is the normal behaviour of dual stack (IPv4+IPv6): IPv6 is tried before IPv4. SocketServer uses AF_INET by default, so the "IPv6 port" is closed on your host. Why does it take so long to try to connect to the IPv6 port? On Linux, it's immediate: ---- $ time nc6 ::1 8080 nc6: unable to connect to address ::1, service 8080 real 0m0.023s user 0m0.000s sys 0m0.008s ---- On my host (Ubuntu Gutsy), "localhost" name has only an IPv4 address. The address "::1" is "ip6-localhost" or "ip6-loopback". You should check why the connect() to IPv6 is so long to raise an error. About the test: since SocketServer address family is constant (IPv4), you can force IPv4 for the client. 
-- Victor Stinner aka haypo http://www.haypocalc.com/blog/ From kristjan at ccpgames.com Wed Jan 14 14:31:42 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Wed, 14 Jan 2009 13:31:42 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D78829A6@exchis.ccp.ad.local> The fix is for the unittest only, where a server is started on a separate thread using Ipv4 on localhost. Now, the problem I was describing in the mail isn't specific to xmlrpc, but for Anyone that wants to use socket.create_connection() to create a stream socket to a host whose name has both an Ipv4 and Ipv6 address. Unless the host is listening on its Ipv6 port, an unsuccessful connection attempt will first be made, taking approximately one second. Kristj?n -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Antoine Pitrou Sent: 14. jan?ar 2009 11:34 To: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow Kristj?n Valur J?nsson ccpgames.com> writes: > > On Vista, it will return an AF_INET6 entry before the > AF_INET one and try connection to that. ?This connect() attemt fails after > approximately one second, after which we proceed to do an immediately > successful connect() call to the AF_INET address. > > Now, I did fix this in test_xmlrpc.py by just speficying the > loopback address, but I wonder if this might not be a problem in general? Is the fix ok? What if the user wanted to connect to an XMLRPC server using IPv6? > 2)????? > Have the SocketServer create a listening socket for > each address family by default. I don't see how that fixes the root of the problem, not all XMLRPC servers are implemented using SocketServer. 
_______________________________________________
Python-Dev mailing list
Python-Dev at python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com

From drkjam at gmail.com  Wed Jan 14 14:54:26 2009
From: drkjam at gmail.com (DrKJam)
Date: Wed, 14 Jan 2009 13:54:26 +0000
Subject: [Python-Dev] Availability of IPv6 support in the socket module
Message-ID: <538a660a0901140554h251cd333q331fc30ce5f4172a@mail.gmail.com>

Are there any current plans for 2.7/3.1 to have inet_pton() and inet_ntop()
made available via the socket module?

Not sure how feasible or difficult doing this would be across all supported
Python platforms, but it would certainly be useful, especially now there is
talk about adding IPv4/IPv6 address manipulation support to the standard
library (http://bugs.python.org/issue3959).

This would provide handy C-level speed-ups to IPv6 operations on
interpreters where the socket module is available. Google App Engine's
Python interpreter is one place where I believe inet_ntoa/aton and possibly
inet_ntop/pton are not and will not be made available (by design). In such
cases, a library could easily resort to ubiquitous (slightly slower)
pure-Python fallbacks.

Also, does anyone have a feeling for how available the AF_INET6 constant
is across all Python's many supported platforms?

Thanks,

David Moss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fuzzyman at voidspace.org.uk Wed Jan 14 15:31:02 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 14 Jan 2009 14:31:02 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <200901141346.18407.victor.stinner@haypocalc.com> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> Message-ID: <496DF726.1050209@voidspace.org.uk> Victor Stinner wrote: > Hi, > > Le Wednesday 14 January 2009 12:23:46 Kristj?n Valur J?nsson, vous avez > ?crit : > >> socket.create_connection() trying to connect to ("localhost", port) >> (...) >> return an AF_INET6 entry before the AF_INET one and try connection >> to that. This connect() attemt fails after approximately one second, >> after which we proceed to do an immediately successful connect() call >> to the AF_INET address. >> > > This is the normal behaviour of dual stack (IPv4+IPv6): IPv6 is tried before > IPv4. SocketServer uses AF_INET by default, so the "IPv6 port" is closed on > your host. Why does it take so long to try to connect to the IPv6 port? On > Linux, it's immediate: > ---- > $ time nc6 ::1 8080 > nc6: unable to connect to address ::1, service 8080 > > real 0m0.023s > user 0m0.000s > sys 0m0.008s > ---- > > On my host (Ubuntu Gutsy), "localhost" name has only an IPv4 address. The > address "::1" is "ip6-localhost" or "ip6-loopback". > > You should check why the connect() to IPv6 is so long to raise an error. About > the test: since SocketServer address family is constant (IPv4), you can force > IPv4 for the client. > > This is something of a bugbear on Vista in general. Doing local web-development with localhost can be really painful until you realise that switching to 127.0.0.1 solves the problem... 
Michael From kristjan at ccpgames.com Wed Jan 14 15:35:58 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Wed, 14 Jan 2009 14:35:58 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <200901141346.18407.victor.stinner@haypocalc.com> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> I have no idea why the connect refusal takes so long. Maybe it's a vista thing? from socket import * socket(AF_INET6).connect(("::1", 8080)) takes about one second to report active refusal. But so does an IPv4 connect. Maybe it is some kind of DOS attack throttling? I couldn't find any info. I've already asked the client in the test to use IPV4 by specifying the connection address as an IPv4 tuple ("http://127.0.0.1:..."). I see no other way to do it without extensive subclassing because the HTTPConnection() class uses socket.create_connection(). Cheers, Kristj?n -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Victor Stinner Sent: 14. jan?ar 2009 12:46 To: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow Hi, Le Wednesday 14 January 2009 12:23:46 Kristj?n Valur J?nsson, vous avez ?crit?: > socket.create_connection() trying to connect to ("localhost", port) > (...) > return an AF_INET6 entry before the AF_INET one and try connection > to that. This connect() attemt fails after approximately one second, > after which we proceed to do an immediately successful connect() call > to the AF_INET address. This is the normal behaviour of dual stack (IPv4+IPv6): IPv6 is tried before IPv4. SocketServer uses AF_INET by default, so the "IPv6 port" is closed on your host. Why does it take so long to try to connect to the IPv6 port? 
On Linux, it's immediate: ---- $ time nc6 ::1 8080 nc6: unable to connect to address ::1, service 8080 real 0m0.023s user 0m0.000s sys 0m0.008s ---- On my host (Ubuntu Gutsy), "localhost" name has only an IPv4 address. The address "::1" is "ip6-localhost" or "ip6-loopback". You should check why the connect() to IPv6 is so long to raise an error. About the test: since SocketServer address family is constant (IPv4), you can force IPv4 for the client. -- Victor Stinner aka haypo http://www.haypocalc.com/blog/ _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From billy.earney at gmail.com Wed Jan 14 16:42:58 2009 From: billy.earney at gmail.com (Billy Earney) Date: Wed, 14 Jan 2009 09:42:58 -0600 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> Message-ID: <496e0805.85c2f10a.20f7.ffff86c6@mx.google.com> This may be way out on a limb, but could it be a reverse lookup issue? -----Original Message----- From: python-dev-bounces+billy.earney=gmail.com at python.org [mailto:python-dev-bounces+billy.earney=gmail.com at python.org] On Behalf Of Kristj?n Valur J?nsson Sent: Wednesday, January 14, 2009 8:36 AM To: Victor Stinner; python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow I have no idea why the connect refusal takes so long. Maybe it's a vista thing? from socket import * socket(AF_INET6).connect(("::1", 8080)) takes about one second to report active refusal. But so does an IPv4 connect. Maybe it is some kind of DOS attack throttling? I couldn't find any info. 
I've already asked the client in the test to use IPV4 by specifying the connection address as an IPv4 tuple ("http://127.0.0.1:..."). I see no other way to do it without extensive subclassing because the HTTPConnection() class uses socket.create_connection(). Cheers, Kristj?n -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Victor Stinner Sent: 14. jan?ar 2009 12:46 To: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow Hi, Le Wednesday 14 January 2009 12:23:46 Kristj?n Valur J?nsson, vous avez ?crit : > socket.create_connection() trying to connect to ("localhost", port) > (...) > return an AF_INET6 entry before the AF_INET one and try connection > to that. This connect() attemt fails after approximately one second, > after which we proceed to do an immediately successful connect() call > to the AF_INET address. This is the normal behaviour of dual stack (IPv4+IPv6): IPv6 is tried before IPv4. SocketServer uses AF_INET by default, so the "IPv6 port" is closed on your host. Why does it take so long to try to connect to the IPv6 port? On Linux, it's immediate: ---- $ time nc6 ::1 8080 nc6: unable to connect to address ::1, service 8080 real 0m0.023s user 0m0.000s sys 0m0.008s ---- On my host (Ubuntu Gutsy), "localhost" name has only an IPv4 address. The address "::1" is "ip6-localhost" or "ip6-loopback". You should check why the connect() to IPv6 is so long to raise an error. About the test: since SocketServer address family is constant (IPv4), you can force IPv4 for the client. 
-- Victor Stinner aka haypo http://www.haypocalc.com/blog/ _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/billy.earney%40gmail.com From drkjam at gmail.com Wed Jan 14 16:50:10 2009 From: drkjam at gmail.com (DrKJam) Date: Wed, 14 Jan 2009 15:50:10 +0000 Subject: [Python-Dev] Availability of IPv6 support in the socket module In-Reply-To: <538a660a0901140554h251cd333q331fc30ce5f4172a@mail.gmail.com> References: <538a660a0901140554h251cd333q331fc30ce5f4172a@mail.gmail.com> Message-ID: <538a660a0901140750p40e66fa1g4c75a881401cfd16@mail.gmail.com> 2009/1/14 DrKJam > Are there any current plans for 2.7/3.1 to have inet_pton() and inet_ntop() > made available via the socket module? > > Not sure how feasible or difficult doing this would be across all support > Python platforms but it would certainly be useful, especially now there is > talk about adding IPv4/IPv6 address manipulation support to the standard > library (http://bugs.python.org/issue3959). > > This would provide handy C-level speed ups to IPv6 operations on > interpreters where the socket module is available. Google App Engine's > Python interpreter is one place where I believe inet_ntoa/aton and possibly > inet_ntop/pton are not and will not be made available (by design). In such > cases, a library could easily resort to ubiquitous (slightly slower) > pure-Python fallbacks. > > Also, does anyone have a feeling for how available thet AF_INET6 constant > is is across all Python's many supported platforms? > > Thanks, > > David Moss > A real RTFM moment. 
I'm using Windows and I see Linux has had this support since 2.3 or earlier. Please ignore. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tseaver at palladion.com Wed Jan 14 16:52:29 2009 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 14 Jan 2009 10:52:29 -0500 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <496DF726.1050209@voidspace.org.uk> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <496DF726.1050209@voidspace.org.uk> Message-ID: <496E0A3D.6090202@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Michael Foord wrote: > Victor Stinner wrote: >> Hi, >> >> Le Wednesday 14 January 2009 12:23:46 Kristj?n Valur J?nsson, vous avez >> ?crit : >> >>> socket.create_connection() trying to connect to ("localhost", port) >>> (...) >>> return an AF_INET6 entry before the AF_INET one and try connection >>> to that. This connect() attemt fails after approximately one second, >>> after which we proceed to do an immediately successful connect() call >>> to the AF_INET address. >>> >> This is the normal behaviour of dual stack (IPv4+IPv6): IPv6 is tried before >> IPv4. SocketServer uses AF_INET by default, so the "IPv6 port" is closed on >> your host. Why does it take so long to try to connect to the IPv6 port? On >> Linux, it's immediate: >> ---- >> $ time nc6 ::1 8080 >> nc6: unable to connect to address ::1, service 8080 >> >> real 0m0.023s >> user 0m0.000s >> sys 0m0.008s >> ---- >> >> On my host (Ubuntu Gutsy), "localhost" name has only an IPv4 address. The >> address "::1" is "ip6-localhost" or "ip6-loopback". >> >> You should check why the connect() to IPv6 is so long to raise an error. About >> the test: since SocketServer address family is constant (IPv4), you can force >> IPv4 for the client. >> >> > This is something of a bugbear on Vista in general. 
Doing local > web-development with localhost can be really painful until you realise > that switching to 127.0.0.1 solves the problem... It barfs on Macs as well: indeed, it is worse, because the connection just fails there, rather than trying IPv6 and then falling back to IPv4. For instance, tunneling a connection over SSH to a Mac box via '-L 9999:localhost:9999' will fail to connect at all, unless the server is listening on IPv6. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFJbgo9+gerLs4ltQ4RAvK9AKCfWhQx7ntw+sUNK7FCPU+Kb9jp5QCdEqCu 9BXvzTgBKipSCtA3SdydqjI= =tYDj -----END PGP SIGNATURE----- From kristjan at ccpgames.com Wed Jan 14 17:01:08 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Wed, 14 Jan 2009 16:01:08 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <496e0805.85c2f10a.20f7.ffff86c6@mx.google.com> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496e0805.85c2f10a.20f7.ffff86c6@mx.google.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882A67@exchis.ccp.ad.local> Hardly. Successful connects complete in a jiffy, only actively refused ones take a second to do so. I suspect some michief in the vista tcp stack. -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Billy Earney Sent: 14. jan?ar 2009 15:43 To: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow This may be way out on a limb, but could it be a reverse lookup issue? 
From python at rcn.com Wed Jan 14 19:11:47 2009 From: python at rcn.com (Raymond Hettinger) Date: Wed, 14 Jan 2009 10:11:47 -0800 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> Message-ID: <1B8D21FC3A214F2AA5A0D6375406F676@RaymondLaptop1> _PyObject_LengthHint() is specified to never fail. If any exception occurs along the way, it returns a default value. In the context of checking for length hints from an iterator, this seems reasonable to me. If you want this changed, I can use a negative return value for other than an attribute error, and modify the calling code to handle the exception. To me this isn't worth making the code slower and more complex. But I can also see wanting to catch a SystemError at any possible step. I presume this same issue occurs everywhere the C API has a *this never fails* specification so that we have simpler, faster calling code at the expense of being able to raise a SystemError in every possible piece of code. Raymond From: "Guido van Rossum" > There seems to be an unconditional PyErr_Clear() in > _PyObject_LengthHint(). I think that could and should be much more > careful; it probably should only ignore AttributeErrors (though there > may be unittests to the contrary). > > On Tue, Jan 13, 2009 at 8:24 PM, Dino Viehland wrote: >> We had a bug reported that effectively boils down to we're not swallowing exceptions when list calls __len__ >> (http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598). >> >> We can obviously make the change to catch exceptions here in IronPython even if it seems like a bad idea to me ? But CPython >> seems to catch not only normal exceptions, but also SystemExit. It seems like there's been a move away from this so I thought >> I'd mention it here. I tested it on 2.6.1 and 3.0. 
>> >> import sys >> class A(object): >> def __iter__(self): return iter(range(10)) >> def __len__(self): >> try: >> print('exiting') >> sys.exit(1) >> except Exception as e: >> print('can I catch it?', e) >> >> list(A()) >> >> which prints: >> >> exiting >> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >> >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/python%40rcn.com > From guido at python.org Wed Jan 14 19:19:34 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Jan 2009 10:19:34 -0800 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? In-Reply-To: <1B8D21FC3A214F2AA5A0D6375406F676@RaymondLaptop1> References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> <1B8D21FC3A214F2AA5A0D6375406F676@RaymondLaptop1> Message-ID: On Wed, Jan 14, 2009 at 10:11 AM, Raymond Hettinger wrote: > _PyObject_LengthHint() is specified to never fail. > If any exception occurs along the way, it returns a > default value. In the context of checking for length > hints from an iterator, this seems reasonable to me. > > If you want this changed, I can use a negative return > value for other than an attribute error, and modify > the calling code to handle the exception. > To me this isn't worth making the code slower and > more complex. But I can also see wanting to catch > a SystemError at any possible step. It has the potential of masking errors, and thus I'd like to see it fixed. 
> I presume this same issue occurs everywhere the C API > has a *this never fails* specification so that we have > simpler, faster calling code at the expense of being able > to raise a SystemError in every possible piece of code. "This never fails" C APIs that invoke Python code (or e.g. allocate memory) are not supposed to exist in CPython, for the reason above. There used to be several but we gradually killed them all. I'm sorry I wasn't involved more deeply in the review of this feature, I would have warned about it. > Raymond > > > > > > From: "Guido van Rossum" >> >> There seems to be an unconditional PyErr_Clear() in >> _PyObject_LengthHint(). I think that could and should be much more >> careful; it probably should only ignore AttributeErrors (though there >> may be unittests to the contrary). >> >> On Tue, Jan 13, 2009 at 8:24 PM, Dino Viehland >> wrote: >>> >>> We had a bug reported that effectively boils down to we're not swallowing >>> exceptions when list calls __len__ >>> (http://www.codeplex.com/WorkItem/View.aspx?ProjectName=IronPython&WorkItemId=20598). >>> >>> We can obviously make the change to catch exceptions here in IronPython >>> even if it seems like a bad idea to me ? But CPython seems to catch not >>> only normal exceptions, but also SystemExit. It seems like there's been a >>> move away from this so I thought I'd mention it here. I tested it on 2.6.1 >>> and 3.0. 
>>>
>>> import sys
>>> class A(object):
>>>     def __iter__(self): return iter(range(10))
>>>     def __len__(self):
>>>         try:
>>>             print('exiting')
>>>             sys.exit(1)
>>>         except Exception as e:
>>>             print('can I catch it?', e)
>>>
>>> list(A())
>>>
>>> which prints:
>>>
>>> exiting
>>> [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>>
>>> _______________________________________________
>>> Python-Dev mailing list
>>> Python-Dev at python.org
>>> http://mail.python.org/mailman/listinfo/python-dev
>>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org
>>
>> --
>> --Guido van Rossum (home page: http://www.python.org/~guido/)
>> _______________________________________________
>> Python-Dev mailing list
>> Python-Dev at python.org
>> http://mail.python.org/mailman/listinfo/python-dev
>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/python%40rcn.com

--
--Guido van Rossum (home page: http://www.python.org/~guido/)

From martin at v.loewis.de  Wed Jan 14 20:13:56 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Wed, 14 Jan 2009 20:13:56 +0100
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local>
Message-ID: <496E3974.4050408@v.loewis.de>

> On Vista, it will return an AF_INET6 entry before the AF_INET one and
> try connecting to that. This connect() attempt fails after approximately
> one second, after which we proceed to do an immediately successful
> connect() call to the AF_INET address.

Can you find out why it takes a second? That should not happen; it
should fail immediately (assuming no server is listening).

> Now, I did fix this in test_xmlrpc.py by just specifying the loopback
> address, but I wonder if this might not be a problem in general?

Yes, but possibly with your operating system or installation.
We need to understand what is happening first before trying to fix it.

Regards,
Martin

From martin at v.loewis.de  Wed Jan 14 20:14:58 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Wed, 14 Jan 2009 20:14:58 +0100
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78829A6@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <930F189C8A437347B80DF2C156F7EC7F04D78829A6@exchis.ccp.ad.local>
Message-ID: <496E39B2.7040102@v.loewis.de>

> Anyone that wants to use socket.create_connection() to create a
> stream socket to a host whose name has both an Ipv4 and Ipv6 address.
> Unless the host is listening on its Ipv6 port, an unsuccessful
> connection attempt will first be made, taking approximately one
> second.

Again, it is a bug in the system or the installation that it takes a
second. There should be no timeout, but an immediate error.

Regards,
Martin

From martin at v.loewis.de  Wed Jan 14 20:18:00 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Wed, 14 Jan 2009 20:18:00 +0100
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <496E0A3D.6090202@palladion.com>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <496DF726.1050209@voidspace.org.uk> <496E0A3D.6090202@palladion.com>
Message-ID: <496E3A68.1080906@v.loewis.de>

> It barfs on Macs as well: indeed, it is worse, because the connection
> just fails there, rather than trying IPv6 and then falling back to IPv4.

That depends on the application. Some applications fall back (as they
should, if they added their support for IPv6 correctly), some don't.

> For instance, tunneling a connection over SSH to a Mac box via '-L
> 9999:localhost:9999' will fail to connect at all, unless the server is
> listening on IPv6.

That's actually a bug in sshd.
It should fall back to connecting through v4, but doesn't. It's not system specific; sshd as included in Debian has the same bug. Regards, Martin From martin at v.loewis.de Wed Jan 14 20:28:53 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 14 Jan 2009 20:28:53 +0100 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> Message-ID: <496E3CF5.3080306@v.loewis.de> > I have no idea why the connect refusal takes so long. Can you run wireshark, to find out whether it's sending out any requests that don't get responses? Could it be that your firewall is discarding the connection request? (rather than sending an ICMP destination unreachable back) Anything added to the event log? Regards, Martin From python at rcn.com Wed Jan 14 20:59:41 2009 From: python at rcn.com (Raymond Hettinger) Date: Wed, 14 Jan 2009 11:59:41 -0800 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> <1B8D21FC3A214F2AA5A0D6375406F676@RaymondLaptop1> Message-ID: >> If you want this changed, I can use a negative return >> value for other than an attribute error, and modify >> the calling code to handle the exception. >> To me this isn't worth making the code slower and >> more complex. But I can also see wanting to catch >> a SystemError at any possible step. > It has the potential of masking errors, and thus I'd like to see it fixed. No problem. I'll take care of this one. Since it's an internal API, it should be easy. 
Raymond From ncoghlan at gmail.com Wed Jan 14 21:21:15 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 15 Jan 2009 06:21:15 +1000 Subject: [Python-Dev] should list's call to __len__ swallow SystemExit? In-Reply-To: References: <350E7D38B6D819428718949920EC23555704DFBBC0@NA-EXMSG-C102.redmond.corp.microsoft.com> <496DC885.5070508@gmail.com> Message-ID: <496E493B.5060303@gmail.com> Karen Tracey wrote: > There is already a bug for this, I believe: > > http://bugs.python.org/issue1242657 Thanks. Raymond, based on your other message, I kicked that issue in your direction. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From exarkun at divmod.com Wed Jan 14 21:19:30 2009 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Wed, 14 Jan 2009 15:19:30 -0500 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <20090112180928.2E3471E404C@bag.python.org> Message-ID: <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> On Mon, 12 Jan 2009 19:09:28 +0100 (CET), "kristjan.jonsson" wrote: >Author: kristjan.jonsson >Date: Mon Jan 12 19:09:27 2009 >New Revision: 68547 > >Log: >Add tests for invalid format specifiers in strftime, and for handling of invalid file descriptors in the os module. > >Modified: > python/trunk/Lib/test/test_datetime.py > python/trunk/Lib/test/test_os.py Several of the tests added to test_os.py are invalid and fail. 
Jean-Paul

From martin at v.loewis.de  Wed Jan 14 21:31:04 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Wed, 14 Jan 2009 21:31:04 +0100
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local>
Message-ID: <496E4B88.4060309@v.loewis.de>

> I have no idea why the connect refusal takes so long.
> Maybe it's a vista thing?

I've looked into this further. It doesn't just happen for localhost,
but also for remote hosts, and not just for IPv6, but also for IPv4,
and not just for Vista, but also for XP. The problem is this:

1. Vista sends SYN to target machine (say, through v6)
2. Target machine has no port open, and responds with RST,ACK
3. Vista waits 0.5s
4. Vista sends another SYN to target machine
5. Target machine responds with another RST,ACK
6. Vista waits another 0.5s
7. Vista retries a third time, again getting no connection
8. Vista gives up, having spent 1s in trying to establish a connection
   to a remote port where the remote system confirmed a second ago that
   the connection is refused.

I have not found documentation for this feature of the Windows TCP
stack, yet.
There is the TcpMaxConnectRetransmissions parameter, but this
supposedly affects requests without responses (and supposedly also
starts off with 3s).

Regards,
Martin

From kristjan at ccpgames.com  Wed Jan 14 22:40:19 2009
From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=)
Date: Wed, 14 Jan 2009 21:40:19 +0000
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <496E4B88.4060309@v.loewis.de>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de>
Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local>

Aha, thanks, since my wireshark wasn't working.
I boiled a few pints of water (thanks, Google) and came up with this:

http://support.microsoft.com/kb/175523

Here is the summary:
Note that with other implementations of TCP, such as those commonly
found in many UNIX systems, the connect() fails immediately upon the
receipt of the first ACK/RST packet, resulting in the awareness of an
error very quickly. However, this behavior is not specified in the
RFCs and is left to each implementation to decide. The approach of
Microsoft platforms is that the system administrator has the freedom
to adjust TCP performance-related settings to their own tastes, namely
the maximum retry that defaults to 3. The advantage of this is that
the service you're trying to reach may have temporarily shut down and
might resurface in between SYN attempts. In this case, it's convenient
that the connect() waited long enough to obtain a connection since the
service really was there.

Yet another "undefined" thing affecting us, Martin.

Kristján

-----Original Message-----
From: "Martin v. Löwis" [mailto:martin at v.loewis.de]
Sent: 14.
jan?ar 2009 20:31 To: Kristj?n Valur J?nsson Cc: Victor Stinner; python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow > I have no idea why the connect refusal takes so long. > Maybe it's a vista thing? I've looked into this further. It doesn't just happen for localhost, but also for remote hosts, and not just for IPv6, but also for IPv4, and not just for Vista, but also for XP. The problem is this: 1. Vista sends SYN to target machine (say, through v6) 2. Target machine has no port open, and responds with RST,ACK 3. Vista waits 0.5s 4. Vista sends another SYN to target machine 5. Target machine responds with another RST,ACK 6. Vista waits another 0.5s 7. Vista retries a third time, again getting no connection 8. Vista gives up, having spend 1s in trying to establish a connection to a remote port where the remote system has confirmed that the connection is refused already a second ago. I have not found documentation for this feature of the Windows TCP stack, yet. 
There is the TcpMaxConnectRetransmissions parameter, but this supposed affects requests without responses (and supposed also starts of with 3s) Regards, Martin From kristjan at ccpgames.com Wed Jan 14 22:52:34 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Wed, 14 Jan 2009 21:52:34 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882B0D@exchis.ccp.ad.local> And Microsoft, realizing their problem , came up with this: http://msdn.microsoft.com/en-us/library/bb513665(VS.85).aspx K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Kristj?n Valur J?nsson Sent: 14. jan?ar 2009 21:40 To: "Martin v. L?wis" Cc: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow Aha, thanks, since my wireshark wasn't working. I boiled a few pints of water (thanks, Google) and came up with this: http://support.microsoft.com/kb/175523 Here is the summary: Note that with other implementations of TCP, such as those commonly found in many UNIX systems, the connect() fails immediately upon the receipt of the first ACK/RST packet, resulting in the awareness of an error very quickly. However, this behavior is not specified in the RFCs and is left to each implementation to decide. The approach of Microsoft platforms is that the system administrator has the freedom to adjust TCP performance-related settings to their own tastes, namely the maximum retry that defaults to 3. 
The advantage of this is that the service you're trying to reach may have temporarily shut down and might resurface in between SYN attempts. In this case, it's convenient that the connect() waited long enough to obtain a connection since the service really was there. Yet another "undefined" thing affecting us, Martin. Kristj?n -----Original Message----- From: "Martin v. L?wis" [mailto:martin at v.loewis.de] Sent: 14. jan?ar 2009 20:31 To: Kristj?n Valur J?nsson Cc: Victor Stinner; python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow > I have no idea why the connect refusal takes so long. > Maybe it's a vista thing? I've looked into this further. It doesn't just happen for localhost, but also for remote hosts, and not just for IPv6, but also for IPv4, and not just for Vista, but also for XP. The problem is this: 1. Vista sends SYN to target machine (say, through v6) 2. Target machine has no port open, and responds with RST,ACK 3. Vista waits 0.5s 4. Vista sends another SYN to target machine 5. Target machine responds with another RST,ACK 6. Vista waits another 0.5s 7. Vista retries a third time, again getting no connection 8. Vista gives up, having spend 1s in trying to establish a connection to a remote port where the remote system has confirmed that the connection is refused already a second ago. I have not found documentation for this feature of the Windows TCP stack, yet. 
There is the TcpMaxConnectRetransmissions parameter, but this supposed affects requests without responses (and supposed also starts of with 3s) Regards, Martin _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From eric at trueblade.com Wed Jan 14 23:14:25 2009 From: eric at trueblade.com (Eric Smith) Date: Wed, 14 Jan 2009 17:14:25 -0500 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> Message-ID: <496E63C1.6010102@trueblade.com> Kristj?n Valur J?nsson wrote: > Aha, thanks, since my wireshark wasn't working. > I boiled a few pints of water (thanks, Google) and came up with this: > > http://support.microsoft.com/kb/175523 > > Here is the summary: > Note that with other implementations of TCP, such as those commonly found in many UNIX systems, the connect() fails immediately upon the receipt of the first ACK/RST packet, resulting in the awareness of an error very quickly. However, this behavior is not specified in the RFCs and is left to each implementation to decide. The approach of Microsoft platforms is that the system administrator has the freedom to adjust TCP performance-related settings to their own tastes, namely the maximum retry that defaults to 3. The advantage of this is that the service you're trying to reach may have temporarily shut down and might resurface in between SYN attempts. In this case, it's convenient that the connect() waited long enough to obtain a connection since the service really was there. 
> Yet another "undefined" thing affecting us, Martin.

I know it's pointless to express my shock here, but I can't resist.
It's truly amazing to me that they'd delay the connect call's failure
for a second by default, in hopes that the other end might come back up
between SYN's. How often could that possibly happen?

Eric.

From python at rcn.com  Wed Jan 14 23:14:40 2009
From: python at rcn.com (Raymond Hettinger)
Date: Wed, 14 Jan 2009 14:14:40 -0800
Subject: [Python-Dev] Support for the Haiku OS
Message-ID: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1>

Martin closed a patch http://bugs.python.org/issue4933 for adding
support so that Python runs on Haiku. The theory is that we don't want
to support minority operating systems. My view is that we should
support those systems to the extent that someone like the OP is
willing to maintain the handful of deltas needed to get all tests to
pass (the OP's comments indicate that very few customizations are
necessary).

Also, I think there is some merit to supporting other open source
projects that are disadvantaged only because they are small. Choices
to not support them imply a choice to make their disadvantage a
self-fulfilling prophecy.

Raymond

From steve at holdenweb.com  Wed Jan 14 23:23:16 2009
From: steve at holdenweb.com (Steve Holden)
Date: Wed, 14 Jan 2009 17:23:16 -0500
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <496E63C1.6010102@trueblade.com>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> <496E63C1.6010102@trueblade.com>
Message-ID: 

Eric Smith wrote:
> Kristján Valur Jónsson wrote:
>> Aha, thanks, since my wireshark wasn't working.
>> I boiled a few pints of water (thanks, Google) and came up with this: >> >> http://support.microsoft.com/kb/175523 >> >> Here is the summary: >> Note that with other implementations of TCP, such as those commonly >> found in many UNIX systems, the connect() fails immediately upon the >> receipt of the first ACK/RST packet, resulting in the awareness of an >> error very quickly. However, this behavior is not specified in the >> RFCs and is left to each implementation to decide. The approach of >> Microsoft platforms is that the system administrator has the freedom >> to adjust TCP performance-related settings to their own tastes, namely >> the maximum retry that defaults to 3. The advantage of this is that >> the service you're trying to reach may have temporarily shut down and >> might resurface in between SYN attempts. In this case, it's convenient >> that the connect() waited long enough to obtain a connection since the >> service really was there. >> >> Yet another "undefined" thing affecting us, Martin. > > I know it's pointless to express my shock here, but I can't resist. It's > truly amazing to me that they'd delay the connect call's failure for a > second by default, in hopes that the other end might come back up > between SYN's. How often could that possibly happen? > When I read it I was tempted to observe they must have been testing Microsoft network services. It is a truly bizarre rationalization of a default that appears to have been taken from DOS-era network client applications. I remember demonstrating the phenomenon on a cli-based Telnet client at least 15 years ago. 
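The difference in behavior is easy to time from Python (a sketch; it assumes nothing is listening on port 1 of the local machine):

```python
import socket
import time

t0 = time.monotonic()
try:
    # Port 1 is assumed closed, so the target answers each SYN with RST.
    socket.create_connection(("127.0.0.1", 1), timeout=5).close()
except OSError:
    pass
elapsed = time.monotonic() - t0
# Unix-like stacks fail the connect almost instantly on the first RST;
# the Windows versions discussed above keep retrying for about a second.
print("connect failed after %.3f seconds" % elapsed)
```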
regards
Steve
--
Steve Holden        +1 571 484 6266   +1 800 494 3119
Holden Web LLC              http://www.holdenweb.com/

From skip at pobox.com  Wed Jan 14 23:31:15 2009
From: skip at pobox.com (skip at pobox.com)
Date: Wed, 14 Jan 2009 16:31:15 -0600
Subject: [Python-Dev] Support for the Haiku OS
In-Reply-To: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1>
References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1>
Message-ID: <18798.26547.535845.514247@montanaro.dyndns.org>

    Raymond> The theory is that we don't want to support minority operation
    Raymond> systems. My view is that we should support those systems to
    Raymond> the extent that someone like the OP is willing to maintain the
    Raymond> handful of deltas needed to get all tests to pass (the OP's
    Raymond> comments indicate that very few customizations are necessary).

+1. I would argue that Haiku OS is probably no more of a minority platform
at this point than OS2/EMX, which continues to be supported, at least at the
level of patches being applied to the source.

Skip

From martin at v.loewis.de  Thu Jan 15 00:47:01 2009
From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=)
Date: Thu, 15 Jan 2009 00:47:01 +0100
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local>
Message-ID: <496E7975.4030102@v.loewis.de>

> Yet another "undefined" thing affecting us, Martin.

I think it's just another case of "Microsoft says it's undefined, even
though the standards clearly specify what the behavior must be, and
Microsoft managed to implement the behavior that not only violates the
specification, but also hurts users of their systems."
See Scott's analysis for why this is a case of such ignorance; I agree
with this analysis, and many other people on the net also agree.

Regards,
Martin

P.S. The sad thing is not that Microsoft makes mistakes; all system
vendors do. The sad thing is that they refuse to acknowledge their
mistakes, and rather chose to persist with the buggy implementation
than risk backwards incompatibilities: somebody might rely on getting
the connection only on the third attempt.

From scott+python-dev at scottdial.com  Thu Jan 15 00:19:53 2009
From: scott+python-dev at scottdial.com (Scott Dial)
Date: Wed, 14 Jan 2009 18:19:53 -0500
Subject: [Python-Dev] socket.create_connection slow
In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local>
References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local>
Message-ID: <496E7319.90707@scottdial.com>

Kristján Valur Jónsson wrote:
> http://support.microsoft.com/kb/175523
>
> Here is the summary:
> Note that with other implementations of TCP, such as those commonly
> found in many UNIX systems, the connect() fails immediately upon the
> receipt of the first ACK/RST packet, resulting in the awareness of an
> error very quickly. However, this behavior is not specified in the
> RFCs and is left to each implementation to decide. The approach of
> Microsoft platforms is that the system administrator has the freedom
> to adjust TCP performance-related settings to their own tastes, namely
> the maximum retry that defaults to 3. The advantage of this is that
> the service you're trying to reach may have temporarily shut down and
> might resurface in between SYN attempts. In this case, it's convenient
> that the connect() waited long enough to obtain a connection since the
> service really was there.
> > Yet another "undefined" thing affecting us, Martin. > I think RFC793 is actually pretty clear in stating that: """ If the receiver [of the RST packet] was in any other state, it aborts the connection and advises the user and goes to the CLOSED state. """ But alas, Microsoft thinks they know better.. A brief search yields a number of threads on mailing lists for proxies having to deal with this "feature". The solution that I see as most viable is temporarily making the sockets nonblocking before the connect() and scheduling our own timeout. The only variation on this is I have seen is to use GetTcpTable() to retrieve the status of the socket to determine the state of the socket (since a timeout would kill connects() that are just slow too..). -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From martin at v.loewis.de Thu Jan 15 01:06:32 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 15 Jan 2009 01:06:32 +0100 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B0D@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> <930F189C8A437347B80DF2C156F7EC7F04D7882B0D@exchis.ccp.ad.local> Message-ID: <496E7E08.3070104@v.loewis.de> > And Microsoft, realizing their problem , came up with this: > http://msdn.microsoft.com/en-us/library/bb513665(VS.85).aspx Dual-stacked sockets are a useful thing to have (so useful that Linux made them the default, despite that the RFC says that the default should be IPV6_V6ONLY). The Python library should make all server sockets dual-stacked if possible. 
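A best-effort version of that suggestion can be sketched in a few lines (the helper name and error handling are illustrative):

```python
import socket

def dual_stack_listener(port):
    # Best-effort dual-stacked listening socket: an AF_INET6 socket
    # with IPV6_V6ONLY cleared, so it accepts both v4 and v6 clients.
    # On systems without the option (e.g. Windows XP, as noted below)
    # the socket simply stays v6-only.
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    if hasattr(socket, "IPV6_V6ONLY"):
        try:
            sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
        except OSError:
            pass  # the stack refused; fall back to v6-only
    sock.bind(("::", port))
    sock.listen(5)
    return sock

server = dual_stack_listener(0)  # port 0: let the OS pick a free port
print(server.getsockname()[:2])
server.close()
```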
Unfortunately: - the socket option is not available on all systems, in particular, it is not available on Windows XP (you need Vista) - you'll see the 1s delay on the client side if the server is not dual-stacked, so if the server "misbehaves", the client has to suffer. Regards, Martin From guido at python.org Thu Jan 15 01:10:26 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 14 Jan 2009 16:10:26 -0800 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: <18798.26547.535845.514247@montanaro.dyndns.org> References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> Message-ID: I'm with Martin. In these days of distributed version control systems, I would think that the effort for the Haiku folks to maintain a branch of Python in their own version control would be minimal. It is likely that for each new Python version that comes out, initially it is broken on Haiku, and then they have to go in and fix it. Doing that in their own version control has the advantage that they don't have to worry about not breaking support for any other minority operating systems, so I expect that all in all the cost will be less for them than if they have to submit these patches to core Python; and since that will definitely be less work for core Python, I would call that a win-win solution. On Wed, Jan 14, 2009 at 2:31 PM, wrote: > > Raymond> The theory is that we don't want to support minority operation > Raymond> systems. My view is that we should support those systems to > Raymond> the extent that someone like the OP is willing to maintain the > Raymond> handful of deltas needed to get all tests to pass (the OP's > Raymond> comments indicate that very few customizations are necessary). > > +1. I would argue that Haiku OS is probably no more of a minority platform > at this point than OS2/EMX, which continues to be supported, at least at the > level if patches being applied to the source. 
--
--Guido van Rossum (home page: http://www.python.org/~guido/)

From skip at pobox.com  Thu Jan 15 05:28:53 2009
From: skip at pobox.com (skip at pobox.com)
Date: Wed, 14 Jan 2009 22:28:53 -0600
Subject: [Python-Dev] stuck with dlopen...
Message-ID: <18798.48005.285151.445565@montanaro.dyndns.org>

I've recently been working on generating C functions on-the-fly which
inline the C code necessary to implement the bytecode in a given Python
function. For example, this bytecode:

    >>> dis.dis(f)
      2           0 LOAD_FAST                0 (a)
                  3 LOAD_CONST               1 (1)
                  6 BINARY_ADD
                  7 RETURN_VALUE

is transformed into this rather boring bit of C code:

    #include "Python.h"
    #include "code.h"
    #include "frameobject.h"
    #include "eval.h"
    #include "opcode.h"
    #include "structmember.h"
    #include "opcode_mini.h"

    PyObject *
    _PyEval_EvalMiniFrameEx(PyFrameObject *f, int throwflag)
    {
        static int jitting = 1;
        PyEval_EvalFrameEx_PROLOG1();
        co = f->f_code;
        PyEval_EvalFrameEx_PROLOG2();
        oparg = 0;
        LOAD_FAST_IMPL(oparg);
        oparg = 1;
        LOAD_CONST_IMPL(oparg);
        BINARY_ADD_IMPL();
        RETURN_VALUE_IMPL();
        PyEval_EvalFrameEx_EPILOG();
    }

The PROLOG1, PROLOG2 and EPILOG macros are just chunks of code from
PyEval_EvalFrameEx. I have the code compiling and linking, and dlopen
and dlsym seem to work, returning apparently valid pointers, but when I
try to call the function I get

    Program received signal EXC_BAD_ACCESS, Could not access memory.
    Reason: KERN_PROTECTION_FAILURE at address: 0x0000000c
    0x0058066d in _PyEval_EvalMiniFrameEx (f=0x230d30, throwflag=0) at MwDLSf.c:17

Line 17 is the PROLOG1 macro. I presume it's probably barfed on the very
first instruction. (This is all on an Intel Mac running Leopard BTW.)
Here are the commands generated to compile and link the C code:

    gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall \
        -Wstrict-prototypes -g -DPy_BUILD_CORE -DNDEBUG \
        -I/Users/skip/src/python/py3k-t/Include \
        -I/Users/skip/src/python/py3k-t -c dTd5cl.c \
        -o /tmp/MwDLSf.o
    gcc -L/opt/local/lib -bundle -undefined dynamic_lookup -g \
        /tmp/dTd5cl.o -L/Users/skip/src/python/py3k-t -lpython3.1 \
        -o /tmp/MwDLSf.so

(It just uses the distutils compiler module to build .so files.) The
.so file looks more-or-less ok:

    % otool -L /tmp/MwDLSf.so
    /tmp/MwDLSf.so:
            /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.3)

though nm doesn't show any undefined _Py* symbols, so I suspect I'm not
linking it correctly. The Python executable was built without
--enable-shared. I've tried building with that config flag, but that
just gives me fits during debugging because it always wants to find
libpython in the installation directory even if I'm running python.exe
from the build directory. Installing is a little tedious because it
relies on a properly functioning interpreter.

dlopen is called very simply:

    handle = dlopen(shared, RTLD_NOW);

I used RTLD_NOW because that's what sys.getdlopenflags() returns. I'm
not calling dlclose for the time being. I'm not exactly sure where I
should go from here. I'd be more than happy to open an item in the
issue tracker. I was hoping to get something a bit closer to working
before doing that though. The failure to properly load the compiled
function makes it pretty much impossible to debug the generated code
beyond what the compiler can tell me.

Any suggestions?
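One way to sanity-check the load-and-resolve step independently of the generated code is ctypes, which performs the same dlopen/dlsym dance under the hood and surfaces dlerror() output as Python exceptions (a generic sketch using the math library, not specific to the .so above):

```python
import ctypes
import ctypes.util

# Loading a known-good shared library and resolving a known symbol
# helps separate "the load/resolve machinery is broken" from "the
# generated code crashes once it is actually called".
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # exactly 1.0
```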
Skip From scottmc2 at gmail.com Thu Jan 15 08:23:04 2009 From: scottmc2 at gmail.com (scott mc) Date: Wed, 14 Jan 2009 23:23:04 -0800 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> Message-ID: On Wed, Jan 14, 2009 at 4:10 PM, Guido van Rossum wrote: > I'm with Martin. In these days of distributed version control systems, > I would think that the effort for the Haiku folks to maintain a branch > of Python in their own version control would be minimal. It is likely > that for each new Python version that comes out, initially it is > broken on Haiku, and then they have to go in and fix it. Doing that in > their own version control has the advantage that they don't have to > worry about not breaking support for any other minority operating > systems, so I expect that all in all the cost will be less for them > than if they have to submit these patches to core Python; and since > that will definitely be less work for core Python, I would call that a > win-win solution. > Guido, Thanks for your feedback on this. We'd be ok with keeping track of Haiku specific patches in the HaikuPorts svn as you suggest. If we come across things we feel can apply to python as a whole and not just Haiku specific then we'll feed just those changes back to python. Ideally we'd get to the point though that each new version of python "will just work" on Haiku. 
-Scott McCreary HaikuPorts From kristjan at ccpgames.com Thu Jan 15 09:41:52 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Thu, 15 Jan 2009 08:41:52 +0000 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <496E7E08.3070104@v.loewis.de> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> <930F189C8A437347B80DF2C156F7EC7F04D7882B0D@exchis.ccp.ad.local> <496E7E08.3070104@v.loewis.de> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882B4B@exchis.ccp.ad.local> Right, there is no way to try to simultaneously connect using ipv4 and ipv6, apparently. Also, the problem with setting the TcpConnectMaxRetries registry entry is that it also affects retries when no ACK is received. This is probably something one doesn't want to mess with. Okay, so do we want to bug MS about this? Clearly it is a performance problem when implementing dual stack clients. K -----Original Message----- From: "Martin v. L?wis" [mailto:martin at v.loewis.de] Sent: 15. jan?ar 2009 00:07 To: Kristj?n Valur J?nsson Cc: python-dev at python.org Subject: Re: [Python-Dev] socket.create_connection slow > And Microsoft, realizing their problem, came up with this: > http://msdn.microsoft.com/en-us/library/bb513665(VS.85).aspx Dual-stacked sockets are a useful thing to have (so useful that Linux made them the default, despite that the RFC says that the default should be IPV6_V6ONLY). The Python library should make all server sockets dual-stacked if possible.
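For the server side, the dual-stack setup described here comes down to clearing one socket option; a sketch, guarded because IPV6_V6ONLY (and an IPv6 stack at all) is not available on every platform:

```python
import socket

def make_dual_stack_server(port=0):
    # Bind an IPv6 socket and clear IPV6_V6ONLY where the platform
    # exposes it, so the same socket also accepts IPv4 connections.
    s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    if hasattr(socket, "IPV6_V6ONLY"):
        s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s.bind(("::", port))
    s.listen(5)
    return s

try:
    srv = make_dual_stack_server()
    print("dual-stack server on port", srv.getsockname()[1])
    srv.close()
except socket.error as exc:  # no IPv6 stack in this environment
    print("IPv6 not available here:", exc)
```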
Unfortunately: - the socket option is not available on all systems, in particular, it is not available on Windows XP (you need Vista) - you'll see the 1s delay on the client side if the server is not dual-stacked, so if the server "misbehaves", the client has to suffer. Regards, Martin From asmodai at in-nomine.org Thu Jan 15 10:00:56 2009 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Thu, 15 Jan 2009 10:00:56 +0100 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> Message-ID: <20090115090056.GI1060@nexus.in-nomine.org> -On [20090115 01:11], Guido van Rossum (guido at python.org) wrote: >I'm with Martin. In these days of distributed version control systems, >I would think that the effort for the Haiku folks to maintain a branch >of Python in their own version control would be minimal. It is likely >that for each new Python version that comes out, initially it is >broken on Haiku, and then they have to go in and fix it. Last time I looked at Haiku and dabbled with it there were some people actively working on POSIX compliance. My only guess right now is that this work is largely complete. In effect that would mean that Python would work out of the box, more or less. So the cost of adding and maintaining it in the main repository should not be a big overhaul or anything. Just as a FYI. :) -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Earth to earth, ashes to ashes, dust to dust... 
From kristjan at ccpgames.com Thu Jan 15 10:13:13 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Thu, 15 Jan 2009 09:13:13 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> Ok, in r68610 I fixed some of this. The strftime test is now just an exercise, since clearly some platforms accept the %e without hesitation. Also, there were errors in two test_os cases. However, these: ====================================================================== ERROR: test_ftruncate (test.test_os.TestInvalidFD) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/test/test_os.py", line 570, in test_ftruncate self.assertRaises(OSError, os.ftruncate, 10, 0) File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/unittest.py", line 345, in failUnlessRaises callableObj(*args, **kwargs) IOError: [Errno 9] Bad file descriptor ====================================================================== FAIL: test_close (test.test_os.TestInvalidFD) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/test/test_os.py", line 542, in helper self.assertRaises(OSError, getattr(os, f), 10) AssertionError: OSError not raised Seem bogus. For ftruncate, an invalid file descriptor really should raise OSError, and close(10) should raise an OSError as well. However, these are just being mapped up from whatever the OS returns, so I suppose I should make the tests more lenient?
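One way to make the tests lenient without dropping them is to hand assertRaises a tuple of exception types, which it accepts; a sketch, also using a just-closed descriptor instead of the hard-coded 10:

```python
import os
import unittest

def make_bad_fd():
    # A descriptor that was open a moment ago and has just been closed
    # is known to be invalid, unlike an arbitrary number like 10.
    fd = os.open(os.devnull, os.O_RDONLY)
    os.close(fd)
    return fd

class TestInvalidFD(unittest.TestCase):
    def test_ftruncate(self):
        # assertRaises takes a tuple, so either exception satisfies the
        # test; on Python 3 IOError is an alias of OSError anyway.
        self.assertRaises((IOError, OSError), os.ftruncate, make_bad_fd(), 0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestInvalidFD)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```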
K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Jean-Paul Calderone Sent: 14. jan?ar 2009 20:20 To: python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py On Mon, 12 Jan 2009 19:09:28 +0100 (CET), "kristjan.jonsson" wrote: >Author: kristjan.jonsson >Date: Mon Jan 12 19:09:27 2009 >New Revision: 68547 > >Log: >Add tests for invalid format specifiers in strftime, and for handling of invalid file descriptors in the os module. > >Modified: > python/trunk/Lib/test/test_datetime.py > python/trunk/Lib/test/test_os.py Several of the tests added to test_os.py are invalid and fail. Jean-Paul _______________________________________________ Python-Dev mailing list Python-Dev at python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/kristjan%40ccpgames.com From zhiyongcui at gmail.com Thu Jan 15 10:45:28 2009 From: zhiyongcui at gmail.com (=?GB2312?B?tN7WvtPC?=) Date: Thu, 15 Jan 2009 17:45:28 +0800 Subject: [Python-Dev] Help In-Reply-To: References: Message-ID: <871a9ede0901150145s4a16a8b3xbf8fb4745a797ce@mail.gmail.com> First of all,I'm so sorry for that my english is so poor that I can't use it freely. Can you tell me how can I install the Python3.0 on my computer with the Red Hat Enterprise 5? Thank you! cuizhiyong 2009-1-15 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skip at pobox.com Thu Jan 15 12:16:47 2009 From: skip at pobox.com (skip at pobox.com) Date: Thu, 15 Jan 2009 05:16:47 -0600 Subject: [Python-Dev] Help In-Reply-To: <871a9ede0901150145s4a16a8b3xbf8fb4745a797ce@mail.gmail.com> References: <871a9ede0901150145s4a16a8b3xbf8fb4745a797ce@mail.gmail.com> Message-ID: <18799.6943.104219.684138@montanaro.dyndns.org> >> Can you tell me how can I install the Python3.0 on my computer with >> the Red Hat Enterprise 5? You should ask this question on one of these three mailing lists: python-list at python.org help at python.org tutor at python.org This mailing list discusses Python development not how to use Python. -- Skip Montanaro - skip at pobox.com - http://smontanaro.dyndns.org/ From solipsis at pitrou.net Thu Jan 15 13:00:22 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 15 Jan 2009 12:00:22 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5BPython-checkins=5D_r68547_-_in_python/t?= =?utf-8?q?runk/Lib/test=3A_test=5Fdatetime=2Epytest=5Fos=2Epy?= References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> Message-ID: Kristj?n Valur J?nsson ccpgames.com> writes: > Seem bogus. > For ftruncate, an invalid filedescriptor really should return OSError, and close(10) should raise an > OSError as well. It seems wrong to assume that 10 is an invalid file descriptor at the time of running the test. IMO you should first open a file descriptor, remember its value and then close it, that way you are reasonably sure that it will be invalid just after. 
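That suggestion can be spelled out in a few lines; a sketch using os.fstat as the probe:

```python
import errno
import os

# Open a real descriptor, remember its value, close it: that value is
# now known to have just become invalid, unlike a hard-coded fd like 10.
fd = os.open(os.devnull, os.O_RDONLY)
os.close(fd)

try:
    os.fstat(fd)
    invalid = False
except OSError as exc:
    invalid = (exc.errno == errno.EBADF)
print("fd %d invalid as expected: %s" % (fd, invalid))
```

There is still a small window in which another thread could reuse the number, but for a single-threaded test this is far more robust than assuming fd 10 is free.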
(I'm not saying this is why the tests are failing, but at least it would make them more robust) From stephane at voyageonline.co.uk Thu Jan 15 13:58:27 2009 From: stephane at voyageonline.co.uk (=?utf-8?q?St=C3=A9phane_Konstantaropoulos?=) Date: Thu, 15 Jan 2009 12:58:27 +0000 Subject: [Python-Dev] imap bodystructure parser - email.message.Message extension Message-ID: <200901151258.27413.stephane@voyageonline.co.uk> Hello, I wrote an extension to the imaplib library that implements a "BODYSTRUCTURE" parser. For this I wrote an extension to email.message.Message that allows a message structure to be loaded from imap; the message can be used like a normal email.message.Message, but the actual payload is only loaded on the fly. This makes it much quicker for imap applications. The code is here if anyone fancies a try: http://voyageonline.co.uk/blog/article/python-imaplibx I thought it might be useful as imaplib is very low level. Regards, -- Stephane Konstantaropoulos -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 197 bytes Desc: This is a digitally signed message part.
URL: From dickinsm at gmail.com Thu Jan 15 16:43:44 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Thu, 15 Jan 2009 15:43:44 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> Message-ID: <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> On Thu, Jan 15, 2009 at 9:13 AM, Kristj?n Valur J?nsson wrote: > However, these: > > ====================================================================== > ERROR: test_ftruncate (test.test_os.TestInvalidFD) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/test/test_os.py", line 570, in test_ftruncate > self.assertRaises(OSError, os.ftruncate, 10, 0) > File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/unittest.py", line 345, in failUnlessRaises > callableObj(*args, **kwargs) > IOError: [Errno 9] Bad file descriptor At the risk of stating the obvious, shouldn't you be checking for IOError rather than OSError in assertRaises? Mark From guido at python.org Thu Jan 15 16:53:26 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 15 Jan 2009 07:53:26 -0800 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: <20090115090056.GI1060@nexus.in-nomine.org> References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> Message-ID: On Thu, Jan 15, 2009 at 1:00 AM, Jeroen Ruigrok van der Werven wrote: > -On [20090115 01:11], Guido van Rossum (guido at python.org) wrote: >>I'm with Martin. 
In these days of distributed version control systems, >>I would think that the effort for the Haiku folks to maintain a branch >>of Python in their own version control would be minimal. It is likely >>that for each new Python version that comes out, initially it is >>broken on Haiku, and then they have to go in and fix it. > > Last time I looked at Haiku and dabbled with it there were some people > actively working on POSIX compliance. My only guess right now is that this > work is largely complete. In effect that would mean that Python would work > out of the box, more or less. So the cost of adding and maintaining it in > the main repository should not be a big overhaul or anything. > > Just as a FYI. :) Did you look at the patch they submitted? http://bugs.python.org/issue4933 -- --Guido van Rossum (home page: http://www.python.org/~guido/) From kristjan at ccpgames.com Thu Jan 15 17:19:14 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Thu, 15 Jan 2009 16:19:14 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> Well, all the other functions raise OSError when the file descriptor is invalid. IOError usually means that the IO itself failed. I wonder if it is platform specific? Does it raise IOError on all platforms? I can also change the test to test for IOError or OSError. K -----Original Message----- From: Mark Dickinson [mailto:dickinsm at gmail.com] Sent: 15. 
jan?ar 2009 15:44 To: Kristj?n Valur J?nsson Cc: Jean-Paul Calderone; python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py On Thu, Jan 15, 2009 at 9:13 AM, Kristj?n Valur J?nsson wrote: > However, these: > > ====================================================================== > ERROR: test_ftruncate (test.test_os.TestInvalidFD) > ---------------------------------------------------------------------- > Traceback (most recent call last): > File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/test/test_os.py", line 570, in test_ftruncate > self.assertRaises(OSError, os.ftruncate, 10, 0) > File "/home/buildslave/python-trunk/trunk.norwitz-x86/build/Lib/unittest.py", line 345, in failUnlessRaises > callableObj(*args, **kwargs) > IOError: [Errno 9] Bad file descriptor At the risk of stating the obvious, shouldn't you be checking for IOError rather than OSError in assertRaises? Mark From dickinsm at gmail.com Thu Jan 15 18:00:59 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Thu, 15 Jan 2009 17:00:59 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> Message-ID: <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> On Thu, Jan 15, 2009 at 4:19 PM, Kristj?n Valur J?nsson wrote: > Well, all the other functions raise OSError when the file descriptor is invalid. IOError usually means that the IO itself failed. > I wonder if it is platform specific? Does it raise IOError on all platforms? 
It certainly looks like it: here are lines 6632--6638 of posixmodule.c, in posix_ftruncate: Py_BEGIN_ALLOW_THREADS res = ftruncate(fd, length); Py_END_ALLOW_THREADS if (res < 0) { PyErr_SetFromErrno(PyExc_IOError); return NULL; } Mark From kristjan at ccpgames.com Thu Jan 15 18:06:54 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Thu, 15 Jan 2009 17:06:54 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882DD3@exchis.ccp.ad.local> Interesting. Looks like a bug, really. It's the only function that sets IOError. All others use posix_error which raises an OSError. Maybe tests are a good thing, then? Kristj?n -----Original Message----- From: Mark Dickinson [mailto:dickinsm at gmail.com] Sent: 15. jan?ar 2009 17:01 To: Kristj?n Valur J?nsson Cc: Jean-Paul Calderone; python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py On Thu, Jan 15, 2009 at 4:19 PM, Kristj?n Valur J?nsson wrote: > Well, all the other functions raise OSError when the file descriptor is invalid. IOError usually means that the IO itself failed. > I wonder if it is platform specific? Does it raise IOError on all platforms? 
It certainly looks like it: here are lines 6632--6638 of posixmodule.c, in posix_ftruncate: Py_BEGIN_ALLOW_THREADS res = ftruncate(fd, length); Py_END_ALLOW_THREADS if (res < 0) { PyErr_SetFromErrno(PyExc_IOError); return NULL; } Mark From orsenthil at gmail.com Thu Jan 15 18:20:58 2009 From: orsenthil at gmail.com (Senthil Kumaran) Date: Thu, 15 Jan 2009 22:50:58 +0530 Subject: [Python-Dev] Support for the Haiku OS Message-ID: <20090115172058.GC6777@goofy> Jeroen Ruigrok van der Werven wrote: > actively working on POSIX compliance. My only guess right now is that this > work is largely complete. In effect that would mean that Python would work > out of the box, more or less. So the cost of adding and maintaining it in This is very interesting to know. If Py 2.x and Py 3K get supported and perhaps come built in with the next Haiku release, it would definitely be a win-win situation for both Python and Haiku. -- Senthil From asmodai at in-nomine.org Thu Jan 15 18:23:36 2009 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Thu, 15 Jan 2009 18:23:36 +0100 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> Message-ID: <20090115172335.GK1060@nexus.in-nomine.org> -On [20090115 16:53], Guido van Rossum (guido at python.org) wrote: >Did you look at the patch they submitted? http://bugs.python.org/issue4933 I did now (python-2.5.4-haiku-2.diff). I am not sure what you are implying though, Guido. It doesn't look like a huge change and most of it is close to 'one time only'. -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Earth to earth, ashes to ashes, dust to dust...
From guido at python.org Thu Jan 15 18:57:51 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 15 Jan 2009 09:57:51 -0800 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: <20090115172335.GK1060@nexus.in-nomine.org> References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> <20090115172335.GK1060@nexus.in-nomine.org> Message-ID: On Thu, Jan 15, 2009 at 9:23 AM, Jeroen Ruigrok van der Werven wrote: > -On [20090115 16:53], Guido van Rossum (guido at python.org) wrote: >>Did you look at the patch they submitted? http://bugs.python.org/issue4933 > > I did now (python-2.5.4-haiku-2.diff). I am not sure what you are implying > though, Guido. It doesn't look like a huge change and most of it is close to > 'one time only'. That's the naive idea. That's what I used to think too, but it just isn't so. We have quite a bit of experience with these kinds of "one time only" platform-specific changes, and they are never once-only -- they invariably get out of date with each new version released, since nobody except the users of that platform ever tests new versions on that platform. (Also the platform typically evolves faster than Python.) The effort to get a new version QA'ed on such a minority platform *before* it is released never gets made, so the expected and promised compatibility is disappointing for all -- and then the core developers get blamed. It can work only if there are core developers who care enough about such a minority platform to try new versions (both of the trunk and of maintenance branches!) on their platform, *and* submit necessary fixes right away. I don't see such a commitment in this case, but if a believable one comes up I'm sure Martin would happily revert his position. Note that a buildbot would have to be part of the deal.
However a buildbot is not enough -- if nobody fixes the build for that platform it will just be ignored by release managers. And only the users of the platform can fix the issues. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Thu Jan 15 19:06:58 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 15 Jan 2009 19:06:58 +0100 Subject: [Python-Dev] socket.create_connection slow In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882B4B@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882928@exchis.ccp.ad.local> <200901141346.18407.victor.stinner@haypocalc.com> <930F189C8A437347B80DF2C156F7EC7F04D78829F8@exchis.ccp.ad.local> <496E4B88.4060309@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B0C@exchis.ccp.ad.local> <930F189C8A437347B80DF2C156F7EC7F04D7882B0D@exchis.ccp.ad.local> <496E7E08.3070104@v.loewis.de> <930F189C8A437347B80DF2C156F7EC7F04D7882B4B@exchis.ccp.ad.local> Message-ID: <496F7B42.6010109@v.loewis.de> > Right, there is no way to try to simultaneously connect using ipv4 > and ipv6, apparently. Ah, I see what you meant. No, this cannot work - what if you get positive ACKs on both protocols? > Also, the problem with setting the registry TcpConnectMaxRetries > registry entry is that it also affects retries wen no ACK is > received. This is probably something one doesn't want to mess with. Indeed. They were wrong to overload this setting. > Okay, so do we want to bug MS about this? If you think it helps, go ahead! This has been in the system for so long that they are unlikely to change it. Yet, as IPv6 deployment progresses, this case will occur more and more often (until eventually all services are dual-stacked - in which case the only effect will be that you wait 2 seconds if the service is really not available; 1s delay per protocol). If the default retry counter can't be changed, I'd suggest that they provide a socket option. 
Regards, Martin From barry at python.org Thu Jan 15 19:29:38 2009 From: barry at python.org (Barry Warsaw) Date: Thu, 15 Jan 2009 13:29:38 -0500 Subject: [Python-Dev] imap bodystructure parser - email.message.Message extension In-Reply-To: <200901151258.27413.stephane@voyageonline.co.uk> References: <200901151258.27413.stephane@voyageonline.co.uk> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 15, 2009, at 7:58 AM, St?phane Konstantaropoulos wrote: > I wrote an extension to the imaplib library that implements a > "BODYSTRUCTURE" > parser. > > For this I wrote an extension to email.message.Message that allows a > message > structure to be loaded from imap, the message can be used like a > normal > email.message.Message but the actual payload is only loaded on the > fly. This > makes it much quicker for imap applications. > > The code is here if anyone fancies a try: > > http://voyageonline.co.uk/blog/article/python-imaplibx > > I though it might be useful as imaplib is very low level. This sounds very interesting. One of the things I'd hoped to do for the next version of the email package is actually design an API for remotely loading and saving the payloads. Yours is one of the use cases I've thought of, but also being able to store the payload in a disk cache so as not to chew up memory for say, lots of big image attachments. Unfortunately I don't have time right now to work on this, but I'll keep your message. You should think about joining the email-sig and bringing this up there, if you want to discuss such a generic API. 
Barry From martin at v.loewis.de Thu Jan 15 20:36:54 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 15 Jan 2009 20:36:54 +0100 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> <20090115172335.GK1060@nexus.in-nomine.org> Message-ID: <496F9056.8060503@v.loewis.de> > I don't see such a commitment in this case, but if a > believable one comes up I'm sure Martin would happily revert his > position. Indeed. I have myself added support for AtheOS, even though I had never used the system. The AtheOS maintainer ran away, the code rotted, and eventually got ripped out. The same happened for a lot of other systems whose code was recently removed. Unless a core committer actively works on the port, I see little chance that it remains in a usable shape over time. In case you wonder why it might break: configure gets rewritten to have new features (like --enable-shared), and then people contributing such a change can only contribute it for the systems they can test it on. New variables get introduced, which don't get set for Haiku, and the Makefile breaks - perhaps in a trivial way, but nonetheless useless for the end user who wants to build Python on Haiku. In addition, I expect that a *true* Haiku port would have many additional modules that provide access to API specific to the system (such as support for GUI applications). The true Haiku port will have to provide all these things, but they won't be in the core.
So people using Python on Haiku will have to get it from elsewhere, anyway. So I think adding the patch to Python has little advantage to Haiku users, and (if the patch is small) maintaining it outside of python.org should be little effort to the authors of the patch. (e.g. compared to writing all these other modules) Regards, Martin From dickinsm at gmail.com Thu Jan 15 21:39:52 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Thu, 15 Jan 2009 20:39:52 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882DD3@exchis.ccp.ad.local> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882DD3@exchis.ccp.ad.local> Message-ID: <5c6f2a5d0901151239j7820478cv9a1c3166595f7627@mail.gmail.com> On Thu, Jan 15, 2009 at 5:06 PM, Kristj?n Valur J?nsson wrote: > Interesting. > Looks like a bug, really. It's the only function that sets IOError. All others use posix_error which raises an OSError. Maybe. But changing it risks breaking existing code, so would certainly require (at least) a tracker discussion. In the meantime, please could you either revert or fix the r68547 checkin? It looks as though *all* of the (non-Windows) trunk buildbots are failing on test_os, and if any of the release managers notices we'll all be in trouble. :-) Mark From lkcl at lkcl.net Thu Jan 15 22:48:49 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 15 Jan 2009 21:48:49 +0000 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. 
Message-ID: no, the above subject-line is not a joke: i really _have_ successfully built python2.5.2 by installing wine on linux, then msys under wine, and then mingw32 compiler - no, not the linux mingw32-cross-compiler, the _native_ mingw32 compiler that runs under msys, and then hacking things into submission until it worked. issue-report: http://bugs.python.org/issue4954 source code: http://github.com/lkcl/pythonwine/tree/python_2.5.2_wine related issue-report: http://bugs.python.org/issue3871 related issue-report: http://bugs.python.org/issue1597850 i'm going to _try_ to merge in #3871 but it's... the prospect of sitting waiting for configure to take THREE hours to complete, due to /bin/sh.exe instances taking TWO SECONDS _each_ to start up does not really fill me with deep joy. consequently i did a major hatchet-job on configure.in with repeated applications of "if test $win32build = no; then" ... cue several hundred lines of configure.in tests blatantly ignored "fi # $win32build=no! " and thus cut the configure time down from three hours to a mere 15 minutes. the only reason why this was possible at all was because PC/config.h already exists and has been pre-set-up with lots of lovely #defines. also, there is another significant difference between #3871 and #4954 - i chose to build in to libpython2.5.dll exactly as many modules as are in the proprietary win32 build. this turned out to be a good practical decision, due to /bin/sh.exe messing around and stopping python.exe from running! (under cmd.exe it's fine. i have to do a bit more investigation: my guess is that the msys "remounter" is getting in the way, somehow. compiling python to have a prefix of /python25 results in files being installed in /python25 which maps to c:/msys/python25/ but.... actually that doesn't get communicated correctly to the compiled python.exe.... it's all a bit odd - it still feels like things are being cross-compiled... but they're not... 
it's just that setup.py has paths that don't _quite_ match up with the msys environment... needs work, there. the regression testing is _great_ fun! some of the failures are really quite spectacular, but surprisingly there are fewer than anticipated. file "sharing violation" isn't a big surprise (under wine); the ctypes structure failures are going to be a bitch to hunt down; the test_str %f failure _was_ a big surprise; the builtin file \r\n <-> \n thing wasn't in the _least_ bit of a surprise :) overall, this has been... interesting. and the key thing is that thanks to #3871 and #4954 and #1597850, python will soon happily compile for win32 _without_ the dependence on _any_ proprietary software or operating systems. that's a pretty significant milestone. l. p.s. if anyone would like to try out this build, on a windows box, to see if it fares any better on the regression tests please say so and i will make the binaries available. From kristjan at ccpgames.com Thu Jan 15 23:40:40 2009 From: kristjan at ccpgames.com (=?utf-8?B?S3Jpc3Rqw6FuIFZhbHVyIErDs25zc29u?=) Date: Thu, 15 Jan 2009 22:40:40 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <5c6f2a5d0901151239j7820478cv9a1c3166595f7627@mail.gmail.com> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882DD3@exchis.ccp.ad.local> <5c6f2a5d0901151239j7820478cv9a1c3166595f7627@mail.gmail.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882E63@exchis.ccp.ad.local> Right. I've fixed the remainder, things should quiet down now.
K

-----Original Message-----
From: Mark Dickinson [mailto:dickinsm at gmail.com] Sent: 15. janúar 2009 20:40 To: Kristján Valur Jónsson Cc: Jean-Paul Calderone; python-dev at python.org Subject: Re: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py

On Thu, Jan 15, 2009 at 5:06 PM, Kristján Valur Jónsson wrote: > Interesting. > Looks like a bug, really. It's the only function that sets IOError. All others use posix_error which raises an OSError.

Maybe. But changing it risks breaking existing code, so it would certainly require (at least) a tracker discussion. In the meantime, please could you either revert or fix the r68547 checkin? It looks as though *all* of the (non-Windows) trunk buildbots are failing on test_os, and if any of the release managers notices we'll all be in trouble. :-) Mark

From jan.malachowski at gmail.com Fri Jan 16 00:05:50 2009 From: jan.malachowski at gmail.com (Jan Malakhovski) Date: Fri, 16 Jan 2009 02:05:50 +0300 Subject: [Python-Dev] Lib/email/header.py ecre regular expression Message-ID: <20090116020550.bb62d597.jan.malachowski@gmail.com>

Hello. The welcome message for this mailing list said it's good to say a few words about myself. So, my name is Jan Malakhovski, aka OXIj; I live in St. Petersburg, Russia, and I'm a 4th-year student at ITMO University. I have a dedicated mail server at home, and it holds about 1G of mail. Most of it is in non-UTF-8 codepages, so today I wrote a little script that should recode all messages to UTF-8. But I found that email.header.decode_header parses some headers wrong. For example, the header

Content-Type: application/x-msword; name="2008 =?windows-1251?B?wu7v8O7x+w==?= 2 =?windows-1251?B?4+7kIDgwONUwMC5kb2M=?="

is parsed as

[('application/x-msword; name="2008', None), ('\xc2\xee\xef\xf0\xee\xf1\xfb', 'windows-1251'), ('2 =?windows-1251?B?4+7kIDgwONUwMC5kb2M=?="', None)]

which is obviously wrong.
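For reference, the misparse is easy to exercise directly with a short snippet (not part of the original report; the exact tuple layout varies between Python versions, and on interpreters where the trailing whitespace lookahead has been dropped from ecre, both encoded words decode):

```python
from email.header import decode_header

# The Content-Type value from the problematic message above.
header = ('application/x-msword; name="2008 '
          '=?windows-1251?B?wu7v8O7x+w==?= 2 '
          '=?windows-1251?B?4+7kIDgwONUwMC5kb2M=?="')

# decode_header returns a list of (decoded value, charset-or-None) pairs.
parts = decode_header(header)
for value, charset in parts:
    print(charset, repr(value))
```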
Now I'm playing with the email/header.py file from the python 2.5 debian package (it's the same in the 2.6.1 version, except that all <> are changed to !=). If it's patched with

==================BEGIN CUT==================
--- oldheader.py 2009-01-16 01:47:32.553130030 +0300
+++ header.py 2009-01-16 01:47:16.783119846 +0300
@@ -39,7 +39,6 @@
   \?                    # literal ?
   (?P<encoded>.*?)      # non-greedy up to the next ?= is the encoded string
   \?=                   # literal ?=
-  (?=[ \t]|$)           # whitespace or the end of the string
   ''', re.VERBOSE | re.IGNORECASE | re.MULTILINE)

 # Field name regexp, including trailing colon, but not separating whitespace,
==================END CUT==================

it works fine. So I wonder if this

  (?=[ \t]|$)           # whitespace or the end of the string

is really needed; after all, if there is only whitespace after an encoded word, it's just appended to the list by parts = ecre.split(line)

-- Jan

From aahz at pythoncraft.com Fri Jan 16 00:09:55 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 15 Jan 2009 15:09:55 -0800 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. In-Reply-To: References: Message-ID: <20090115230954.GA2734@panix.com>

On Thu, Jan 15, 2009, Luke Kenneth Casson Leighton wrote: > > p.s. if anyone would like to try out this build, on a windows box, to > see if it fares any better on the regression tests please say so and i > will make the binaries available.

Don't bother on my account, but please publicize the URL if you end up doing it; I may try it on my Mac with Windows under Fusion... -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W.
Kernighan From lkcl at lkcl.net Fri Jan 16 00:11:19 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 15 Jan 2009 23:11:19 +0000 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. In-Reply-To: References: Message-ID: > practical decision, due to /bin/sh.exe messing around and stopping > python.exe from running! (under cmd.exe it's fine. i have to do a > bit more investigation: http://bugs.python.org/issue4956 found it. From aahz at pythoncraft.com Fri Jan 16 00:11:27 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 15 Jan 2009 15:11:27 -0800 Subject: [Python-Dev] Lib/email/header.py ecre regular expression In-Reply-To: <20090116020550.bb62d597.jan.malachowski@gmail.com> References: <20090116020550.bb62d597.jan.malachowski@gmail.com> Message-ID: <20090115231126.GB2734@panix.com> On Fri, Jan 16, 2009, Jan Malakhovski wrote: > > I have dedicated mail server at home and it holds about 1G of mail. > Most of mail is in non UTF-8 codepage, so today I wrote little > script that should recode all letters to UTF. But I found that > email.header.decode_header parses some headers wrong. Please post this to http://bugs.python.org/ -- regardless of whether this is a real bug, using the tracker ensures that we won't lose it. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian W. Kernighan From luke.leighton at googlemail.com Fri Jan 16 00:08:20 2009 From: luke.leighton at googlemail.com (Luke Kenneth Casson Leighton) Date: Thu, 15 Jan 2009 23:08:20 +0000 Subject: [Python-Dev] [patch] moving file load to AFTER Py_Initialize performed Message-ID: http://bugs.python.org/issue4956 uhn... 
weird bug, totally left-field scenario (using python under msys under wine under linux) but this rather strange scenario has a situation where loading the filename from the command line cannot be done until _after_ Py_Initialize is called. prior to Py_Initialize() being called, file handling including stderr is _so_ screwed that it's impossible to even use fprintf(stderr, "help help there's something wrong!\n") to diagnose the problem. i wanted to ask people: does this patch (in #4956) look... reasonable? there's no global variables or subtle interaction that "steals" argv or argc (as part of Py_Initialize()) is there, that would, once it's finished, disrupt the loading of the file? if so, it would be necessary to split up the code that "gets" the filename (filename = argv[_PyOS_optind]) and leave that _before_ Py_Initialize(), from the code that _loads_ the file (into fp). following the deployment of this fix, the build of python under msys+wine+linux proceeds _from_ msys, rather than having to be irritatingly interrupted and continued from "wineconsole cmd". l.

From prologic at shortcircuit.net.au Fri Jan 16 06:46:59 2009 From: prologic at shortcircuit.net.au (James Mills) Date: Fri, 16 Jan 2009 15:46:59 +1000 Subject: [Python-Dev] multiprocessing vs. distributed processing Message-ID:

I've noticed over the past few weeks lots of questions asked about multi-processing (including myself). For those of you new to multi-processing, perhaps this thread may help you. Some things I want to start off with to point out are:

"multiprocessing will not always help you get things done faster."

"be aware of I/O bound applications vs.
CPU bound"

"multiple CPUs (cores) can compute multiple concurrent expressions - not read 2 files concurrently"

"in some cases, you may be after distributed processing rather than multi or parallel processing"

cheers James -- -- "Problems are solved by method"

From matthieu.brucher at gmail.com Fri Jan 16 09:30:36 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Fri, 16 Jan 2009 09:30:36 +0100 Subject: [Python-Dev] multiprocessing vs. distributed processing In-Reply-To: References: Message-ID:

(Sorry for the double send...)

2009/1/16 James Mills : > I've noticed over the past few weeks lots of questions > asked about multi-processing (including myself).

Funny, I was going to blog about this, but not just for Python.

> For those of you new to multi-processing, perhaps this > thread may help you. Some things I want to start off > with to point out are: > > "multiprocessing will not always help you get things done faster."

Of course. There are some programs that are I/O or memory bandwidth bound. So if one of those bottlenecks is common to the cores you use, you can't benefit from their use.

> "be aware of I/O bound applications vs. CPU bound"

Exactly. We read a lot about Folding at Home and SETI at Home; they can be distributed, as it is more or less "take a chunk, process it somewhere, and when you're finished, tell me if there's something interesting in it". Not a lot of communication between the nodes. Then there are other applications that process a lot of data: they must read data from memory, make one computation, read other data, compute a little bit (finite difference schemes), and here we are memory bandwidth bound, not CPU bound.

> "multiple CPUs (cores) can compute multiple concurrent expressions - > not read 2 files concurrently"

Let's say that this is true for typical computers. Clusters can make concurrent reads, as long as the correct architecture is behind them. Of course, if you only have one hard disk, you are limited.
> "in some cases, you may be after distributed processing rather than > multi or parallel processing"

Of course. Clusters can be expensive, and their interconnections even more so. So if your application is made of independent blocks that can run on small nodes, without much I/O, you can try distributed computing. If you need big nodes with high-speed interconnections, you will have to use parallel processing.

These are just my thoughts on the subject, but I think I'm not far from the truth. Of course, if I'm proved wrong, I'll be glad to hear why.

Matthieu -- Information System Engineer, Ph.D. Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher

From kristjan at ccpgames.com Fri Jan 16 10:14:54 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Fri, 16 Jan 2009 09:14:54 +0000 Subject: [Python-Dev] issue 4927: Inconsistent unicode repr for fileobject Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7882E93@exchis.ccp.ad.local>

I would appreciate it if some of you could chip in your opinion on this issue. http://bugs.python.org/issue4927

Cheers, Kristján

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From dickinsm at gmail.com Fri Jan 16 10:20:57 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Fri, 16 Jan 2009 09:20:57 +0000 Subject: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882E63@exchis.ccp.ad.local> References: <20090112180928.2E3471E404C@bag.python.org> <20090114201930.9754.151532828.divmod.quotient.2566@henry.divmod.com> <930F189C8A437347B80DF2C156F7EC7F04D7882B50@exchis.ccp.ad.local> <5c6f2a5d0901150743o15d7df65v85b50797404e803a@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882D97@exchis.ccp.ad.local> <5c6f2a5d0901150900h6e5c857boea4eccae0f74693@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882DD3@exchis.ccp.ad.local> <5c6f2a5d0901151239j7820478cv9a1c3166595f7627@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7882E63@exchis.ccp.ad.local> Message-ID: <5c6f2a5d0901160120g796f4ba1v3032dcfc7133dddf@mail.gmail.com> On Thu, Jan 15, 2009 at 10:40 PM, Kristj?n Valur J?nsson wrote: > Right. I've fixed the remainder, things should quiet down now. > K Thank you! From ncoghlan at gmail.com Fri Jan 16 11:01:51 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 16 Jan 2009 20:01:51 +1000 Subject: [Python-Dev] multiprocessing vs. distributed processing In-Reply-To: References: Message-ID: <49705B0F.70500@gmail.com> James Mills wrote: > I've noticed over the past few weeks lots of questions > asked about multi-processing (including myself). While these are fair points, did you perhaps mean to send this to python-list rather than python-dev? Cheers, Nick. > > For those of you new to multi-processing, perhaps this > thread may help you. Some things I want to start off > with to point out are: > > "multiprocessing will not always help you get things done faster." > > "be aware of I/O bound applications vs. 
CPU bound" > > "multiple CPUs (cores) can compute multiple concurrent expressions - > not read 2 files concurrently" > > "in some cases, you may be after distributed processing rather than > multi or parallel processing" > > cheers > James > > -- > -- "Problems are solved by method" > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ncoghlan%40gmail.com > -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From nick at craig-wood.com Fri Jan 16 13:16:41 2009 From: nick at craig-wood.com (Nick Craig-Wood) Date: Fri, 16 Jan 2009 12:16:41 +0000 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters Message-ID: <20090116121640.GA8033@craig-wood.com> I've noticed with latest python 3.1 checkout (68631) if I have this object hierarchy with a default __init__ in the superclass to be used by the subclasses which don't necessarily need an __init__ it blows up with a TypeError. 
class Field(object):
    def __init__(self, data):
        """Default init for the subclasses"""
        print("init class=%r, self=%r" % (self.__class__.__name__, self))
        super(Field, self).__init__(data)
        self.data = self.orig = data

class IntegerField(Field):
    def __init__(self, data):
        """Overridden init"""
        super(IntegerField, self).__init__(data)
        self.data = int(data)

class StringField(Field):
    pass

f1 = StringField('abc')
f2 = IntegerField('10')
print("f1=%r" % f1.data)
print("f2=%r" % f2.data)
print(type(f1))
print(type(f2))

It blows up with

init class='StringField', self=<__main__.StringField object at 0xb7d47b4c>
Traceback (most recent call last):
  File "subclass-super-problem-py3k.py", line 17, in <module>
    f1 = StringField('abc')
  File "subclass-super-problem-py3k.py", line 5, in __init__
    super(Field, self).__init__(data)
TypeError: object.__init__() takes no parameters

The exact same code runs under py 2.5 just fine. I can't think of anything to write in Field.__init__ to tell whether super is about to run __init__ on object. The problem can be fixed (inelegantly IMHO) like this

class BaseField(object):
    def __init__(self, data):
        """Default init for the subclasses"""
        self.data = self.orig = data

class Field(BaseField):
    def __init__(self, data):
        """Another Default init for the subclasses"""
        super(Field, self).__init__(data)

class IntegerField(Field):
    def __init__(self, data):
        """Overridden init"""
        super(IntegerField, self).__init__(data)
        self.data = int(data)

class StringField(Field):
    pass

f1 = StringField('abc')
f2 = IntegerField('10')
print("f1=%r" % f1.data)
print("f2=%r" % f2.data)
print(type(f1))
print(type(f2))

Is this a bug or a feature? Is there a better work-around?
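A third variant (a sketch reflecting the advice in the replies that follow, not part of Nick's message) runs under both 2.x and 3.x without the intermediate class: since Field's only base is object, simply don't forward data upward at all:

```python
class Field(object):
    def __init__(self, data):
        """Default init for the subclasses"""
        # Field's only base is object, whose __init__ accepts no
        # arguments in py3k, so don't forward `data` to it.
        self.data = self.orig = data

class IntegerField(Field):
    def __init__(self, data):
        super(IntegerField, self).__init__(data)
        self.data = int(data)

class StringField(Field):
    pass

f1 = StringField('abc')
f2 = IntegerField('10')
print("f1=%r f2=%r" % (f1.data, f2.data))
```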
-- Nick Craig-Wood -- http://www.craig-wood.com/nick

From tjreedy at udel.edu Fri Jan 16 17:12:49 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 16 Jan 2009 11:12:49 -0500 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: <20090116121640.GA8033@craig-wood.com> References: <20090116121640.GA8033@craig-wood.com> Message-ID:

Nick Craig-Wood wrote: > I've noticed with latest python 3.1 checkout (68631) if I have this > object hierarchy with a default __init__ in the superclass to be used > by the subclasses which don't necessarily need an __init__ it blows up > with a TypeError. > > class Field(object):

object is the default baseclass, hence not needed

> def __init__(self, data): > """Default init for the subclasses""" > print("init class=%r, self=%r" % (self.__class__.__name__, self)) > super(Field, self).__init__(data)

This line is the problem: remove it and I believe all is fine. Since object()s are immutable, its init cannot do anything as far as I know. Deleting this is effectively what you did below.

Actually, I am puzzled why object even has __init__. Perhaps to avoid a hasattr(ob, '__init__') check. The doc implies that it is possible for a class to not have one:

" object.__init__(self[, ...]) Called when the instance is created. The arguments are those passed to the class constructor expression. If a base class has an __init__() method, the derived class's __init__() method, if any, must explicitly call it to ensure proper initialization of the base class part of the instance; for example: BaseClass.__init__(self, [args...]). As a special constraint on constructors, no value may be returned; doing so will cause a TypeError to be raised at runtime. "

But in 3.0, *all* classes will inherit object.__init__. From super() doc... "There are two typical use cases for 'super'. In a class hierarchy with single inheritance, 'super'
can be used to refer to parent classes without naming them explicitly, thus making the code more maintainable." I wonder about this claim. This use of super() does not eliminate the need to pass legal args, so you have to know what actually is called, so why not name it? > self.data = self.orig = data > > class IntegerField(Field): > def __init__(self, data): > """Overridden init""" ?? This over-rides and is not over-ridden. > super(IntegerField, self).__init__(data) > self.data = int(data) > > class StringField(Field): > pass > > f1 = StringField('abc') > f2 = IntegerField('10') > print("f1=%r" % f1.data) > print("f2=%r" % f2.data) > print(type(f1)) > print(type(f2)) > > It blows up with > > init class='StringField', self=<__main__.StringField object at 0xb7d47b4c> > Traceback (most recent call last): > File "subclass-super-problem-py3k.py", line 17, in > f1 = StringField('abc') > File "subclass-super-problem-py3k.py", line 5, in __init__ > super(Field, self).__init__(data) > TypeError: object.__init__() takes no parameters > > The exact same code runs under py 2.5 just fine. Perhaps 2.5's object.__init__ just swallowed all args, thus hiding bogus calls. > I can't think of anything to write in Field.__init__ to tell whether > super is about to run __init__ on object. I do not understand. You know it is going to run the .__init__ of its one and only base class, which here is object. > The problem can be fixed (inelegantly IMHO) like this > > class BaseField(object): > def __init__(self, data): > """Default init for the subclasses""" > self.data = self.orig = data > > class Field(BaseField): > def __init__(self, data): > """Another Default init for the subclasses""" > super(Field, self).__init__(data) These two inits together are the original without bad call to object.__init__. No need to do this. 
> class IntegerField(Field): > def __init__(self, data): > """Overridden init""" > super(IntegerField, self).__init__(data) > self.data = int(data) > > class StringField(Field): > pass > > f1 = StringField('abc') > f2 = IntegerField('10') > print("f1=%r" % f1.data) > print("f2=%r" % f2.data) > print(type(f1)) > print(type(f2)) > > Is this a bug or a feature? Is there a better work-around? Eliminate bad call. Terry Jan Reedy From nick at craig-wood.com Fri Jan 16 17:53:52 2009 From: nick at craig-wood.com (Nick Craig-Wood) Date: Fri, 16 Jan 2009 16:53:52 +0000 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: References: <20090116121640.GA8033@craig-wood.com> Message-ID: <20090116165352.E6DD814CA88@irishsea.home.craig-wood.com> Terry Reedy wrote: > Nick Craig-Wood wrote: > > I've noticed with latest python 3.1 checkout (68631) if I have this > > object hierarchy with a default __init__ in the superclass to be used > > by the subclasses which don't necessarily need an __init__ it blows up > > with a TypeError. > > > > class Field(object): > > object is default baseclass, hence not needed Agreed, but I wanted the code to run with py < 3 also! > > def __init__(self, data): > > """Default init for the subclasses""" > > print("init class=%r, self=%r" % (self.__class__.__name__, self)) > > super(Field, self).__init__(data) > > This line is the problem: remove it and I believe all is fine. > Since object()s are immutable, its init cannot do anything as far as I > know. Deleting this is effectively what you did below. Yes you are absolutely right - that super is never needed. I don't know what I was thinking of! Without that the problem disappears. [snip] > Perhaps 2.5's object.__init__ just swallowed all args, thus hiding bogus > calls. Yes it did which is the fundamental difference in behaviour between py2 and py3 as far as I can see. [snip] > Eliminate bad call. Check! (Bashes head against wall!) 
Thanks Nick -- Nick Craig-Wood -- http://www.craig-wood.com/nick

From alexandre.tp at gmail.com Fri Jan 16 18:04:12 2009 From: alexandre.tp at gmail.com (Alexandre Passos) Date: Fri, 16 Jan 2009 15:04:12 -0200 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: References: <20090116121640.GA8033@craig-wood.com> Message-ID:

On Fri, Jan 16, 2009 at 2:12 PM, Terry Reedy wrote: > > I do not understand. You know it is going to run the .__init__ of its one > and only base class, which here is object.

Because this class might be used as base of another class. Take this trivial example code (in py2.6):

class A(object):
    def __init__(self, a):
        #super(A, self).__init__(a)
        self.a = a
        print "A"

class B(object):
    def __init__(self, a):
        #super(B, self).__init__(a)
        self.b = a
        print "B"

class C(A, B):
    def __init__(self, a):
        super(C, self).__init__(a)
        self.c = a
        print "C", dir(self)

C(1)

Running the last line shows that A's constructor got called, but not B's constructor. The only way to make sure all __init__s are called in this example is by doing

class A(object):
    def __init__(self, a):
        super(A, self).__init__(a)
        self.a = a
        print "A"

class B(object):
    def __init__(self, a):
        #super(B, self).__init__(a)
        self.b = a
        print "B"

class C(A, B):
    def __init__(self, a):
        super(C, self).__init__(a)
        self.c = a
        print "C", dir(self)

C(1)

which is really ugly (as in, why is B's call to super.__init__ commented but not A's, if A and B are otherwise identical?) I'm not sure, but I think the proper behavior for object.__init__ should be ignoring all args.
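A symmetric alternative (a generic sketch, not something proposed in this message) is the cooperative pattern where each __init__ consumes its own arguments and forwards the rest as keywords, so the chain reaches object.__init__ with an empty argument list and every class in the diamond gets initialized:

```python
class A(object):
    def __init__(self, a, **kwargs):
        super(A, self).__init__(**kwargs)  # forward only what A didn't consume
        self.a = a

class B(object):
    def __init__(self, b, **kwargs):
        super(B, self).__init__(**kwargs)
        self.b = b

class C(A, B):
    def __init__(self, a, b):
        # MRO is C -> A -> B -> object; each level peels off its own kwarg.
        super(C, self).__init__(a=a, b=b)

c = C(1, 2)
print(c.a, c.b)
```

Here neither A nor B needs to know the other exists, and neither has a commented-out super call.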
-- - Alexandre From status at bugs.python.org Fri Jan 16 18:06:43 2009 From: status at bugs.python.org (Python tracker) Date: Fri, 16 Jan 2009 18:06:43 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20090116170643.7A7A4781D8@psf.upfronthosting.co.za> ACTIVITY SUMMARY (01/09/09 - 01/16/09) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 2313 open (+40) / 14473 closed (+24) / 16786 total (+64) Open issues with patches: 796 Average duration of open issues: 699 days. Median duration of open issues: 4 days. Open Issues Breakdown open 2289 (+40) pending 24 ( +0) Issues Created Or Reopened (65) _______________________________ thread_nt.c update 01/12/09 CLOSED http://bugs.python.org/issue3582 reopened loewis patch, patch Can't Locate File with No Capital in Name 01/09/09 CLOSED http://bugs.python.org/issue4900 created markpennock inconsistent API docs for tp_iter 01/09/09 CLOSED http://bugs.python.org/issue4901 created garcia failed to build ctypes in Python2.6.1 (even with gcc) 01/09/09 CLOSED http://bugs.python.org/issue4902 created akineko binascii.crc32() - document signed vs unsigned results 01/11/09 http://bugs.python.org/issue4903 reopened gregory.p.smith Typo for PickingError in pickle.py 01/10/09 CLOSED http://bugs.python.org/issue4904 created erickt Use INVALID_FILE_ATTRIBUTES instead of magic numbers 01/10/09 http://bugs.python.org/issue4905 created eckhardt patch os.listdir fails on debug build (windows) 01/10/09 CLOSED http://bugs.python.org/issue4906 created ocean-city patch, needs review ast.literal_eval does not properly handled complex numbers 01/10/09 CLOSED http://bugs.python.org/issue4907 created aronacher patch, patch adding a get_metadata in distutils 01/10/09 http://bugs.python.org/issue4908 created tarek patch incorrect renaming in imports 01/10/09 CLOSED http://bugs.python.org/issue4909 created loewis patch 
Remove uses of nb_long slot, and rename to nb_reserved. 01/10/09 http://bugs.python.org/issue4910 created marketdickinson patch Windows installer Quad processor issues 01/10/09 CLOSED http://bugs.python.org/issue4911 created zhar2 Invalid syntax in ctypes/util.py 01/10/09 CLOSED http://bugs.python.org/issue4912 created pitrou wave.py: add writesamples() and readsamples() 01/11/09 http://bugs.python.org/issue4913 created alex_python_org patch trunc(x) erroneously documented as built-in 01/11/09 http://bugs.python.org/issue4914 created MLModel Port sysmodule.c to MS Windows CE 01/11/09 CLOSED http://bugs.python.org/issue4915 created eckhardt patch test_io is broken on UCS4 01/11/09 CLOSED http://bugs.python.org/issue4916 created benjamin.peterson patch PyBytes_Format documented but doesn't exist in C/API 01/12/09 CLOSED http://bugs.python.org/issue4917 created omorvant Windows installer created with Python 2.5 does not work with Py 01/12/09 http://bugs.python.org/issue4918 created rantanen 2.6.1 build issues on solaris with SunStudio 12 01/12/09 http://bugs.python.org/issue4919 created taverngeek Inconsistent usage of next/__next__ in ABC collections; collecti 01/12/09 http://bugs.python.org/issue4920 created jrosiek patch Object lifetime and inner recursive function 01/12/09 CLOSED http://bugs.python.org/issue4921 created ocean-city set.add and set.discard are not conformant to collections.Mutabl 01/12/09 http://bugs.python.org/issue4922 created jrosiek 26backport time.strftime documentation needs update 01/12/09 http://bugs.python.org/issue4923 created riquito gc.collect() won't always collect as expected 01/12/09 http://bugs.python.org/issue4924 created pitrou Improve error message of subprocess 01/12/09 http://bugs.python.org/issue4925 created mmokrejs putenv() accepts names containing '=', return value of unsetenv( 01/12/09 http://bugs.python.org/issue4926 created baikie patch Inconsistent unicode repr for fileobject 01/13/09 http://bugs.python.org/issue4927 
created krisvale patch, patch, needs review Problem with tempfile.NamedTemporaryFile on Solaris 10 01/13/09 http://bugs.python.org/issue4928 created rphilips smptlib.py can raise socket.error 01/13/09 CLOSED http://bugs.python.org/issue4929 created krisvale patch, patch, needs review Small optimization in type construction 01/13/09 http://bugs.python.org/issue4930 created amaury.forgeotdarc patch distutils does not show any error msg when can't build C module 01/13/09 http://bugs.python.org/issue4931 created giampaolo.rodola patch, needs review Little improvement on urlparse module, urlparse function. 01/13/09 http://bugs.python.org/issue4932 created andrix patch Patch to add preliminary support for Haiku 01/13/09 CLOSED http://bugs.python.org/issue4933 created scottmc patch tp_del and tp_version_tag undocumented 01/13/09 http://bugs.python.org/issue4934 created stutzbach Segmentation fault in bytearray tests 01/13/09 CLOSED http://bugs.python.org/issue4935 created pitrou patch bytearrayobject.o does not depend on stringlib files 01/13/09 CLOSED http://bugs.python.org/issue4936 created pitrou Mac DMG install missing version.plist required by bundlebuilder. 
01/13/09 http://bugs.python.org/issue4937 created barry-scott Pdb cannot access doctest source in postmortem 01/13/09 http://bugs.python.org/issue4938 created belopolsky Failures in test_xmlrpc 01/13/09 CLOSED http://bugs.python.org/issue4939 created pitrou decimal.Decimal.__init__ should raise an instance of ValueError 01/14/09 CLOSED http://bugs.python.org/issue4940 created rech Tell GCC Py_DECREF is unlikely to call the destructor 01/14/09 http://bugs.python.org/issue4941 created ajaksu2 patch accept() on AF_UNIX sockets broken on arm as of 2.5.3 01/14/09 CLOSED http://bugs.python.org/issue4942 created hmoffatt trace.CoverageResults.write_results can't write results file for 01/14/09 http://bugs.python.org/issue4943 created matthewlmcclure patch os.fsync() doesn't work as expect in Windows 01/14/09 http://bugs.python.org/issue4944 created javen72 json checks True/False by identity, not boolean value 01/14/09 http://bugs.python.org/issue4945 created gagenellina patch Lib/test/test__locale uses is to compare strings 01/14/09 CLOSED http://bugs.python.org/issue4946 created fijal patch sys.stdout fails to use default encoding as advertised 01/14/09 http://bugs.python.org/issue4947 created stevenjd Make heapq work with all mutable sequences 01/14/09 http://bugs.python.org/issue4948 created bboissin Constness in PyErr_NewException 01/14/09 http://bugs.python.org/issue4949 created inducer patch Redundant declaration in pyerrors.h 01/14/09 CLOSED http://bugs.python.org/issue4950 created flub failure in test_httpservers 01/15/09 http://bugs.python.org/issue4951 created pitrou Running Python Script to Run a C++ Code 01/15/09 CLOSED http://bugs.python.org/issue4952 created dominade27 cgi module cannot handle POST with multipart/form-data in 3.0 01/15/09 http://bugs.python.org/issue4953 created oopos native build of python win32 using msys under wine. 
 01/15/09  http://bugs.python.org/issue4954  created  lkcl
inconsistent, perhaps incorrect, behavior with respect to entiti
 01/15/09  http://bugs.python.org/issue4955  created  exarkun
Py_Initialize needs to be done before file load (on msys+wine)
 01/15/09  http://bugs.python.org/issue4956  created  lkcl
os.ftruncate raises IOError instead of OSError
 01/15/09  http://bugs.python.org/issue4957  created  krisvale  patch, patch
email/header.py ecre regular expression issue
 01/15/09  http://bugs.python.org/issue4958  created  oxij
inspect.formatargspec fails for keyword args without defaults, a
 01/16/09  http://bugs.python.org/issue4959  created  dariusp  patch
askdirectory from tkinter.filedialog does not work
 01/16/09 CLOSED  http://bugs.python.org/issue4960  created  kvutza
Inconsistent/wrong result of askyesno function in tkMessageBox
 01/16/09  http://bugs.python.org/issue4961  created  eb303
urlparse & nfs url (rfc 2224)
 01/16/09  http://bugs.python.org/issue4962  created  yuhl
mimetypes.guess_extension result changes after mimetypes.init()
 01/16/09  http://bugs.python.org/issue4963  created  siona

Issues Now Closed (58)
______________________

Truncate __len__() at sys.maxsize  (259 days)
 http://bugs.python.org/issue2723  haypo  patch
Return results from Python callbacks to Tcl as Tcl objects, plea  (222 days)
 http://bugs.python.org/issue3038  gpolo
thread_nt.c update  (1 days)
 http://bugs.python.org/issue3582  loewis  patch, patch
importing from UNC roots doesn't work  (137 days)
 http://bugs.python.org/issue3677  krisvale  patch
GzipFile and BZ2File should support context manager protocol  (119 days)
 http://bugs.python.org/issue3860  pitrou  patch
Building a list of tuples has non-linear performance  (94 days)
 http://bugs.python.org/issue4074  loewis  patch
Builtins treated as free variables?  (75 days)
 http://bugs.python.org/issue4220  georg.brandl
Module 'parser' fails to build  (65 days)
 http://bugs.python.org/issue4279  loewis  patch
parsermodule and grammar variable  (63 days)
 http://bugs.python.org/issue4288  loewis  patch
Thread Safe Py_AddPendingCall  (63 days)
 http://bugs.python.org/issue4293  krisvale  patch, patch
Fix performance issues in xmlrpclib  (53 days)
 http://bugs.python.org/issue4336  krisvale  patch, patch, easy
test_socket fails occassionaly in teardown: AssertionError: [Err  (53 days)
 http://bugs.python.org/issue4397  marketdickinson  patch
Is shared lib building broken on trunk for Mac OS X?  (41 days)
 http://bugs.python.org/issue4472  skip.montanaro  patch
close() seems to have limited effect  (32 days)
 http://bugs.python.org/issue4604  pitrou  patch
uuid behavior with multiple threads  (32 days)
 http://bugs.python.org/issue4607  facundobatista
python3.0 -u: unbuffered stdout  (20 days)
 http://bugs.python.org/issue4705  pitrou  patch
The function, Threading.Timer.run(), may be Inappropriate  (15 days)
 http://bugs.python.org/issue4781  amaury.forgeotdarc
Documentation changes break existing URIs  (11 days)
 http://bugs.python.org/issue4789  msapiro
Decimal to receive from_float method  (10 days)
 http://bugs.python.org/issue4796  rhettinger  needs review
wrong wsprintf usage  (11 days)
 http://bugs.python.org/issue4807  amaury.forgeotdarc  patch
fix problems with ctypes.util.find_library  (4 days)
 http://bugs.python.org/issue4861  doko  patch, patch
Faster utf-8 decoding  (3 days)
 http://bugs.python.org/issue4868  pitrou  patch
Allow buffering for HTTPResponse  (3 days)
 http://bugs.python.org/issue4879  krisvale  patch, patch
Python's timezon handling: daylight saving option  (5 days)
 http://bugs.python.org/issue4881  earendili510
Work around gethostbyaddr_r bug  (2 days)
 http://bugs.python.org/issue4884  jyasskin  patch
Use separate thread support code under MS Windows CE  (3 days)
 http://bugs.python.org/issue4893  loewis  patch
Missing strdup() under MS Windows CE  (2 days)
 http://bugs.python.org/issue4895  loewis  patch
PyIter_Next documentation inconsistent with implementation  (0 days)
 http://bugs.python.org/issue4897  benjamin.peterson
Can't Locate File with No Capital in Name  (0 days)
 http://bugs.python.org/issue4900  benjamin.peterson
inconsistent API docs for tp_iter  (1 days)
 http://bugs.python.org/issue4901  benjamin.peterson
failed to build ctypes in Python2.6.1 (even with gcc)  (0 days)
 http://bugs.python.org/issue4902  loewis
Typo for PickingError in pickle.py  (0 days)
 http://bugs.python.org/issue4904  benjamin.peterson
os.listdir fails on debug build (windows)  (0 days)
 http://bugs.python.org/issue4906  krisvale  patch, needs review
ast.literal_eval does not properly handled complex numbers  (3 days)
 http://bugs.python.org/issue4907  aronacher  patch, patch
incorrect renaming in imports  (0 days)
 http://bugs.python.org/issue4909  benjamin.peterson  patch
Windows installer Quad processor issues  (1 days)
 http://bugs.python.org/issue4911  haypo
Invalid syntax in ctypes/util.py  (0 days)
 http://bugs.python.org/issue4912  benjamin.peterson
Port sysmodule.c to MS Windows CE  (1 days)
 http://bugs.python.org/issue4915  loewis  patch
test_io is broken on UCS4  (0 days)
 http://bugs.python.org/issue4916  benjamin.peterson  patch
PyBytes_Format documented but doesn't exist in C/API  (0 days)
 http://bugs.python.org/issue4917  benjamin.peterson
Object lifetime and inner recursive function  (0 days)
 http://bugs.python.org/issue4921  pitrou
smptlib.py can raise socket.error  (2 days)
 http://bugs.python.org/issue4929  krisvale  patch, patch, needs review
Patch to add preliminary support for Haiku  (1 days)
 http://bugs.python.org/issue4933  gvanrossum  patch
Segmentation fault in bytearray tests  (0 days)
 http://bugs.python.org/issue4935  pitrou  patch
bytearrayobject.o does not depend on stringlib files  (0 days)
 http://bugs.python.org/issue4936  benjamin.peterson
Failures in test_xmlrpc  (0 days)
 http://bugs.python.org/issue4939  krisvale
decimal.Decimal.__init__ should raise an instance of ValueError  (0 days)
 http://bugs.python.org/issue4940  benjamin.peterson
accept() on AF_UNIX sockets broken on arm as of 2.5.3  (0 days)
 http://bugs.python.org/issue4942  hmoffatt
Lib/test/test__locale uses is to compare strings  (2 days)
 http://bugs.python.org/issue4946  benjamin.peterson  patch
Redundant declaration in pyerrors.h  (1 days)
 http://bugs.python.org/issue4950  benjamin.peterson
Running Python Script to Run a C++ Code  (0 days)
 http://bugs.python.org/issue4952  loewis
askdirectory from tkinter.filedialog does not work  (0 days)
 http://bugs.python.org/issue4960  loewis
Overflow in Python Profiler  (1818 days)
 http://bugs.python.org/issue881261  amaury.forgeotdarc
inspect.getmembers() breaks sometimes  (1403 days)
 http://bugs.python.org/issue1162154  amaury.forgeotdarc
Add current dir when running try_run test program  (1393 days)
 http://bugs.python.org/issue1168055  fernando_gomes  patch
cannot import extension module with Purify  (1097 days)
 http://bugs.python.org/issue1403068  amaury.forgeotdarc
Add collections.counts()  (646 days)
 http://bugs.python.org/issue1696199  rhettinger  patch
Armin's method cache optimization updated for Python 2.6  (643 days)
 http://bugs.python.org/issue1700288  collinwinter  patch

Top Issues Most Discussed (10)
______________________________

 17  Faster opcode dispatch on gcc  21 days  open    http://bugs.python.org/issue4753
 16  wave.py: add writesamples() and readsamples()  6 days  open    http://bugs.python.org/issue4913
 14  ast.literal_eval does not properly handled complex numbers  3 days  closed  http://bugs.python.org/issue4907
 13  Inconsistent unicode repr for fileobject  3 days  open    http://bugs.python.org/issue4927
 12  optimize bytecode for conditional branches  26 days  open    http://bugs.python.org/issue4715
 11  os.fsync() doesn't work as expect in Windows  3 days  open    http://bugs.python.org/issue4944
 10  Patch to add preliminary support for Haiku  1 days  closed  http://bugs.python.org/issue4933
 10  Use separate thread support code under MS Windows CE  3 days  closed  http://bugs.python.org/issue4893
  9  Failures in test_xmlrpc  0 days  closed  http://bugs.python.org/issue4939
  9  Add collections.counts()  646 days  closed  http://bugs.python.org/issue1696199

From guido at python.org Fri Jan 16 18:33:54 2009 From: guido at python.org (Guido van Rossum) Date: Fri, 16 Jan 2009 09:33:54 -0800 Subject: [Python-Dev] issue 4927: Inconsistent unicode repr for fileobject In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7882E93@exchis.ccp.ad.local> References: <930F189C8A437347B80DF2C156F7EC7F04D7882E93@exchis.ccp.ad.local> Message-ID:

Done. Rejected, with argumentation.

On Fri, Jan 16, 2009 at 1:14 AM, Kristján Valur Jónsson wrote:
> I would appreciate if some of you could chip in your opinion of this issue.
>
> http://bugs.python.org/issue4927

-- --Guido van Rossum (home page: http://www.python.org/~guido/)

From rdmurray at bitdance.com Fri Jan 16 18:36:24 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Fri, 16 Jan 2009 12:36:24 -0500 (EST) Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: <20090116165352.E6DD814CA88@irishsea.home.craig-wood.com> References: <20090116121640.GA8033@craig-wood.com> <20090116165352.E6DD814CA88@irishsea.home.craig-wood.com> Message-ID:

On Fri, 16 Jan 2009 at 16:53, Nick Craig-Wood wrote:
> [snip]
>> Perhaps 2.5's object.__init__ just swallowed all args, thus hiding bogus
>> calls.
>
> Yes it did which is the fundamental difference in behaviour between
> py2 and py3 as far as I can see.

Actually, between py<=2.5 and py>=2.6.

--RDM

From tjreedy at udel.edu Fri Jan 16 22:32:17 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 16 Jan 2009 16:32:17 -0500 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: References: <20090116121640.GA8033@craig-wood.com> Message-ID:

Alexandre Passos wrote:
> On Fri, Jan 16, 2009 at 2:12 PM, Terry Reedy wrote:
>> I do not understand. You know it is going to run the .__init__ of its one
>> and only base class, which here is object.
> > Because this class might be used as base of another class. Take this
> trivial example code (in py2.6):
>
> class A(object):
>     def __init__(self, a):
>         #super(A, self).__init__(a)
>         self.a = a
>         print "A"
>
> class B(object):
>     def __init__(self, a):
>         #super(B, self).__init__(a)
>         self.b = a
>         print "B"
>
> class C(A, B):
>     def __init__(self, a):
>         super(C, self).__init__(a)
>         self.c = a
>         print "C", dir(self)
>
> C(1)
>
> Running the last line shows that A's constructor got called, but not
> B's constructor.

Same in 3.0 with print()s

> The only way to make sure all __init__s are called in
> this example is by doing
>
> class A(object):
>     def __init__(self, a):
>         super(A, self).__init__(a)
>         self.a = a
>         print "A"
>
> class B(object):
>     def __init__(self, a):
>         #super(B, self).__init__(a)
>         self.b = a
>         print "B"
>
> class C(A, B):
>     def __init__(self, a):
>         super(C, self).__init__(a)
>         self.c = a
>         print "C", dir(self)
>
> C(1)
>
> which is really ugly (as in, why is B's call to super.__init__
> commented but not A's, if A and B are otherwise identical?)

Especially since the commenting would have to be reversed should the definition of C change to reverse the inheritance order. Should one write "class D(B, A) .." or "class D(B, C) ..", nothing would work.

> I'm not sure, but I think the proper behavior for object.__init__
> should be ignoring all args.

Given "The second use case is to support cooperative multiple inheritance in a dynamic execution environment. ... Good design dictates that this method have the same calling signature in every case (because the order of parent calls is determined at runtime and because that order adapts to changes in the class hierarchy)." the change makes object unsuitable as a multiple inheritance base.

I think as a practical matter it is anyway, since practical cases have other methods that object does not have that need to exist in the pyramid base. This must be why there have not been squawks about the change in 2.6.
So I wonder whether the proper change might have been to remove object.__init__. Terry Jan Reedy From dickinsm at gmail.com Fri Jan 16 22:34:05 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Fri, 16 Jan 2009 21:34:05 +0000 Subject: [Python-Dev] Deprecate PyNumber_Long? Message-ID: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> Now that all uses of nb_long and __long__ have disappeared from the 3.x codebase, would it make sense to mark PyNumber_Long as deprecated in the c-api documentation, and convert all existing uses (I count a grand total of 3 uses in the py3k branch!) to PyNumber_Int? (The two functions behave identically: PyNumber_Int is a macro that's #defined to expand to PyNumber_Long.) Mark From brett at python.org Fri Jan 16 23:42:06 2009 From: brett at python.org (Brett Cannon) Date: Fri, 16 Jan 2009 14:42:06 -0800 Subject: [Python-Dev] Deprecate PyNumber_Long? In-Reply-To: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> References: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> Message-ID: On Fri, Jan 16, 2009 at 13:34, Mark Dickinson wrote: > Now that all uses of nb_long and __long__ have disappeared from > the 3.x codebase, would it make sense to mark PyNumber_Long > as deprecated in the c-api documentation, and convert all existing > uses (I count a grand total of 3 uses in the py3k branch!) to > PyNumber_Int? > > (The two functions behave identically: PyNumber_Int is a macro > that's #defined to expand to PyNumber_Long.) Assuming we have been moving the C API usage to PyInt and not PyLong, then yes it makes sense. 
-Brett

From ncoghlan at gmail.com Sat Jan 17 01:38:47 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 17 Jan 2009 10:38:47 +1000 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: References: <20090116121640.GA8033@craig-wood.com> Message-ID: <49712897.6000605@gmail.com>

Terry Reedy wrote:
> Given
> "The second use case is to support cooperative multiple inheritance in
> a dynamic execution environment. ... Good design dictates that this
> method have the same calling signature in every case (because the order
> of parent calls is determined at runtime and because that order adapts
> to changes in the class hierarchy)."
> the change makes object unsuitable as a multiple inheritance base.
>
> I think as a practical matter it is anyway since practical cases have
> other methods that object does not have that need exist in the pyramid
> base. This must be why there have not been squawks about the change in
> 2.6.

I think that sums up the reasoning fairly well - the (IMO, reasonable) expectation is that a CMI hierarchy will look something like:

   object
     |
  CMIBase
     |

While the signatures of __new__ and __init__ will remain the same for all classes in the CMI hierarchy, CMIBase will assume the only class after it in the MRO is object, with signatures for __new__ and __init__ that accept no additional arguments beyond the class and instance respectively.

Note that CMIBase serves another purpose in such a hierarchy: isinstance(obj, CMIBase) becomes a concise check to see if an object is a member of the hierarchy or not.

> So I wonder whether the proper change might have been to remove
> object.__init__.

That would have broken too much code, since a lot of immutable types rely on it (they only override __new__ and leave __init__ alone).

For more info, see Guido's checkin changing the behaviour and the associated tracker issue:
http://svn.python.org/view?rev=54539&view=rev
http://bugs.python.org/issue1683368

Cheers, Nick.
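The CMIBase arrangement Nick describes can be sketched in py3k-style code. The class names below are hypothetical (they do not come from the thread); the point is only that every class in the hierarchy keeps one __init__ signature, and CMIBase alone strips the extra argument before delegating to object:

```python
# Sketch of the CMIBase pattern (hypothetical names, not code from the
# thread).  CMIBase is the only class that talks to object, so it drops
# the cooperative argument before calling object.__init__, which in
# 2.6+/3.0 accepts no extra arguments.

class CMIBase(object):
    def __init__(self, a):
        super().__init__()          # object.__init__ takes no extra args


class A(CMIBase):
    def __init__(self, a):
        super().__init__(a)         # same signature everywhere
        self.a = a


class B(CMIBase):
    def __init__(self, a):
        super().__init__(a)
        self.b = a


class C(A, B):
    def __init__(self, a):
        super().__init__(a)
        self.c = a


c = C(1)
# Unlike the example earlier in the thread, both A.__init__ and
# B.__init__ run, because the chain of super() calls walks the whole
# MRO (C -> A -> B -> CMIBase -> object) before reaching object.
assert (c.a, c.b, c.c) == (1, 1, 1)
```

Reversing the inheritance order (class C(B, A)) needs no commenting or uncommenting of super() calls, which is exactly the asymmetry the earlier example complained about.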
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia ---------------------------------------------------------------

From tjreedy at udel.edu Sat Jan 17 03:20:59 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 16 Jan 2009 21:20:59 -0500 Subject: [Python-Dev] py3k: TypeError: object.__init__() takes no parameters In-Reply-To: <49712897.6000605@gmail.com> References: <20090116121640.GA8033@craig-wood.com> <49712897.6000605@gmail.com> Message-ID:

Nick Coghlan wrote:
> Terry Reedy wrote:
>> So I wonder whether the proper change might have been to remove
>> object.__init__.
>
> That would have broken too much code, since a lot of immutable types
> rely on it (they only override __new__ and leave __init__ alone).

In what way do they depend on the equivalent of "def f(): pass"? If, during the object creation process, "if hasattr(newob, '__init__'):" were added after calling cls.__new__ and before calling newob.__init__, what dependency would be left?

To repeat a previous comment, the doc sentence beginning "If a base class has an __init__() method," implies that it is intended to be possible for classes to not have an __init__ method. Should the doc be changed? Is this just a holdover from pre-object-class days?

> For more info, see Guido's checkin changing the behaviour and the
> associated tracker issue:
> http://svn.python.org/view?rev=54539&view=rev
> http://bugs.python.org/issue1683368

Ah yes. In that thread I complained that

>>> object.__init__.__doc__
'x.__init__(...) initializes x; see x.__class__.__doc__ for signature'

(unchanged in 3.0) is uninformative. Why cannot object.__init__.__doc__ tell the truth? "object.__init__(self) takes no other args and does nothing"

The signature of a class as a callable is *not* the signature of its __init__ method!
In particular

>>> object.__class__.__doc__
"type(object) -> the object's type\ntype(name, bases, dict) -> a new type"

(also unchanged in 3.0) is irrelevant and uninformative as to whether object.__init__ accepts (as it used to) or rejects (as it now does) args other than 'self'.

Terry Jan Reedy

From barry at python.org Sat Jan 17 03:42:34 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 16 Jan 2009 21:42:34 -0500 Subject: [Python-Dev] Problems with unicode_literals Message-ID: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

I've been playing with 'from __future__ import unicode_literals' just to see how unicode unclean some of my code was. Almost everything was fairly easy to fix, but I found two interesting situations. One seems fairly shallow and might arguably be fixable in Python 2.6 (but probably not :). The other clearly can't be addressed in Python 2.6, but the question is whether it should be changed for Python 2.7.

Here's some sample code:

- -----snip snip-----
from __future__ import unicode_literals

def foo(a=None, b=None):
    print a, b

# This is a TypeError
foo(**{'a': 1, 'b': 2})

foo(**dict(a=1, b=2))

from optparse import OptionParser

parser = OptionParser()

# This also raises a TypeError
parser.add_option('-f', '--foo')
- -----snip snip-----

The add_option() failure is a one-line fix.
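The rule Barry is hitting in 2.6 (keyword names expanded with ** must be real str objects, and unicode_literals makes the dict keys unicode) survives into py3k in a different form. The following is my own Python 3 sketch, not code from the thread: in 3.x all text literals are already str, so Barry's first case works, but a non-str key still triggers the same TypeError:

```python
# Python 3 analogue of the 2.6 problem: **-expansion requires str keys.
# In 3.x text literals are str, so the unicode_literals case just works;
# a bytes key plays the role the unicode key played in 2.x.

def foo(a=None, b=None):
    return (a, b)

# Fine in 3.x: the keys are str.
assert foo(**{'a': 1, 'b': 2}) == (1, 2)

# A non-str key is still rejected at call time.
try:
    foo(**{b'a': 1})
except TypeError as exc:
    assert 'strings' in str(exc)   # "keywords must be strings"
else:
    raise AssertionError("expected TypeError for non-str keyword")
```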
- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Darwin)

iQCVAwUBSXFFmnEjvBPtnXfVAQKx0QP/Un7RG++ugtgywBHXd+pWTD2V7QC1JDqP
rpIkwqocicMZiNBbg0NS5/TSGHa0CyaQphDmBBeNFr7jFb4rxdUESyLmBNNIz7dV
/PEBZxJp5ZjTGCIylEJoXHMSN102wqe/n6QAAGqV5ce7e3Fhr8b7kU2m7cMT6yDQ
/1b4riH/H0Y=
=dp0u
-----END PGP SIGNATURE-----

From guido at python.org Sat Jan 17 04:26:00 2009 From: guido at python.org (Guido van Rossum) Date: Fri, 16 Jan 2009 19:26:00 -0800 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> Message-ID:

Is the issue that in foo(**{'a': 1, 'b': 1}) the 'a' and 'b' are unicode and not acceptable as keyword arguments? I agree that should be fixed, though I'm not sure it'll be easy.

I'm not sure you're saying that the optparse case shouldn't be fixed in 2.6. or the foo(**{...}) shouldn't be fixed in 2.6, though I think the latter.

On Fri, Jan 16, 2009 at 6:42 PM, Barry Warsaw wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> I've been playing with 'from __future__ import unicode_literals' just to see
> how unicode unclean some of my code was. Almost everything was fairly easy
> to fix but I found two interesting situations. One seems fairly shallow and
> might arguably be fixable in Python 2.6 (but probably not :). The other
> clearly can't be addressed in Python 2.6, but the question is whether it
> should be changed for Python 2.7.
>
> Here's some sample code:
>
> - -----snip snip-----
> from __future__ import unicode_literals
>
> def foo(a=None, b=None):
>     print a, b
>
> # This is a TypeError
> foo(**{'a': 1, 'b': 2})
>
> foo(**dict(a=1, b=2))
>
> from optparse import OptionParser
>
> parser = OptionParser()
>
> # This also raises a TypeError
> parser.add_option('-f', '--foo')
> - -----snip snip-----
>
> The add_option() failure is a one-line fix.
>
> - -Barry
>
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.9 (Darwin)
>
> iQCVAwUBSXFFmnEjvBPtnXfVAQKx0QP/Un7RG++ugtgywBHXd+pWTD2V7QC1JDqP
> rpIkwqocicMZiNBbg0NS5/TSGHa0CyaQphDmBBeNFr7jFb4rxdUESyLmBNNIz7dV
> /PEBZxJp5ZjTGCIylEJoXHMSN102wqe/n6QAAGqV5ce7e3Fhr8b7kU2m7cMT6yDQ
> /1b4riH/H0Y=
> =dp0u
> -----END PGP SIGNATURE-----
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/guido%40python.org

-- --Guido van Rossum (home page: http://www.python.org/~guido/)

From barry at python.org Sat Jan 17 04:45:28 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 16 Jan 2009 22:45:28 -0500 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> Message-ID: <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Jan 16, 2009, at 10:26 PM, Guido van Rossum wrote:
> Is the issue that in foo(**{'a': 1, 'b': 1}) the 'a' and 'b' are
> unicode and not acceptable as keyword arguments? I agree that should
> be fixed, though I'm not sure it'll be easy.
>
> I'm not sure you're saying that the optparse case shouldn't be fixed
> in 2.6. or the foo(**{...}) shouldn't be fixed in 2.6, though I think
> the latter.

Yep, sorry, it's been a long week. ;)

The optparse one could easily be fixed for 2.6, if we agree it should be fixed. This untested patch should do it I think:

Index: Lib/optparse.py
===================================================================
- --- Lib/optparse.py (revision 68465)
+++ Lib/optparse.py (working copy)
@@ -994,7 +994,7 @@
         """add_option(Option)
            add_option(opt_str, ..., kwarg=val, ...)
         """
- -        if type(args[0]) is types.StringType:
+        if type(args[0]) in types.StringTypes:
             option = self.option_class(*args, **kwargs)
         elif len(args) == 1 and not kwargs:
             option = args[0]

Should this be fixed, or wait for 2.7?

The fact that 'a' and 'b' are unicodes and not accepted as keyword arguments is probably the tougher problem. I haven't yet looked at what it might take to fix. Is it worth fixing in 2.6 or is this a wait-for-2.7 thing?

- -Barry
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Darwin)

iQCVAwUBSXFUWHEjvBPtnXfVAQK0cgQAt5CqfAYmDCCaN7XkplrYg1mr2B6SBj5Q
oPGxuYaQAu5k4iEcicl27JFElbzzAqMtJ/bpRPVajQlagZt8s7o+dbn/dhHvIBpQ
u2nPUAtBcfoqvfMvoaCmA9xixI/N4z1dAJjkifwG9n2Dh/PhDzc6KuFFXthh6Euy
KnguC64McvE=
=U2B+
-----END PGP SIGNATURE-----

From benjamin at python.org Sat Jan 17 04:52:40 2009 From: benjamin at python.org (Benjamin Peterson) Date: Fri, 16 Jan 2009 21:52:40 -0600 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> Message-ID: <1afaf6160901161952x77ebfd04x241db3eea1989901@mail.gmail.com>

On Fri, Jan 16, 2009 at 9:45 PM, Barry Warsaw wrote:
>
> The optparse one could easily be fixed for 2.6, if we agree it should be
> fixed. This untested patch should do it I think:
>
> Index: Lib/optparse.py
> ===================================================================
> - --- Lib/optparse.py (revision 68465)
> +++ Lib/optparse.py (working copy)
> @@ -994,7 +994,7 @@
>          """add_option(Option)
>             add_option(opt_str, ..., kwarg=val, ...)
>          """
> - -        if type(args[0]) is types.StringType:
> +        if type(args[0]) in types.StringTypes:
>              option = self.option_class(*args, **kwargs)
>          elif len(args) == 1 and not kwargs:
>              option = args[0]

It'd probably be better to replace that whole line with isinstance(args[0], basestring).
>
> The fact that 'a' and 'b' are unicodes and not accepted as keyword arguments
> is probably the tougher problem. I haven't yet looked at what it might take
> to fix. Is it worth fixing in 2.6 or is this a wait-for-2.7 thing?

Actually, this looks like a one line fix, too:

--- Python/ceval.c (revision 68625)
+++ Python/ceval.c (working copy)
@@ -2932,7 +2932,8 @@
     PyObject *keyword = kws[2*i];
     PyObject *value = kws[2*i + 1];
     int j;
-    if (keyword == NULL || !PyString_Check(keyword)) {
+    if (keyword == NULL || !(PyString_Check(keyword) ||
+                             PyUnicode_Check(keyword))) {
         PyErr_Format(PyExc_TypeError,
                      "%.200s() keywords must be strings",
                      PyString_AsString(co->co_name));

But I agree with Guido when he says this should be a 2.7 feature.

-- Regards, Benjamin

From martin at v.loewis.de Sat Jan 17 09:25:55 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 17 Jan 2009 09:25:55 +0100 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> Message-ID: <49719613.20302@v.loewis.de>

> Index: Lib/optparse.py
> ===================================================================
> --- Lib/optparse.py (revision 68465)
> +++ Lib/optparse.py (working copy)
> @@ -994,7 +994,7 @@
>          """add_option(Option)
>             add_option(opt_str, ..., kwarg=val, ...)
>          """
> -        if type(args[0]) is types.StringType:
> +        if type(args[0]) in types.StringTypes:
>              option = self.option_class(*args, **kwargs)
>          elif len(args) == 1 and not kwargs:
>              option = args[0]
>
> Should this be fixed, or wait for 2.7?

It would be a new feature. So if we apply a strict policy, it can't be added to 2.6.

Regards, Martin

From dickinsm at gmail.com Sat Jan 17 09:53:02 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Sat, 17 Jan 2009 08:53:02 +0000 Subject: [Python-Dev] Deprecate PyNumber_Long?
In-Reply-To: References: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> Message-ID: <5c6f2a5d0901170053k76d523dbg5b5e036f9685b641@mail.gmail.com> On Fri, Jan 16, 2009 at 10:42 PM, Brett Cannon wrote: > Assuming we have been moving the C API usage to PyInt and not PyLong, > then yes it makes sense. Hmm. I don't think there's been any such move. Maybe there should be. Benjamin wondered aloud about deprecating PyNumber_Long in the issue 4910 discussion; I suggested deprecating PyNumber_Int instead, but on reflection I think Benjamin's right: it seems neater to keep the PyNumber_Int <-> int() <-> nb_int naming connections than the PyNumber_Long <-> PyLong ones. At any rate, I think it would be good to deprecate one or the other; I don't really have a strong opinion about which one. Mark From solipsis at pitrou.net Sat Jan 17 13:53:25 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 17 Jan 2009 12:53:25 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Deprecate_PyNumber=5FLong=3F?= References: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> <5c6f2a5d0901170053k76d523dbg5b5e036f9685b641@mail.gmail.com> Message-ID: Mark Dickinson gmail.com> writes: > > Benjamin wondered aloud about deprecating PyNumber_Long > in the issue 4910 discussion; I suggested deprecating > PyNumber_Int instead, but on reflection I think Benjamin's right: > it seems neater to keep the PyNumber_Int <-> int() <-> nb_int > naming connections than the PyNumber_Long <-> PyLong > ones. The C API uses the Long (rather than Int) wording, so it would be rather strange to have an outlier in PyNumber_Int. We should keep PyNumber_Long instead. 
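The nb_int/nb_long merge the thread is discussing is visible at the Python level too. The following is my own py3k sketch (the class is hypothetical, not from the thread): int() conversion goes through a single __int__ slot, with no separate __long__ any more, which is why the C side ends up with one underlying function whatever the PyNumber_* name:

```python
# In py3k the __long__ slot is gone: int() drives __int__ (nb_int)
# only, mirroring the single underlying C conversion function that
# PyNumber_Int/PyNumber_Long both name.

class Angle:
    """Hypothetical wrapper type used only to illustrate __int__."""

    def __init__(self, degrees):
        self.degrees = degrees

    def __int__(self):
        return int(self.degrees)    # truncate toward zero, like int()

a = Angle(123.9)
assert int(a) == 123                # int() called Angle.__int__
assert not hasattr(a, '__long__')   # no separate long-conversion slot
```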
From victor.stinner at haypocalc.com Sat Jan 17 14:03:13 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Sat, 17 Jan 2009 14:03:13 +0100 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> Message-ID: <200901171403.13806.victor.stinner@haypocalc.com>

On Saturday 17 January 2009 04:45:28, Barry Warsaw wrote:
> The optparse one could easily be fixed for 2.6, if we agree it should
> be fixed. This untested patch should do it I think:
>
> Index: Lib/optparse.py
> ===================================================================
> --- Lib/optparse.py (revision 68465)
> +++ Lib/optparse.py (working copy)
> @@ -994,7 +994,7 @@
>          """add_option(Option)
>             add_option(opt_str, ..., kwarg=val, ...)
>          """
> -        if type(args[0]) is types.StringType:
> +        if type(args[0]) in types.StringTypes:
>              option = self.option_class(*args, **kwargs)
>          elif len(args) == 1 and not kwargs:
>              option = args[0]

See also related issues:

- http://bugs.python.org/issue2931: optparse: various problems with unicode and gettext
- http://bugs.python.org/issue4319: optparse and non-ascii help strings

-- Victor Stinner aka haypo http://www.haypocalc.com/blog/

From barry at barrys-emacs.org Sat Jan 17 14:08:17 2009 From: barry at barrys-emacs.org (Barry Scott) Date: Sat, 17 Jan 2009 13:08:17 +0000 Subject: [Python-Dev] bundlebuilder broken in 2.6 Message-ID: <7043CB7C-18F4-4E16-AE0C-CDA6BA311044@barrys-emacs.org>

It seems that the packaging of Mac Python 2.6 is missing at least one file that is critical to the operation of bundlebuilder.py.

I've logged the issue as http://bugs.python.org/issue4937.

Barry

From bugtrack at roumenpetrov.info Sat Jan 17 15:34:34 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 17 Jan 2009 16:34:34 +0200 Subject: [Python-Dev] issue2233: ...
extra slash before $(DESTDIR) ... cygwin install fail Message-ID: <4971EC7A.1040602@roumenpetrov.info>

Hi All,

May I ask for a vote on how this issue should be resolved? The problem is the command "... setup.py install ... --root=/$(DESTDIR)" if DESTDIR is specified:

error: could not create '//...': No such host or network path

Currently issue http://bugs.python.org/issue2233 proposes three solutions:

1) Replace the "slash before $(DESTDIR)" with "/./". Let's call this solution "hacky".

2) Let's call the second one "shell script based":
   INSTROOT=$(DESTDIR); test -z "$$INSTROOT" && INSTROOT=/; ... setup.py install ... --root=$$INSTROOT

3) The third, "shell parameter expansion": ... --root=$${DESTDIR-/}
   The question for the last one is: "So, if a user executes "make DESTDIR= install", then the build will fail. Or, maybe we shouldn't worry about that corner case."

Which solution is preferred? What about other solutions?

Roumen

From bugtrack at roumenpetrov.info Sat Jan 17 16:16:58 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 17 Jan 2009 17:16:58 +0200 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. In-Reply-To: References: Message-ID: <4971F66A.2040805@roumenpetrov.info>

Luke Kenneth Casson Leighton wrote:
[SNIP]
> i'm going to _try_ to merge in #3871 but it's... the prospect of
> sitting waiting for configure to take THREE hours to complete, due to
> /bin/sh.exe instances taking TWO SECONDS _each_ to start up does not
> really fill me with deep joy.

As of version 1.1.8, msys bash can be run in wine. Maybe wine issue 12046 is not enough to run a Bourne shell in wine. The ash from the pw32 project (not updated in the past 6 years :( ) works in wine, but the problem is the same - it is too slow, even more so.

[SNIP]
> it's all a bit odd - it still feels like things are being
> cross-compiled... but they're not... it's just that setup.py has paths
> that don't _quite_ match up with the msys environment...
You could use CPPFLAGS and LDFLAGS to point to other locations. Usually wine drive Z: is mapped to the filesystem root and visible as /z in msys.

[SNIP]
> the regression testing is _great_ fun! some of the failures are
> really quite spectacular, but surprisingly there are less than
> anticipated.

About 5 tests fail in the emulated environment due to bugs in the emulator. Maybe some math tests fail. Python math tests are too strict for msvcrt 7.0/6.0 (the default runtime). A simple workaround is to use the long double functions from the mingwex library. The other workaround is to build the whole of python (including modules) with msvcrt 9.0, but I'm not sure that I will publish technical details about this build soon, as "DLL hell" is replaced by "Assembly hell" (see python issues related to vista and MSVC build).

[SNIP]

Roumen

From benjamin at python.org Sat Jan 17 17:22:22 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 17 Jan 2009 10:22:22 -0600 Subject: [Python-Dev] Deprecate PyNumber_Long? In-Reply-To: References: <5c6f2a5d0901161334o7e2608a0udd5c28592c36f821@mail.gmail.com> <5c6f2a5d0901170053k76d523dbg5b5e036f9685b641@mail.gmail.com> Message-ID: <1afaf6160901170822p6909140dqab76a419cdc1c3ae@mail.gmail.com>

On 1/17/09, Antoine Pitrou wrote:
> Mark Dickinson <dickinsm at gmail.com> writes:
>>
>> Benjamin wondered aloud about deprecating PyNumber_Long
>> in the issue 4910 discussion; I suggested deprecating
>> PyNumber_Int instead, but on reflection I think Benjamin's right:
>> it seems neater to keep the PyNumber_Int <-> int() <-> nb_int
>> naming connections than the PyNumber_Long <-> PyLong
>> ones.
>
> The C API uses the Long (rather than Int) wording, so it would be rather
> strange
> to have an outlier in PyNumber_Int. We should keep PyNumber_Long instead.

I agree with Antoine here. Using nb_int instead of nb_long is rather unfortunate, but I think it's more important to keep the C-API function names consistent.
-- Regards, Benjamin From lkcl at lkcl.net Sat Jan 17 18:41:12 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 17 Jan 2009 17:41:12 +0000 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. In-Reply-To: <4971F66A.2040805@roumenpetrov.info> References: <4971F66A.2040805@roumenpetrov.info> Message-ID: hiya roumen, good to hear from you - i've been merging in the work that you did, on mingw native-and-cross compiles. got a couple of questions, will post them in the bugreport ok? On Sat, Jan 17, 2009 at 3:16 PM, Roumen Petrov wrote: > Luke Kenneth Casson Leighton wrote: > [SNIP] >> >> i'm going to _try_ to merge in #3871 but it's... the prospect of >> sitting waiting for configure to take THREE hours to complete, due to >> /bin/sh.exe instances taking TWO SECONDS _each_ to start up does not >> really fill me with deep joy. > > As from version 1.1.8 msys bash could be run in wine. ah, that might be worth trying. thank you! >> it's all a bit odd - it still feels like things are being >> cross-compiled... but they're not... it's just that setup.py has paths >> that don't _quite_ match up with the msys environment... > > You could use CPPFLAGS and LDFLAGS to point other locations. > Usually wine drive Z: is mapped to filesystem root and visible as /z in > msys. oh don't get me wrong - it all works, all compiles absolutely fine, modules and everything... it just... _feels_ like a cross-compile, because of msys. you're sort-of in unix-land, yet the resultant binary python.exe is most _definitely_ "c:/"ey. >> the regression testing is _great_ fun! some of the failures are >> really quite spectacular, but surprisingly there are less than >> anticipated. > > About 5 test fail in emulated environment due bugs in emulator. i have 12 that fail - but if i replace msvcrt builtin with msvcrt native, all but two or three disappear. 
the S8I one (which is due to gcc, you already found that i noticed); tmpfile() tries to write to z:/ which is of course / - the root filesystem; and os.environ['HELLO'] = 'World; os.popen("/bin/sh echo $HELLO").read() != 'World' but i'm not sure if that one _should_ succeed: the only reason it got run is because i _happened_ to have msys installed. > May be some math tests fail. Python math tests are too strict for msvcrt > 7.0/6.0 (default runtime). Simple work around is to use long double > functions from mingwex library. oh i noticed in http://bugs.python.org/issue3871 that you replaced some of the math functions. > Other work-around is to build whole python (including modules) with msvcrt > 9.0 but I'm not sure that I will publish soon technical details about this > build as "DLL hell" is replaces by "Assembly hell" (see python issues > related to vista and MSVC build). ooo, scarey. oh! roumen, very important: martin says we have to be _really_ careful about releases of mingw-compiled python.exe and libpython2.N.dll - they _have_ to match up with the python-win32 build msvcrt otherwise it will cause absolute mayhem. i'm currently adding an msvcr80 specfile so i can build under wine - see http://bugs.python.org/msg79978 which requires that you customise wine to get it to drop something blah blah don't care in the least, i just want it all to work :) l. From lkcl at lkcl.net Sat Jan 17 18:47:21 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 17 Jan 2009 17:47:21 +0000 Subject: [Python-Dev] report on building of python 2.5.2 under msys under wine on linux. In-Reply-To: <4971F66A.2040805@roumenpetrov.info> References: <4971F66A.2040805@roumenpetrov.info> Message-ID: > About 5 test fail in emulated environment due bugs in emulator. 
oh - nearly forgot: several of the ctypes tests fail quite spectacularly :) From guido at python.org Sat Jan 17 20:25:13 2009 From: guido at python.org (Guido van Rossum) Date: Sat, 17 Jan 2009 11:25:13 -0800 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: <49719613.20302@v.loewis.de> References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> <49719613.20302@v.loewis.de> Message-ID: On Sat, Jan 17, 2009 at 12:25 AM, "Martin v. L?wis" wrote: >> Index: Lib/optparse.py >> =================================================================== >> --- Lib/optparse.py (revision 68465) >> +++ Lib/optparse.py (working copy) >> @@ -994,7 +994,7 @@ >> """add_option(Option) >> add_option(opt_str, ..., kwarg=val, ...) >> """ >> - if type(args[0]) is types.StringType: >> + if type(args[0]) in types.StringTypes: >> option = self.option_class(*args, **kwargs) >> elif len(args) == 1 and not kwargs: >> option = args[0] >> >> Should this be fixed, or wait for 2.7? > > It would be a new feature. So if we apply a strict policy, it > can't be added to 2.6. That seems a bit *too* strict to me, as long as the Unicode strings contain just ASCII. I'm fine with fixing both cases Barry mentioned, especially if it otherwise breaks "from __future__ import unicode_literals". I expect though that as one tries more things one will find more things broken with that mode. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Sat Jan 17 20:41:36 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 17 Jan 2009 20:41:36 +0100 Subject: [Python-Dev] Problems with unicode_literals In-Reply-To: References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org> <9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org> <49719613.20302@v.loewis.de> Message-ID: <49723470.8010703@v.loewis.de> > That seems a bit *too* strict to me, as long as the Unicode strings > contain just ASCII. 
I'm fine with fixing both cases Barry mentioned, > especially if it otherwise breaks "from __future__ import > unicode_literals". I expect though that as one tries more things one > will find more things broken with that mode. Of course, the proposed patch would widen it to arbitrary Unicode command options; nothing in the patch restricts it to pure ASCII. Even when only ASCII characters are used in the option name, we might still get encoding exceptions or warnings if a non-ASCII byte string (e.g. from the command line) happens to be compared with the option name (although I just now couldn't produce such a case). Regards, Martin P.S. optparse already defines a function isbasestring; it might be better to use that one instead. From nad at acm.org Sat Jan 17 21:08:11 2009 From: nad at acm.org (Ned Deily) Date: Sat, 17 Jan 2009 12:08:11 -0800 Subject: [Python-Dev] bundlebuilder broken in 2.6 References: <7043CB7C-18F4-4E16-AE0C-CDA6BA311044@barrys-emacs.org> Message-ID: In article <7043CB7C-18F4-4E16-AE0C-CDA6BA311044 at barrys-emacs.org>, Barry Scott wrote: > It seems that the packaging of Mac Python 2.6 is missing at least one > file > that is critical to the operation of bundlebuilder.py. > > I've logged the issue as http://bugs.python.org/issue4937. I've noted a workaround in the tracker: just copy the file from an older version of Python. It's a simple xml plist and I don't think its contents are all that critical anyway. While the build should be fixed for 2.6+ (I'll send a patch), note that bundlebuilder is gone in 3.0. 
--
Ned Deily,
nad at acm.org

From lkcl at lkcl.net Sat Jan 17 21:46:40 2009
From: lkcl at lkcl.net (Luke Kenneth Casson Leighton)
Date: Sat, 17 Jan 2009 20:46:40 +0000
Subject: [Python-Dev] off-by-one on ftell on wine, but no regression test to catch it
Message-ID:

folks, hi,

http://bugs.winehq.org/show_bug.cgi?id=16982

related to this:

import os
from array import array

TESTFN = "testfile.txt"

def fail(x):
    print x

testlines = [
    "spam, spam and eggs\n",
    "eggs, spam, ham and spam\n",
    "saussages, spam, spam and eggs\n",
    "spam, ham, spam and eggs\n",
    "spam, spam, spam, spam, spam, ham, spam\n",
    "wonderful spaaaaaam.\n"
]

try:
    # Prepare the testfile
    bag = open(TESTFN, "w")
    bag.writelines(testlines)
    bag.close()

    f = open(TESTFN)
    testline = testlines.pop(0)
    line = f.readline()
    testline = testlines.pop(0)
    buf = array("c", "\x00" * len(testline))
    f.readinto(buf)
    testline = testlines.pop(0)
    print "length of testline:", len(testline)
    line = f.read(len(testline))
    if line != testline:
        fail("read() after next() with empty buffer "
             "failed. Got %r, expected %r" % (line, testline))
    lines = f.readlines()
    if lines != testlines:
        fail("readlines() after next() with empty buffer "
             "failed. Got %r, expected %r" % (line, testline))
    f.close()
finally:
    os.unlink(TESTFN)

which is a reduced version of Lib/test/test_file.py

running under wine, ftell() has an off-by-one bug, where the file
position accidentally doesn't include the fact that the CR of the CRLF
has been skipped. but, now with the fgets() bug fixed, the regression
tests pass, but there's still the off-by-one bug which _isn't_ caught.

this really should be added as a windows test. actually, it should be
added as a test for everything: it's not always reasonable to assume
that OSes get their file positions right :)

l.
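The invariant that the wine bug violates — that seek(tell()) returns the stream to exactly where reading left off — can be checked without hard-coding any byte offsets. A minimal sketch of such a test (the helper name and file contents here are made up for illustration, not taken from test_file.py):

```python
import os
import tempfile

def tell_roundtrip_ok(path):
    # After readline(), tell() must report a position from which the
    # rest of the file can be re-read identically via seek().
    f = open(path, "r")
    f.readline()            # consume the first line (CRLF translated)
    pos = f.tell()          # position reported by the underlying ftell()
    rest = f.read()         # remainder as seen by sequential reading
    f.seek(pos)             # jump back to the reported position
    again = f.read()        # remainder as seen after seek(tell())
    f.close()
    return rest == again    # False would expose an ftell/CRLF bug

fd, path = tempfile.mkstemp(suffix=".txt")
os.write(fd, b"spam, spam and eggs\r\nwonderful spam\r\n")
os.close(fd)
try:
    result = tell_roundtrip_ok(path)
finally:
    os.unlink(path)
```

Because it only demands internal consistency, the round-trip form holds on any platform whether or not the C library translates CRLF, so it would catch the wine off-by-one without assuming Windows text-mode behaviour.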
From solipsis at pitrou.net Sat Jan 17 21:58:04 2009
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 17 Jan 2009 20:58:04 +0000 (UTC)
Subject: [Python-Dev] off-by-one on ftell on wine, but no regression test to catch it
References:
Message-ID:

Luke Kenneth Casson Leighton lkcl.net> writes:
> running under wine, ftell() has an off-by-one bug, where the file
> position accidentally doesn't include the fact that the CR of the CRLF
> has been skipped. but, now with the fgets() bug fixed, the regression
> tests pass, but there's still the off-by-one bug which _isn't_ caught.

I don't understand why we should have a test for this. The regression
tests are meant to catch bugs in Python itself, not in the underlying
OS libs... The only reason to care about this would be if the
aforementioned OS bug managed to crash the interpreter.

From rvossler at qwest.net Sat Jan 17 22:12:23 2009
From: rvossler at qwest.net (Roger Vossler)
Date: Sat, 17 Jan 2009 14:12:23 -0700
Subject: [Python-Dev] Python 3 for Mac OSX
Message-ID: <655D2F11-818D-43E4-8234-1DD6C3B749B3@qwest.net>

Hi,

> Any idea when the Mac OSX disk image (.dmg file) for Python 3.x
> will be available?

Are we talking about days? weeks? Or, should I start learning how to
build Python from source? Any info would be appreciated.

Thanks, Roger.....

From martin at v.loewis.de Sat Jan 17 23:17:08 2009
From: martin at v.loewis.de (Martin v. Löwis)
Date: Sat, 17 Jan 2009 23:17:08 +0100
Subject: [Python-Dev] Python 3 for Mac OSX
In-Reply-To: <655D2F11-818D-43E4-8234-1DD6C3B749B3@qwest.net>
References: <655D2F11-818D-43E4-8234-1DD6C3B749B3@qwest.net>
Message-ID: <497258E4.60601@v.loewis.de>

> Are we talking about days? weeks? Or, should I start learning how to
> build Python from source? Any info would be appreciated.

The latter. Don't ever expect that others will help you. This is open
source; you have to help yourself.
Regards, Martin From nad at acm.org Sat Jan 17 23:22:26 2009 From: nad at acm.org (Ned Deily) Date: Sat, 17 Jan 2009 14:22:26 -0800 Subject: [Python-Dev] Python 3 for Mac OSX References: <655D2F11-818D-43E4-8234-1DD6C3B749B3@qwest.net> Message-ID: In article <655D2F11-818D-43E4-8234-1DD6C3B749B3 at qwest.net>, Roger Vossler wrote: > > Any idea when the Mac OSX disk image (.dmg file) for Python 3.x > > will be available? > > Are we talking about days? weeks? Or, should I start learning how to > build Python from > source? Any info would be appreciated. I don't know what other activity is going on but I am working on it right now. I expect to submit several patches in the next few days that should allow the production of at least a test installer image. -- Ned Deily, nad at acm.org From luke.leighton at googlemail.com Sat Jan 17 23:06:01 2009 From: luke.leighton at googlemail.com (Luke Kenneth Casson Leighton) Date: Sat, 17 Jan 2009 22:06:01 +0000 Subject: [Python-Dev] http://bugs.python.org/issue4977 - assumption that maxint64 fits into long on 32-bit systems Message-ID: this was found as part of the regression tests, compiling python under wine under msys with mingw32. test_maxint64 failed. i tracked it down to the assumption that a long will fit into 32-bits, which obviously... it won't! the simplest case is to add a test for the length of the data string being 10 or more characters, on the basis that 9 or less is definitely going into an int; 11 or more is _definitely_ going into a long, and... well... bugger the 10 case, just stuff it in a long rather than waste time trying to find out how many chars it is. if anyone wants to do a more perfect version of this, simple patch, they're mooore than welcome. l. 
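The digit-count shortcut Luke describes above comes down to comparing against INT_MAX, whose decimal form (2147483647) is ten digits long. A rough sketch of the heuristic in Python (the function name is hypothetical, and a 32-bit signed C long is assumed):

```python
INT32_MAX = 2**31 - 1   # 2147483647, ten decimal digits

def fits_in_32bit_long(digits):
    """digits: a decimal string of the magnitude, without sign."""
    if len(digits) <= 9:
        return True       # at most 999999999, always fits
    if len(digits) >= 11:
        return False      # at least 10000000000, never fits
    # The ambiguous ten-digit case; the patch described above just
    # punts these into a Python long, which is always safe.
    return int(digits) <= INT32_MAX

checks = [fits_in_32bit_long("999999999"),      # 9 digits: fits
          fits_in_32bit_long("2147483647"),     # INT32_MAX itself: fits
          fits_in_32bit_long("2147483648"),     # INT32_MAX + 1: doesn't
          fits_in_32bit_long("10000000000")]    # 11 digits: doesn't
```

Resolving the ten-digit case exactly, as here, costs one extra comparison; treating it as "always long", as the message suggests, is also correct, merely slightly slower for values that would have fit.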
From barry at python.org Sat Jan 17 23:58:59 2009
From: barry at python.org (Barry Warsaw)
Date: Sat, 17 Jan 2009 17:58:59 -0500
Subject: [Python-Dev] Problems with unicode_literals
In-Reply-To: <1afaf6160901161952x77ebfd04x241db3eea1989901@mail.gmail.com>
References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org>
	<9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org>
	<1afaf6160901161952x77ebfd04x241db3eea1989901@mail.gmail.com>
Message-ID: <36D31F4D-0B8B-46D5-92EA-8A3259D25CA3@python.org>

On Jan 16, 2009, at 10:52 PM, Benjamin Peterson wrote:
> On Fri, Jan 16, 2009 at 9:45 PM, Barry Warsaw wrote:
>>
>> The optparse one could easily be fixed for 2.6, if we agree it
>> should be fixed. This untested patch should do it I think:
>>
>> Index: Lib/optparse.py
>> ===================================================================
>> --- Lib/optparse.py     (revision 68465)
>> +++ Lib/optparse.py     (working copy)
>> @@ -994,7 +994,7 @@
>>             """add_option(Option)
>>                add_option(opt_str, ..., kwarg=val, ...)
>>             """
>> -        if type(args[0]) is types.StringType:
>> +        if type(args[0]) in types.StringTypes:
>>             option = self.option_class(*args, **kwargs)
>>         elif len(args) == 1 and not kwargs:
>>             option = args[0]
>
> It'd probably be better to replace that whole line with
> isinstance(args[0], basestring).

I thought about that, but clearly the style of that module is to use
the 'is' test. I'm assuming that's because of some required backward
compatibility reason, but honestly I didn't check, I just copied the
style of the file.

>> The fact that 'a' and 'b' are unicodes and not accepted as keyword
>> arguments is probably the tougher problem. I haven't yet looked at
>> what it might take
>> to fix. Is it worth fixing in 2.6 or is this a wait-for-2.7 thing?
>
> Actually, this looks like a one line fix, too:
>
> --- Python/ceval.c      (revision 68625)
> +++ Python/ceval.c      (working copy)
> @@ -2932,7 +2932,8 @@
>                         PyObject *keyword = kws[2*i];
>                         PyObject *value = kws[2*i + 1];
>                         int j;
> -                       if (keyword == NULL || !PyString_Check(keyword)) {
> +                       if (keyword == NULL || !(PyString_Check(keyword) ||
> +                                                PyUnicode_Check(keyword))) {
>                                 PyErr_Format(PyExc_TypeError,
>                                              "%.200s() keywords must be strings",
>                                              PyString_AsString(co->co_name));

That seems reasonable.

> But I agree with Guido when he says this should be a 2.7 feature.

As does that.

-Barry

From barry at python.org Sun Jan 18 00:03:16 2009
From: barry at python.org (Barry Warsaw)
Date: Sat, 17 Jan 2009 18:03:16 -0500
Subject: [Python-Dev] Problems with unicode_literals
In-Reply-To:
References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org>
	<9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org>
	<49719613.20302@v.loewis.de>
Message-ID: <3F508C36-9AB0-49E2-A5D0-1201BCFB530E@python.org>

On Jan 17, 2009, at 2:25 PM, Guido van Rossum wrote:
> On Sat, Jan 17, 2009 at 12:25 AM, "Martin v. Löwis" wrote:
>>> Index: Lib/optparse.py
>>> ===================================================================
>>> --- Lib/optparse.py     (revision 68465)
>>> +++ Lib/optparse.py     (working copy)
>>> @@ -994,7 +994,7 @@
>>>             """add_option(Option)
>>>                add_option(opt_str, ..., kwarg=val, ...)
>>>             """
>>> -        if type(args[0]) is types.StringType:
>>> +        if type(args[0]) in types.StringTypes:
>>>             option = self.option_class(*args, **kwargs)
>>>         elif len(args) == 1 and not kwargs:
>>>             option = args[0]
>>>
>>> Should this be fixed, or wait for 2.7?
>>
>> It would be a new feature. So if we apply a strict policy, it
>> can't be added to 2.6.
>
> That seems a bit *too* strict to me, as long as the Unicode strings
> contain just ASCII. I'm fine with fixing both cases Barry mentioned,
> especially if it otherwise breaks "from __future__ import
> unicode_literals". I expect though that as one tries more things one
> will find more things broken with that mode.

Maybe not though! I've finished converting the Mailman 3 code base and
there were only two problems that I could attribute to Python.
Everything else was really attributable to my own sloppiness between
bytes and strings. In fact, I started this experiment to fix a
"problem" with the Storm ORM, which is much stricter about its column
data types. I'm happy enough with the code that I'm going to keep it
even with the Python nits.

It sounds like you're amenable to fixing this in Python 2.6.

-Barry

From benjamin at python.org Sun Jan 18 00:10:30 2009
From: benjamin at python.org (Benjamin Peterson)
Date: Sat, 17 Jan 2009 17:10:30 -0600
Subject: [Python-Dev] Problems with unicode_literals
In-Reply-To: <36D31F4D-0B8B-46D5-92EA-8A3259D25CA3@python.org>
References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org>
	<9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org>
	<1afaf6160901161952x77ebfd04x241db3eea1989901@mail.gmail.com>
	<36D31F4D-0B8B-46D5-92EA-8A3259D25CA3@python.org>
Message-ID: <1afaf6160901171510p3d07c966lc381f48774b203fb@mail.gmail.com>

On Sat, Jan 17, 2009 at 4:58 PM, Barry Warsaw wrote:
> On Jan 16, 2009, at 10:52 PM, Benjamin Peterson wrote:
>> On Fri, Jan 16, 2009 at 9:45 PM, Barry Warsaw wrote:
>>> -        if type(args[0]) is types.StringType:
>>> +        if
type(args[0]) in types.StringTypes:
>>
>> It'd probably be better to replace that whole line with
>> isinstance(args[0], basestring).
>
> I thought about that, but clearly the style of that module is to use the
> 'is' test. I'm assuming that's because of some required backward
> compatibility reason, but honestly I didn't check, I just copied the style
> of the file.

optparse is now no longer externally maintained, so it could probably
use a little TLC and modernization.

>
>>> The fact that 'a' and 'b' are unicodes and not accepted as keyword
>>> arguments is probably the tougher problem. I haven't yet looked at
>>> what it might take
>>> to fix. Is it worth fixing in 2.6 or is this a wait-for-2.7 thing?
>>
>> Actually, this looks like a one line fix, too:
....
> That seems reasonable.

I've posted this to the tracker with a test:
http://bugs.python.org/issue4978

--
Regards,
Benjamin

From barry at python.org Sun Jan 18 00:20:11 2009
From: barry at python.org (Barry Warsaw)
Date: Sat, 17 Jan 2009 18:20:11 -0500
Subject: [Python-Dev] Problems with unicode_literals
In-Reply-To: <200901171403.13806.victor.stinner@haypocalc.com>
References: <95049E80-B317-45F9-ACE8-D47A0BE6A952@python.org>
	<9CDEBB61-05C2-4BC8-86E0-269067BD898E@python.org>
	<200901171403.13806.victor.stinner@haypocalc.com>
Message-ID: <820144FF-226B-400E-89BA-A339C93A8F98@python.org>

On Jan 17, 2009, at 8:03 AM, Victor Stinner wrote:
> Le Saturday 17 January 2009 04:45:28 Barry Warsaw, vous avez écrit :
>> The optparse one could easily be fixed for 2.6, if we agree it should
>> be fixed. This untested patch should do it I think:
>>
>> Index: Lib/optparse.py
>> ===================================================================
>> --- Lib/optparse.py     (revision 68465)
>> +++ Lib/optparse.py     (working copy)
>> @@ -994,7 +994,7 @@
>>             """add_option(Option)
>>                add_option(opt_str, ..., kwarg=val, ...)
>>             """
>> -        if type(args[0]) is types.StringType:
>> +        if type(args[0]) in types.StringTypes:
>>             option = self.option_class(*args, **kwargs)
>>         elif len(args) == 1 and not kwargs:
>>             option = args[0]
>
> See also related issues:
> - http://bugs.python.org/issue2931: optparse: various problems with
> unicode and gettext

This one definitely covers the optparse problem I complained about.

> - http://bugs.python.org/issue4319: optparse and non-ascii help
> strings

-Barry

From brett at python.org Sun Jan 18 08:01:37 2009
From: brett at python.org (Brett Cannon)
Date: Sat, 17 Jan 2009 23:01:37 -0800
Subject: [Python-Dev] No email about buildbot failures for 3.1?
Message-ID:

I just realized that I had not received any emails on python-checkins
about the buildbot failures I accidentally caused. And then I noticed
that I had not gotten any emails for py3k in a while. Did that get
switched off on purpose?

-Brett

From martin at v.loewis.de Sun Jan 18 10:28:12 2009
From: martin at v.loewis.de (Martin v. Löwis)
Date: Sun, 18 Jan 2009 10:28:12 +0100
Subject: [Python-Dev] No email about buildbot failures for 3.1?
In-Reply-To:
References:
Message-ID: <4972F62C.3040108@v.loewis.de>

Brett Cannon wrote:
> I just realized that I had not received any emails on python-checkins
> about the buildbot failures I accidentally caused. And then I noticed
> that I had not gotten any emails for py3k in a while. Did that get
> switched off on purpose?

No, it did not get switched off at all. I don't know what's happening.
Regards,
Martin

From martin at v.loewis.de Sun Jan 18 10:53:53 2009
From: martin at v.loewis.de (Martin v. Löwis)
Date: Sun, 18 Jan 2009 10:53:53 +0100
Subject: [Python-Dev] No email about buildbot failures for 3.1?
In-Reply-To:
References:
Message-ID: <4972FC31.9000003@v.loewis.de>

Brett Cannon wrote:
> I just realized that I had not received any emails on python-checkins
> about the buildbot failures I accidentally caused. And then I noticed
> that I had not gotten any emails for py3k in a while. Did that get
> switched off on purpose?

I'm not even sure that anything changed at all. Buildbot sent a failure
message for the 3.0 branch as late as today:

http://mail.python.org/pipermail/python-checkins/2009-January/077320.html

Regards,
Martin

From barry at barrys-emacs.org Sun Jan 18 18:10:26 2009
From: barry at barrys-emacs.org (Barry Scott)
Date: Sun, 18 Jan 2009 17:10:26 +0000
Subject: [Python-Dev] bundlebuilder broken in 2.6
In-Reply-To:
References: <7043CB7C-18F4-4E16-AE0C-CDA6BA311044@barrys-emacs.org>
Message-ID:

On 17 Jan 2009, at 20:08, Ned Deily wrote:
> In article <7043CB7C-18F4-4E16-AE0C-CDA6BA311044 at barrys-emacs.org>,
> Barry Scott wrote:
>
>> It seems that the packaging of Mac Python 2.6 is missing at least one
>> file that is critical to the operation of bundlebuilder.py.
>>
>> I've logged the issue as http://bugs.python.org/issue4937.
>
> I've noted a workaround in the tracker: just copy the file from an
> older version of Python. It's a simple xml plist and I don't think its
> contents are all that critical anyway.

I figured the contents are not important as the files from 2.5 and 2.4
talk of alpha this, that and the other.

> While the build should be fixed for 2.6+ (I'll send a patch), note
> that bundlebuilder is gone in 3.0.

What is the replacement for bundlebuilder for 3.0? Lack of
bundlebuilder becomes a serious porting problem for me.
I deliver pysvn Workbench as a bundle to simplify installation by my
users.

Barry

From dickinsm at gmail.com Sun Jan 18 19:03:15 2009
From: dickinsm at gmail.com (Mark Dickinson)
Date: Sun, 18 Jan 2009 18:03:15 +0000
Subject: [Python-Dev] Strategies for debugging buildbot failures?
Message-ID: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com>

This is probably a stupid question, but here goes:

Can anyone suggest good strategies for debugging buildbot
test failures, for problems that aren't reproducible locally?

There have been various times in the past that I've wanted
to be able to do this. Right now, I'm thinking particularly of
the 'Unknown signal 32' failure that's been occurring on the
gentoo x86 buildbots for 3.0 and 3.x since pre-3.0 alpha
days. I recently noticed an apparent pattern to these
failures: (failure occurs at the first test that involves
threads, after test_os has been run), but am unsure how
to proceed from there.

Is it acceptable to commit a change (to the trunk or py3k, not to
the release branches) solely for the purpose of getting more
information about a failure? I don't see a lot of this kind of
activity going on in the checkin messages, so I'm not sure
whether this is okay or not. If I did this, the commit
message would clearly indicate that the checkin was
meant to be temporary, and give an expected time to reversion.

Alternatively, is it reasonable to create a new branch solely
for the purpose of tracking down one particular problem?
Again, I don't see this sort of thing happening, but it seems
like an attractive strategy, since it allows one to test one
particular buildbot (via the form for requesting a build)
without messing up anything else.

What do others do to debug these failures?

Mark

(P.S. After a bit of Googling, I suspect the 'Unknown
signal 32' failure of being related to the LinuxThreads
library, and probably not Python's fault.
But it would still be good to understand why it occurs with 3.x but not 2.x, and whether there's an easy workaround.) From fuzzyman at voidspace.org.uk Sun Jan 18 19:07:07 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 18 Jan 2009 18:07:07 +0000 Subject: [Python-Dev] Strategies for debugging buildbot failures? In-Reply-To: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> References: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> Message-ID: <49736FCB.50202@voidspace.org.uk> Mark Dickinson wrote: > This is probably a stupid question, but here goes: > > Can anyone suggest good strategies for debugging buildbot > test failures, for problems that aren't reproducible locally? > > There have been various times in the past that I've wanted > to be able to do this. Right now, I'm thinking particularly of > the 'Unknown signal 32' failure that's been occurring on the > gentoo x86 buildbots for 3.0 and 3.x since pre- 3.0 alpha > days. I recently noticed an apparent pattern to these > failures: (failure occurs at the first test that involves > threads, after test_os has been run), but am unsure how > to proceed from there. > > Is it acceptable to commit a change (to the trunk or py3k, not to > the release branches) solely for the purpose of getting more > information about a failure? I don't see a lot of this kind of > activity going on in the checkin messages, so I'm not sure > whether this is okay or not. If I did this, the commit > message would clearly indicate that the checkin was > meant to be temporary, and give an expected time to reversion. > At Resolver Systems we regularly extend the test framework purely to provide more diagnostic information in the event of test failures. We do a lot of functional testing through the UI, which is particularly prone to intermittent and hard to diagnose failures. 
It can be built in in a way that doesn't affect the test run unless the test fails - and so there is no reason not to make the changes permanent unless they are particularly intrusive. Michael Foord > Alternatively, is it reasonable to create a new branch solely > for the purpose of tracking down one particular problem? > Again, I don't see this sort of thing happening, but it seems > like an attractive strategy, since it allows one to test one > particular buildbot (via the form for requesting a build) > without messing up anything else. > > What do others do to debug these failures? > > Mark > > (P.S. After a bit of Googling, I suspect the 'Unknown > signal 32' failure of being related to the LinuxThreads > library, and probably not Python's fault. But it would > still be good to understand why it occurs with 3.x but > not 2.x, and whether there's an easy workaround.) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From martin at v.loewis.de Sun Jan 18 19:16:01 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 18 Jan 2009 19:16:01 +0100 Subject: [Python-Dev] Strategies for debugging buildbot failures? In-Reply-To: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> References: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> Message-ID: <497371E1.70604@v.loewis.de> > Is it acceptable to commit a change (to the trunk or py3k, not to > the release branches) solely for the purpose of getting more > information about a failure? [...] > Alternatively, is it reasonable to create a new branch solely > for the purpose of tracking down one particular problem? Either is ok. 
Committing to the trunk is "more noisy", so I would prefer creation of a branch. > Again, I don't see this sort of thing happening, but it seems > like an attractive strategy, since it allows one to test one > particular buildbot (via the form for requesting a build) > without messing up anything else. Buildbot also supports submission of patches directly to the slaves. This is currently not activated, and clearly requires some authentication/authorization; if you want to use that, I'd be happy to experiment with setting it up, though. > What do others do to debug these failures? In the past, for the really difficult problems, we arranged to have the developers get access to the buildbot slaves. Feel free to contact the owner of the slave if you want that; let me know if I should introduce you. Regards, Martin From techtonik at gmail.com Sun Jan 18 20:21:16 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Sun, 18 Jan 2009 21:21:16 +0200 Subject: [Python-Dev] Single Sign-On for *.python.org Message-ID: Hello, Should we open a ticket to make a single sign-on service for *.python.org sites? There are at least 3 logins there may be more, for example if we are going to make some online content edition/comment system for docs. These are: bugs.python.org wiki.python.org pypi.python.org History ~~~~~ Some months ago I filled a proposal to make an OpenID service for http://bugs.python.org http://bugs.python.org/issue2837 Because it was the only python.org service I used, I did not realize that *.python.org is bunch of separate services with their own authorization, but with the common template. So I was bounced to make an OpenID for roundup. Unfortunately, the sole openidenabled library failed to communicate with some Blogger servers, and due to the lack of support from the developer I was unable to continue my work. But OpenID is just convenience feature. Now that I have three accounts for python.org I fail to sync passwords between them if one of them is reset. 
I know about "remember me" buttons and the likes, but it is not
always possible to work from personal station. So, the question is -
should we open a ticket for Single Sign-On system for *.python.org or
it bugs only me?

WBR,
--
--anatoly t.

From tjreedy at udel.edu Sun Jan 18 20:39:33 2009
From: tjreedy at udel.edu (Terry Reedy)
Date: Sun, 18 Jan 2009 14:39:33 -0500
Subject: [Python-Dev] Single Sign-On for *.python.org
In-Reply-To:
References:
Message-ID:

anatoly techtonik wrote:
> Hello,
>
> Should we open a ticket to make a single sign-on service for *.python.org sites?
> There are at least 3 logins there may be more, for example if we are
> going to make some online content edition/comment system for docs.
> These are:
>
> bugs.python.org
> wiki.python.org
> pypi.python.org
>
>
> History
> ~~~~~
> Some months ago I filled a proposal to make an OpenID service for
> http://bugs.python.org http://bugs.python.org/issue2837 Because it was
> the only python.org service I used, I did not realize that
> *.python.org is bunch of separate services with their own
> authorization, but with the common template. So I was bounced to make
> an OpenID for roundup. Unfortunately, the sole openidenabled library
> failed to communicate with some Blogger servers, and due to the lack
> of support from the developer I was unable to continue my work.
>
> But OpenID is just convenience feature. Now that I have three accounts
> for python.org I fail to sync passwords between them if one of them is
> reset. I know about "remember me" buttons and the likes, but it is not
> always possible to work from personal station. So, the question is -
> should we open a ticket for Single Sign-On system for *.python.org or
> it bugs only me?

No. Aside from the hassle of three, the registration process for the
wiki is a bit broken, so if it were obsoleted, that would be great.
From ironfroggy at gmail.com Sun Jan 18 20:47:48 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Sun, 18 Jan 2009 14:47:48 -0500 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: References: Message-ID: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> I would like to see that kind of coherence. I think anything that gets in the way of someone getting in is in danger of holding someone off from contributing something, be it wiki edits, bug reports, or packages. One might also ask about the mailman lists here. On Sun, Jan 18, 2009 at 2:21 PM, anatoly techtonik wrote: > Hello, > > Should we open a ticket to make a single sign-on service for *.python.org sites? > There are at least 3 logins there may be more, for example if we are > going to make some online content edition/comment system for docs. > These are: > > bugs.python.org > wiki.python.org > pypi.python.org > > > History > ~~~~~ > Some months ago I filled a proposal to make an OpenID service for > http://bugs.python.org http://bugs.python.org/issue2837 Because it was > the only python.org service I used, I did not realize that > *.python.org is bunch of separate services with their own > authorization, but with the common template. So I was bounced to make > an OpenID for roundup. Unfortunately, the sole openidenabled library > failed to communicate with some Blogger servers, and due to the lack > of support from the developer I was unable to continue my work. > > But OpenID is just convenience feature. Now that I have three accounts > for python.org I fail to sync passwords between them if one of them is > reset. I know about "remember me" buttons and the likes, but it is not > always possible to work from personal station. So, the question is - > should we open a ticket for Single Sign-On system for *.python.org or > it bugs only me? > > WBR, > -- > --anatoly t. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From martin at v.loewis.de Sun Jan 18 20:51:37 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 18 Jan 2009 20:51:37 +0100 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: References: Message-ID: <49738849.7050300@v.loewis.de> > So, the question is - > should we open a ticket for Single Sign-On system for *.python.org or > it bugs only me? Submission of tickets is futile. Code talks. Regards, Martin From martin at v.loewis.de Sun Jan 18 23:02:00 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 18 Jan 2009 23:02:00 +0100 Subject: [Python-Dev] Help! Vista symlinks and IDLE Message-ID: <4973A6D8.5030207@v.loewis.de> Apparently, if you install Python into a localized version of \Program Files on Vista (such as \Programas, or \Programmer), IDLE fails to start; see http://bugs.python.org/3881 Apparently, Tcl cannot properly initialize on such a system, and apparently, this is related to these folders being symlinks. It would be good if anybody who has access to such a system can diagnose what the specific problem is, how to reproduce it on a system that has the English version of Vista installed, and preferably, how to solve the problem. Regards, Martin From brett at python.org Sun Jan 18 23:21:20 2009 From: brett at python.org (Brett Cannon) Date: Sun, 18 Jan 2009 14:21:20 -0800 Subject: [Python-Dev] No email about buildbot failures for 3.1? 
In-Reply-To: <4972FC31.9000003@v.loewis.de> References: <4972FC31.9000003@v.loewis.de> Message-ID: On Sun, Jan 18, 2009 at 01:53, "Martin v. Löwis" wrote: > Brett Cannon wrote: >> I just realized that I had not received any emails on python-checkins >> about the buildbot failures I accidentally caused. And then I noticed >> that I had not gotten any emails for py3k in a while. Did that get >> switched off on purpose? > > I'm not even sure that anything changed at all. Buildbot sent a failure > message for the 3.0 branch as late as today: > > http://mail.python.org/pipermail/python-checkins/2009-January/077320.html All I know is that I checked in a test that failed on all case-sensitive file system for py3k (3.1) and there does not appear to be a single email about it. And the buildbots very clearly had the chance to fail. -Brett From martin at v.loewis.de Sun Jan 18 23:22:24 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 18 Jan 2009 23:22:24 +0100 Subject: [Python-Dev] No email about buildbot failures for 3.1? In-Reply-To: References: <4972FC31.9000003@v.loewis.de> Message-ID: <4973ABA0.3010706@v.loewis.de> > All I know is that I checked in a test that failed on all > case-sensitive file system for py3k (3.1) and there does not appear to > be a single email about it. And the buildbots very clearly had the > chance to fail. What is the specific checkin? What specific builds failed? Regards, Martin From brett at python.org Sun Jan 18 23:27:10 2009 From: brett at python.org (Brett Cannon) Date: Sun, 18 Jan 2009 14:27:10 -0800 Subject: [Python-Dev] No email about buildbot failures for 3.1? In-Reply-To: <4973ABA0.3010706@v.loewis.de> References: <4972FC31.9000003@v.loewis.de> <4973ABA0.3010706@v.loewis.de> Message-ID: On Sun, Jan 18, 2009 at 14:22, "Martin v.
Löwis" wrote: > >> All I know is that I checked in a test that failed on all >> case-sensitive file system for py3k (3.1) and there does not appear to >> be a single email about it. And the buildbots very clearly had the >> chance to fail. > > What is the specific checkin? What specific builds failed? > How about one that just happened: http://www.python.org/dev/buildbot/3.x.stable/sparc%20solaris10%20gcc%203.x/builds/126 . If you look at the python-checkins for the revision (http://mail.python.org/pipermail/python-checkins/2009-January/077369.html) at the index, there is no email about the buildbot failure. -Brett From steve at holdenweb.com Sun Jan 18 23:27:54 2009 From: steve at holdenweb.com (Steve Holden) Date: Sun, 18 Jan 2009 17:27:54 -0500 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: <49738849.7050300@v.loewis.de> References: <49738849.7050300@v.loewis.de> Message-ID: Martin v. Löwis wrote: >> So, the question is - >> should we open a ticket for Single Sign-On system for *.python.org or >> it bugs only me? > > Submission of tickets is futile. Code talks. > And don't forget that while a common authentication base is fine, you need to ensure that various services can be separately authorized - someone may have permission to log in to one server but not others? regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From hall.jeff at gmail.com Sun Jan 18 23:30:41 2009 From: hall.jeff at gmail.com (Jeff Hall) Date: Sun, 18 Jan 2009 17:30:41 -0500 Subject: [Python-Dev] Help! Vista symlinks and IDLE In-Reply-To: <4973A6D8.5030207@v.loewis.de> References: <4973A6D8.5030207@v.loewis.de> Message-ID: <1bc395c10901181430x44aba467w1a3e58bbf0ccc4f3@mail.gmail.com> I'm glad someone sent this out...
I was having this EXACT problem today. I've got it installed on my wife's computer and I'm certain that it worked when I first installed 3.0a but it stopped working (I didn't update Python but my wife has done several Vista security updates)... Hopefully, that will help in the bug tracking for this. On Sun, Jan 18, 2009 at 5:02 PM, "Martin v. Löwis" wrote: > Apparently, if you install Python into a localized > version of \Program Files on Vista (such as \Programas, > or \Programmer), IDLE fails to start; see > http://bugs.python.org/3881 > > Apparently, Tcl cannot properly initialize on such a system, > and apparently, this is related to these folders being > symlinks. > > It would be good if anybody who has access to such a system > can diagnose what the specific problem is, how to reproduce > it on a system that has the English version of Vista > installed, and preferably, how to solve the problem. > > Regards, > Martin > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/hall.jeff%40gmail.com > -- Haikus are easy Most make very little sense Refrigerator -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Sun Jan 18 23:49:59 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sun, 18 Jan 2009 23:49:59 +0100 Subject: [Python-Dev] No email about buildbot failures for 3.1? In-Reply-To: References: <4972FC31.9000003@v.loewis.de> <4973ABA0.3010706@v.loewis.de> Message-ID: <4973B217.5000607@v.loewis.de> >> What is the specific checkin? What specific builds failed? >> > > How about one that just happened: > http://www.python.org/dev/buildbot/3.x.stable/sparc%20solaris10%20gcc%203.x/builds/126 > .
If you look at the python-checkins for the revision > (http://mail.python.org/pipermail/python-checkins/2009-January/077369.html) > at the index, there is no email about the buildbot failure. In that case, no mail was sent because this builder had a failed build prior to this build already: http://www.python.org/dev/buildbot/3.x.stable/sparc solaris10 gcc 3.x/builds/125 The MailNotifier is currently configured to send mail only when the slave transitions from passed to failed ('problem'), not while it stays in failed. That could be changed, of course (to either 'failing': send mail for all failed builds, or 'all': send mail for all builds). However, this has been the configuration since buildbot was first installed, so it's not a recent configuration change. Regards, Martin From brett at python.org Sun Jan 18 23:52:31 2009 From: brett at python.org (Brett Cannon) Date: Sun, 18 Jan 2009 14:52:31 -0800 Subject: [Python-Dev] No email about buildbot failures for 3.1? In-Reply-To: <4973B217.5000607@v.loewis.de> References: <4972FC31.9000003@v.loewis.de> <4973ABA0.3010706@v.loewis.de> <4973B217.5000607@v.loewis.de> Message-ID: On Sun, Jan 18, 2009 at 14:49, "Martin v. Löwis" wrote: >>> What is the specific checkin? What specific builds failed? >>> >> >> How about one that just happened: >> http://www.python.org/dev/buildbot/3.x.stable/sparc%20solaris10%20gcc%203.x/builds/126 >> . If you look at the python-checkins for the revision >> (http://mail.python.org/pipermail/python-checkins/2009-January/077369.html) >> at the index, there is no email about the buildbot failure. > > In that case, no mail was sent because this builder had a failed build > prior to this build already: > > http://www.python.org/dev/buildbot/3.x.stable/sparc solaris10 gcc > 3.x/builds/125 > > The MailNotifier is currently configured to send mail only when the > slave transitions from passed to failed ('problem'), not while it stays > in failed.
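The 'problem' mode described here amounts to remembering the previous result for each builder and mailing only on a pass-to-fail transition. A rough sketch of that rule (illustrative only -- the function name and result strings below are assumptions, not buildbot's actual API):

```python
def should_notify(prev_result, curr_result, mode="problem"):
    # prev_result and curr_result are "passed" or "failed".
    # mode mirrors the MailNotifier settings discussed in this thread;
    # this function is an illustration, not buildbot code.
    if mode == "all":
        return True                       # mail for every build
    if mode == "failing":
        return curr_result == "failed"    # mail for every failed build
    # "problem": mail only on the passed -> failed transition
    return prev_result == "passed" and curr_result == "failed"

# A builder that fails twice in a row only triggers mail once:
results = ["passed", "failed", "failed", "passed", "failed"]
mails = [should_notify(prev, curr)
         for prev, curr in zip(results, results[1:])]
# transitions: passed->failed, failed->failed, failed->passed, passed->failed
```

Under the "problem" rule only the two passed-to-failed transitions produce mail, which is why a build that follows an already-failed build (such as build 126 after build 125 above) generates no message.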
That could be changed, of course (to either 'failing': > send mail for all failed builds, or 'all': send mail for all builds). > However, this has been the configuration since buildbot was first > installed, so its not a recent configuration change. Ah, OK. Thanks for the clarification, Martin. Guess 3.1 has just been failing for a while then. -Brett From scottmc2 at gmail.com Mon Jan 19 00:03:38 2009 From: scottmc2 at gmail.com (scott mc) Date: Sun, 18 Jan 2009 23:03:38 +0000 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: <496F9056.8060503@v.loewis.de> References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> <20090115172335.GK1060@nexus.in-nomine.org> <496F9056.8060503@v.loewis.de> Message-ID: The config.guess/.sub files in python/trunk/Modules/_ctypes/libffi are from , which is just before Haiku was finally added to the official versions from gnulib. http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD and http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD So we just get fresh copies, perhaps it's time to update the version included with python? I built 2.7 on Haiku, but am getting failures in the regression tests. Many of them are in math related tests, failing in the 15th decimal place on test_decimal and a few others like that, I posted a ticket on Haiku's trac for that as it might be related to Haiku's built in math lib? (libm is built into Haiku's libroot.so) http://dev.haiku-os.org/ticket/3308 -scottmc From solipsis at pitrou.net Mon Jan 19 00:03:49 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 18 Jan 2009 23:03:49 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5F=5Fdel=5F=5F_and_tp=5Fdealloc_in_the_IO?= =?utf-8?q?_lib?= Message-ID: Dear python-dev, The Python implementation of IOBase, the base class for everything IO, has the (strange) idea to define a __del__ method.
It is probably meant to avoid code duplication, so that users subclassing IOBase automatically get the close-on-destruct behaviour. (there is an even stranger test in test_io which involves overriding the __del__ method in a class derived from FileIO...) However, it has the drawback that all IO objects inherit a __del__ method, meaning problems when collecting reference cycles (the __del__ may not get called if caught in a reference cycle, defeating the whole point). While rewriting the IO stack in C, we have tried to keep this behaviour, but it seems better to just do it in the tp_dealloc function, and kill the __del__ (actually, we *already* do it in tp_dealloc, because __del__ / tp_del behaviour for C types is shady). Subclassing IOBase in Python would keep the tp_dealloc and therefore the close-on-destruct behaviour, without the problems of a __del__ method. (the implementation has to take a few precautions, first revive the object, then check its "closed" attribute/property - ignoring errors -, and if "closed" ended False then call the close() method) What do you think? Antoine. From brett at python.org Mon Jan 19 00:19:39 2009 From: brett at python.org (Brett Cannon) Date: Sun, 18 Jan 2009 15:19:39 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: Message-ID: On Sun, Jan 18, 2009 at 15:03, Antoine Pitrou wrote: > Dear python-dev, > > The Python implementation of IOBase, the base class for everything IO, has the > (strange) idea to define a __del__ method. It is probably meant to avoid code > duplication, so that users subclassing IOBase automatically get the > close-on-destruct behaviour. > > (there is an even stranger test in test_io which involves overriding the __del__ > method in a class derived from FileIO...) 
> > However, it has the drawback that all IO objects inherit a __del__ method, > meaning problems when collecting reference cycles (the __del__ may not get > called if caught in a reference cycle, defeating the whole point). > > While rewriting the IO stack in C, we have tried to keep this behaviour, but it > seems better to just do it in the tp_dealloc function, and kill the __del__ > (actually, we *already* do it in tp_dealloc, because __del__ / tp_del behaviour > for C types is shady). Subclassing IOBase in Python would keep the tp_dealloc > and therefore the close-on-destruct behaviour, without the problems of a __del__ > method. > > (the implementation has to take a few precautions, first revive the object, then > check its "closed" attribute/property - ignoring errors -, and if "closed" ended > False then call the close() method) > > What do you think? Fine by me. People should be using the context manager for guaranteed file closure anyway IMO. -Brett From lists at cheimes.de Mon Jan 19 00:53:53 2009 From: lists at cheimes.de (Christian Heimes) Date: Mon, 19 Jan 2009 00:53:53 +0100 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: Message-ID: Brett Cannon schrieb: > Fine by me. People should be using the context manager for guaranteed > file closure anyway IMO. You make a very good point! Perhaps we should stop promising that files get closed as soon as possible and encourage people in using the with statement. Christian From skip at pobox.com Mon Jan 19 01:25:39 2009 From: skip at pobox.com (skip at pobox.com) Date: Sun, 18 Jan 2009 18:25:39 -0600 Subject: [Python-Dev] Strategies for debugging buildbot failures? 
In-Reply-To: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> References: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> Message-ID: <18803.51331.633341.223823@montanaro.dyndns.org> Mark> Is it acceptable to commit a change (to the trunk or py3k, not to Mark> the release branches) solely for the purpose of getting more Mark> information about a failure? I think it would be kind of nice if you could force a buildbot to use a specific branch. You could then check your diagnostic changes in on a branch meant just for that purpose. Once you're done with your changes you could simply delete the branch. Skip From benjamin at python.org Mon Jan 19 01:26:49 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sun, 18 Jan 2009 18:26:49 -0600 Subject: [Python-Dev] Strategies for debugging buildbot failures? In-Reply-To: <18803.51331.633341.223823@montanaro.dyndns.org> References: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> <18803.51331.633341.223823@montanaro.dyndns.org> Message-ID: <1afaf6160901181626j4d92eb31v1ad5400e1a37460b@mail.gmail.com> On Sun, Jan 18, 2009 at 6:25 PM, wrote: > > Mark> Is it acceptable to commit a change (to the trunk or py3k, not to > Mark> the release branches) solely for the purpose of getting more > Mark> information about a failure? > > I think it would be kind of nice if you could force a buildbot to use a > specific branch. You could then check your diagnostic changes in on a > branch meant just for that purpose. Once you're done with your changes you > could simply delete the branch. You can already do that. Just click on the name of the buildbot, and then enter the branch you want it to test. 
-- Regards, Benjamin From ben+python at benfinney.id.au Mon Jan 19 02:13:39 2009 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 19 Jan 2009 12:13:39 +1100 Subject: [Python-Dev] Single Sign-On for *.python.org References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> Message-ID: <87tz7w85do.fsf@benfinney.id.au> Calvin Spealman writes: > I would like to see that kind of coherence. I think anything that gets > in the way of someone getting in is in danger of holding someone off > from contributing something, be it wiki edits, bug reports, or > packages. One might also ask about the mailman lists here. The inability to authenticate using one of my OpenIDs is a major factor keeping me from bothering to submit bug reports and edit wiki pages. I've also had fruitless discussions about adding OpenID authentication to Roundup. Gmane allows me to use NNTP without requiring authentication to participate in the discussion forums. -- \ "The truth is the most valuable thing we have. Let us economize | `\ it." --Mark Twain, _Following the Equator_ | _o__) | Ben Finney From nad at acm.org Mon Jan 19 02:35:25 2009 From: nad at acm.org (Ned Deily) Date: Sun, 18 Jan 2009 17:35:25 -0800 Subject: [Python-Dev] bundlebuilder broken in 2.6 References: <7043CB7C-18F4-4E16-AE0C-CDA6BA311044@barrys-emacs.org> Message-ID: In article , Barry Scott wrote: > What is the replacement for bundlebuilder for 3.0? Lack of > bundlebuilder becomes a serious porting problem for me. > I deliver pysvn WorkBench as a bundle to simplify installation > by my users. Most people are using py2app these days to produce OSX application bundles for 2.x and the hope is for 3.0, as well. The pythonmac-sig forum is probably the best place to ask about experiences so far.
Be aware that py2app hasn't been re-packaged recently so it's better to get it directly from its svn repository: or -- Ned Deily, nad at acm.org From greg at krypto.org Mon Jan 19 02:38:18 2009 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 19 Jan 2009 01:38:18 +0000 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: Message-ID: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> +1 on getting rid of the IOBase __del__ in the C rewrite in favor of tp_dealloc. On Sun, Jan 18, 2009 at 11:53 PM, Christian Heimes wrote: > Brett Cannon schrieb: > > Fine by me. People should be using the context manager for guaranteed > > file closure anyway IMO. > Yes they should. (how I really really wish I didn't have to use 2.4 anymore ;) But let's at least be clear that it is never acceptable for a python implementation to leak file descriptors/handles (or other system resources), they should be closed and released whenever the particular GC implementation gets around to it. > > You make a very good point! Perhaps we should stop promising that files > get closed as soon as possible and encourage people in using the with > statement. > > Christian > eegads, do we actually -promise- that somewhere? If so I'll happily go update those docs with a caveat. I regularly point out in code reviews that the very convenient and common idiom of open(name, 'w').write(data) doesn't guarantee when the file will be closed; it's up to the GC implementation details. Good code should never depend on the GC for a timely release of scarce external resources (file descriptors/handles). -------------- next part -------------- An HTML attachment was scrubbed...
URL: From guido at python.org Mon Jan 19 05:32:36 2009 From: guido at python.org (Guido van Rossum) Date: Sun, 18 Jan 2009 20:32:36 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Sun, Jan 18, 2009 at 5:38 PM, Gregory P. Smith wrote: > +1 on getting rid of the IOBase __del__ in the C rewrite in favor of > tp_dealloc. > > On Sun, Jan 18, 2009 at 11:53 PM, Christian Heimes wrote: >> >> Brett Cannon schrieb: >> > Fine by me. People should be using the context manager for guaranteed >> > file closure anyway IMO. > > Yes they should. (how I really really wish i didn't have to use 2.4 anymore > ;) Come on, the open-try-use-finally-close idiom isn't *that* bad... > But lets at least be clear that is never acceptable for a python > implementation to leak file descriptors/handles (or other system resources), > they should be closed and released whenever the particular GC implementation > gets around to it. I would like to make a stronger promise. I think that for files open for *writing*, all data written to the file should be flushed to disk before the fd is closed. This is the real reason for having the __del__: closing the fd is done by the C implementation of FileIO, but since (until the rewrite in C) the buffer management is all in Python (both the binary I/O buffer and the additional text I/O buffer), I felt the downside of having a __del__ method was preferable over the possibility of output files not being flushed (which is always nightmarish to debug). Of course, once both layers of buffering are implemented in C, the need for __del__ to do this goes away, and I would be fine with doing it all in tp_dealloc. >> You make a very good point! Perhaps we should stop promising that files >> get closed as soon as possible and encourage people in using the with >> statement.
>> >> Christian > > eegads, do we actually -promise- that somewhere? If so I'll happily go > update those docs with a caveat. I don't think we've promised that ever since the days when JPython (with a P!) was young... > I regularly point out in code reviews that the very convenient and common > idiom of open(name, 'w').write(data) doesn't guarantee when the file will be > closed; its up to the GC implementation details. Good code should never > depend on the GC for a timely release of scarce external resources (file > descriptors/handles). And buffer flushing. While I don't want to guarantee that the buffer is flushed ASAP, I do want to continue promising that it is flushed before the object is GC'ed and before the fd is closed. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Mon Jan 19 07:36:51 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Mon, 19 Jan 2009 07:36:51 +0100 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: <87tz7w85do.fsf@benfinney.id.au> References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> <87tz7w85do.fsf@benfinney.id.au> Message-ID: <49741F83.9020804@v.loewis.de> > I've also had fruitless discussions about adding OpenID authentication > to Roundup. Did you offer patches to roundup during these discussions? Regards, Martin From ben+python at benfinney.id.au Mon Jan 19 08:29:35 2009 From: ben+python at benfinney.id.au (Ben Finney) Date: Mon, 19 Jan 2009 18:29:35 +1100 Subject: [Python-Dev] Single Sign-On for *.python.org References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> <87tz7w85do.fsf@benfinney.id.au> <49741F83.9020804@v.loewis.de> Message-ID: <87vdsb7nz4.fsf@benfinney.id.au> "Martin v. L?wis" writes: > > I've also had fruitless discussions about adding OpenID > > authentication to Roundup. > > Did you offer patches to roundup during these discussions? 
I grabbed the source code, but got lost trying to figure out how Roundup does authentication internally. So, no patches were forthcoming from me on that. -- \ ?An eye for an eye would make the whole world blind.? ?Mahatma | `\ Gandhi | _o__) | Ben Finney From rhamph at gmail.com Mon Jan 19 08:49:58 2009 From: rhamph at gmail.com (Adam Olsen) Date: Mon, 19 Jan 2009 00:49:58 -0700 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Sun, Jan 18, 2009 at 9:32 PM, Guido van Rossum wrote: > On Sun, Jan 18, 2009 at 5:38 PM, Gregory P. Smith wrote: >> +1 on getting rid of the IOBase __del__ in the C rewrite in favor of >> tp_dealloc. >> >> On Sun, Jan 18, 2009 at 11:53 PM, Christian Heimes wrote: >>> >>> Brett Cannon schrieb: >>> > Fine by me. People should be using the context manager for guaranteed >>> > file closure anyway IMO. >> >> Yes they should. (how I really really wish i didn't have to use 2.4 anymore >> ;) > > Come on, the open-try-use-finally-close idiom isn't *that* bad... > >> But lets at least be clear that is never acceptable for a python >> implementation to leak file descriptors/handles (or other system resources), >> they should be closed and released whenever the particular GC implementation >> gets around to it. > > I would like to make a stronger promise. I think that for files open > for *writing*, all data written to the file should be flushed to disk > before the fd is closed. This is the real reason for having the > __del__: closing the fd is done by the C implementation of FileIO, but > since (until the rewrite in C) the buffer management is all in Python > (both the binary I/O buffer and the additional text I/O buffer), I > felt the downside of having a __del__ method was preferable over the > possibility of output files not being flushed (which is always > nightmarish to debug). 
> > Of course, once both layers of buffering are implemented in C, the > need for __del__ to do this goes away, and I would be fine with doing > it all in tp_alloc. > >>> You make a very good point! Perhaps we should stop promising that files >>> get closed as soon as possible and encourage people in using the with >>> statement. >>> >>> Christian >> >> eegads, do we actually -promise- that somewhere? If so I'll happily go >> update those docs with a caveat. > > I don't think we've promised that ever since the days when JPython > (with a P!) was young... > >> I regularly point out in code reviews that the very convenient and common >> idiom of open(name, 'w').write(data) doesn't guarantee when the file will be >> closed; its up to the GC implementation details. Good code should never >> depend on the GC for a timely release of scarce external resources (file >> descriptors/handles). > > And buffer flushing. While I don't want to guarantee that the buffer > is flushed ASAP, I do want to continue promising that it is flushed > before the object is GC'ed and before the fd is closed. Could we add a warning if the file has not been explicitly flushed? Consider removing the implicit flush later, if there's a sufficient implementation benefit to it. -- Adam Olsen, aka Rhamphoryncus From dickinsm at gmail.com Mon Jan 19 09:54:00 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Mon, 19 Jan 2009 08:54:00 +0000 Subject: [Python-Dev] Support for the Haiku OS In-Reply-To: References: <2ED628C3BF134CD5BDB6A05608F82ED4@RaymondLaptop1> <18798.26547.535845.514247@montanaro.dyndns.org> <20090115090056.GI1060@nexus.in-nomine.org> <20090115172335.GK1060@nexus.in-nomine.org> <496F9056.8060503@v.loewis.de> Message-ID: <5c6f2a5d0901190054g5df41eb8i56fef647e6ecfa81@mail.gmail.com> On Sun, Jan 18, 2009 at 11:03 PM, scott mc wrote: > I built 2.7 on Haiku, but am getting failures in the regression tests. 
> Many of them are in math related tests, failing in the 15th decimal > place on test_decimal and a few others like that, I posted a ticket on > Haiku's trac for that as it might be related to Haiku's built in math > lib? (libm is built into Haiku's libroot.so) > http://dev.haiku-os.org/ticket/3308 Most of these look like libm/libc precision problems to me, of varying severity. Some particular comments: - the test_float result is worrying: there are a good few places where Python depends on eval(repr(.)) round-tripping for floats, and it looks as though either the eval or the repr is losing significant accuracy. Actually, there's so much accuracy loss that I wonder whether something's being cast from double precision to single precision at some point. - test_decimal failing was a bit of a surprise until I saw which test was failing: the decimal module quite deliberately does all computation using integer arithmetic, and avoids floating-point like the plague, so it should be ultra-portable. Except, of course, the recently added from_float method, which converts from floats to decimals. So fix up the floating-point and test_decimal should pass again. - I don't understand where the test_marshall and test_random failures are coming from. These could be Python problems (though I think it's more likely that they're Haiku floating-point problems). I'd be interested to see short code-snippets that reproduce these issues. - I wouldn't worry so much about the test_math and test_cmath failures until you get the others sorted out; the tests are probably stricter than they need to be. Mark From gangadharan at gmail.com Mon Jan 19 11:33:41 2009 From: gangadharan at gmail.com (Gangadharan S.A.) 
Date: Mon, 19 Jan 2009 16:03:41 +0530 Subject: [Python-Dev] Child process freezes during fork pipe exec Message-ID: Hi, Summary: * In my organization, we have a *multi threaded* (threading library) python (python 2.4.1) daemon on Linux, which starts up various processes using the fork pipe exec model. * We use this fork , wait on pipe , exec model as a form of handshake between the parent and child processes. We want the child to go ahead only after the parent has noted down the fact that the child has been forked and what it's pid is. * This usually works fine, but for about 1 in every 20,000 processes started, the child process just freezes somewhere after the fork, before the exec. It does not die. It is alive and stuck. * Why does this happen? * Is there a better way for us to write a fork-wait_for_start_signal-exec construct? Here is what we do: One of the threads of the multi threaded python daemon does the following 1) Fork out a child process 2) Child process waits for a pipe message from parent --- Parent sends pipe message to child after noting down child details : pid, start time etc. 
--- 3) Child process prints various debug messages, including looking at os.environ values 4) Child process execs the right script Here it is again, in pseudo code: def start_job(): read_pipefd, write_pipefd = os.pipe() # 1) Fork out a child process pid = os.fork() if pid == 0: # 2) wait for excepted message on pipe os.close(write_pipefd) read_set, write_set, exp_set = select.select([read_pipefd], [], [], 300) if os.read(read_pipefd, len("expected message") <> "expected message": os._exit(1) os.close(read_pipefd) # 3) print various debug messages, including os.environ values print >> sys.stderr, "we print various debug messages here, including os.evniron values" # 4) go ahead with exec os.execve(path, args, env) else: # parent process sends pipe message to child at the right time The problem: * Things work fine most of the time, but rarely, the process gets "stuck" after fork, before exec (In steps 2 or 3 above). Process makes no progress and does not die either. * When I do a gdb (gdb 6.5) attach on the process, bt fails as follows: (gdb) bt #0 0x00002ba9fd5c6a68 in __lll_mutex_lock_wait () from /lib64/libpthread.so.0 #1 0x00002ba9fd5c2a78 in _L_mutex_lock_106 () from /lib64/libpthread.so.0 dwarf2-frame.c:521: internal-error: Unknown CFI encountered. A problem internal to GDB has been detected, further debugging may prove unreliable. Quit this debugging session? (y or n) I looked into this error and found that pre-6.6 gdb throws this error when looking at the stack trace of a deadlocked process. This is certainly not a dead lock in my code as there is no locking involved in this area of code. * This problem happens for about 1 process in every 20,000. This statistics is gathered across about 80 machines in our cluster, so its not the case of a single machine having some hardware issue. * Note that the child is forked out by a *multi threaded* python application. 
I noticed some forums discussing how multi threaded (pthreads library) processes doing things between a fork and an exec can rarely get into a deadlock. I know that python (at least 2.4.1) multi threading does not use pthreads, but probably the python interpreter itself does use pthreads? Questions: * Why does this happen? * Is there a better way for us to write a fork-wait_for_start_signal-exec construct in a multi threaded application? Thanks, Gangadharan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ncoghlan at gmail.com Mon Jan 19 13:21:50 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 19 Jan 2009 22:21:50 +1000 Subject: [Python-Dev] Child process freezes during fork pipe exec In-Reply-To: References: Message-ID: <4974705E.8050601@gmail.com> Gangadharan S.A. wrote: > Hi, > > Summary: > * In my organization, we have a *multi threaded* (threading library) > python (python 2.4.1) daemon on Linux, which starts up various processes > using the fork pipe exec model. The fork+threading combination had some fairly major issues that weren't resolved until the multiprocessing module was added for 2.6/3.0. [1] If you're able to upgrade to 2.5.4, things should be much better, since many of the fork+threading fixes were backported to the 2.5 maintenance branch. For 2.4 I think you're pretty much out of luck though - it was already into security-fix-only mode by the time the child process deadlock problems in the fork/threading interaction were worked out. I don't personally know if your implementation could be modified in any way to make the deadlock less likely with Python 2.4, but the only way to eliminate the problem entirely (modulo any platform specific fork+threading problems) is to upgrade to the latest version of Python 2.5. Regards, Nick.
[1] The bug you're probably encountering: http://bugs.python.org/issue874900 -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From gerald.britton at gmail.com Mon Jan 19 16:10:00 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Mon, 19 Jan 2009 10:10:00 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions Message-ID: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Please find below PEP 3142: Add a "while" clause to generator expressions. I'm looking for feedback and discussion. PEP: 3142 Title: Add a "while" clause to generator expressions Version: $Revision: 68715 $ Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. Jan 2009) $ Author: Gerald Britton Status: Draft Type: Standards Track Content-Type: text/plain Created: 12-Jan-2009 Python-Version: 3.0 Post-History: Abstract This PEP proposes an enhancement to generator expressions, adding a "while" clause to complement the existing "if" clause. Rationale A generator expression (PEP 289 [1]) is a concise method to serve dynamically-generated objects to list comprehensions (PEP 202 [2]). Current generator expressions allow for an "if" clause to filter the objects that are returned to those meeting some set of criteria. However, since the "if" clause is evaluated for every object that may be returned, in some cases it is possible that all objects would be rejected after a certain point. For example: g = (n for n in range(100) if n*n < 50) which is equivalent to using a generator function (PEP 255 [3]): def __gen(exp): for n in exp: if n*n < 50: yield n g = __gen(iter(range(100))) would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider the numbers from 8 to 99 and reject them all since n*n >= 50 for numbers in that range.
Allowing for a "while" clause would allow the redundant tests to be short-circuited: g = (n for n in range(100) while n*n < 50) would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 since the condition (n*n < 50) is no longer true. This would be equivalent to the generator function: def __gen(exp): for n in exp: if n*n < 50: yield n else: break g = __gen(iter(range(100))) Currently, in order to achieve the same result, one would need to either write a generator function such as the one above or use the takewhile function from itertools: from itertools import takewhile g = takewhile(lambda n: n*n < 50, range(100)) The takewhile code achieves the same result as the proposed syntax, albeit in a longer (some would say "less-elegant") fashion. Also, the takewhile version requires an extra function call (the lambda in the example above) with the associated performance penalty. A simple test shows that: for n in (n for n in range(100) if 1): pass performs about 10% better than: for n in takewhile(lambda n: 1, range(100)): pass though they achieve similar results. (The first example uses a generator; takewhile is an iterator). If similarly implemented, a "while" clause should perform about the same as the "if" clause does today. The reader may ask if the "if" and "while" clauses should be mutually exclusive. There are good examples that show that there are times when both may be used to good advantage. For example: p = (p for p in primes() if p > 100 while p < 1000) should return prime numbers found between 100 and 1000, assuming I have a primes() generator that yields prime numbers. Of course, this could also be achieved like this: p = (p for p in (p for p in primes() if p > 100) while p < 1000) which is syntactically simpler. Some may also ask if it is possible to cover dropwhile() functionality in a similar way. 
I initially thought of: p = (p for p in primes() not while p < 100) but I am not sure that I like it since it uses "not" in a non-pythonic fashion, I think. Adding a "while" clause to generator expressions maintains the compact form while adding a useful facility for short-circuiting the expression. Implementation: I am willing to assist in the implementation of this feature, although I have not contributed to Python thus far and would definitely need mentoring. (At this point I am not quite sure where to begin.) Presently though, I would find it challenging to fit this work into my existing workload. Acknowledgements Raymond Hettinger first proposed the concept of generator expressions in January 2002. References [1] PEP 289: Generator Expressions http://www.python.org/dev/peps/pep-0289/ [2] PEP 202: List Comprehensions http://www.python.org/dev/peps/pep-0202/ [3] PEP 255: Simple Generators http://www.python.org/dev/peps/pep-0255/ Copyright This document has been placed in the public domain. Local Variables: mode: indented-text indent-tabs-mode: nil sentence-end-double-space: t fill-column: 70 coding: utf-8 End:
From benjamin at python.org Mon Jan 19 17:03:31 2009 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 19 Jan 2009 10:03:31 -0600 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <1afaf6160901190803i448bf884nefb159c76f82ea0f@mail.gmail.com> On Mon, Jan 19, 2009 at 9:10 AM, Gerald Britton wrote: > Please find below PEP 3142: Add a "while" clause to generator > expressions. I'm looking for feedback and discussion. > > > PEP: 3142 > Title: Add a "while" clause to generator expressions > Version: $Revision: 68715 $ > Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. Jan 2009) $ > Author: Gerald Britton > Status: Draft > Type: Standards Track > Content-Type: text/plain > Created: 12-Jan-2009 > Python-Version: 3.0 Since 3.0 has already been released, the only versions this feature can be added to are 2.7 and 3.1. Do you intend this to be a 3.x only feature?
-- Regards, Benjamin From daniel at stutzbachenterprises.com Mon Jan 19 17:15:38 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Mon, 19 Jan 2009 10:15:38 -0600 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 9:10 AM, Gerald Britton wrote: > g = (n for n in range(100) if n*n < 50) > > would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider > the numbers from 8 to 99 and reject them all since n*n >= 50 for > numbers in that range. Allowing for a "while" clause would allow > the redundant tests to be short-circuited: > Instead of using a "while" clause, the above example could simply be rewritten: g = (n for n in range(8)) I appreciate that this is a toy example to illustrate the syntax. Do you have some slightly more complex examples, that could not be rewritten by altering the "in" clause? -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From ironfroggy at gmail.com Mon Jan 19 17:29:28 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Mon, 19 Jan 2009 11:29:28 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> I am really unconvinced of the utility of this proposal and quite convinced of the confusing factor it may well add to the current syntax. I would like to see more applicable examples. It would replace uses of takewhile, but that isn't a really often used function. 
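For reference, the takewhile spelling that the proposed syntax would replace, using the PEP's own n*n < 50 example (a sketch in modern Python):

```python
from itertools import takewhile

# Today's spelling of the proposed `(n for n in range(100) while n*n < 50)`:
g = takewhile(lambda n: n * n < 50, range(100))
print(list(g))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Unlike the `if` version, takewhile stops consuming its input at the first failing element (8, since 8*8 >= 50) instead of testing all 100 values.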
So, is there any evidence to support that making this a new syntax would find so many more uses of the construct to be worth it? I believe not. On Mon, Jan 19, 2009 at 10:10 AM, Gerald Britton wrote: > Please find below PEP 3142: Add a "while" clause to generator > expressions. I'm looking for feedback and discussion. > > > PEP: 3142 > Title: Add a "while" clause to generator expressions > Version: $Revision: 68715 $ > Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. Jan 2009) $ > Author: Gerald Britton > Status: Draft > Type: Standards Track > Content-Type: text/plain > Created: 12-Jan-2009 > Python-Version: 3.0 > Post-History: > > > Abstract > > This PEP proposes an enhancement to generator expressions, adding a > "while" clause to complement the existing "if" clause. > > > Rationale > > A generator expression (PEP 289 [1]) is a concise method to serve > dynamically-generated objects to list comprehensions (PEP 202 [2]). > Current generator expressions allow for an "if" clause to filter > the objects that are returned to those meeting some set of > criteria. However, since the "if" clause is evaluated for every > object that may be returned, in some cases it is possible that all > objects would be rejected after a certain point. For example: > > g = (n for n in range(100) if n*n < 50) > > which is equivalent to the using a generator function > (PEP 255 [3]): > > def __gen(exp): > for n in exp: > if n*n < 50: > yield n > g = __gen(iter(range(10))) > > would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider > the numbers from 8 to 99 and reject them all since n*n >= 50 for > numbers in that range. Allowing for a "while" clause would allow > the redundant tests to be short-circuited: > > g = (n for n in range(100) while n*n < 50) > > would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 > since the condition (n*n < 50) is no longer true. 
This would be > equivalent to the generator function: > > def __gen(exp): > for n in exp: > if n*n < 50: > yield n > else: > break > g = __gen(iter(range(100))) > > Currently, in order to achieve the same result, one would need to > either write a generator function such as the one above or use the > takewhile function from itertools: > > from itertools import takewhile > g = takewhile(lambda n: n*n < 50, range(100)) > > The takewhile code achieves the same result as the proposed syntax, > albeit in a longer (some would say "less-elegant") fashion. Also, > the takewhile version requires an extra function call (the lambda > in the example above) with the associated performance penalty. > A simple test shows that: > > for n in (n for n in range(100) if 1): pass > > performs about 10% better than: > > for n in takewhile(lambda n: 1, range(100)): pass > > though they achieve similar results. (The first example uses a > generator; takewhile is an iterator). If similarly implemented, > a "while" clause should perform about the same as the "if" clause > does today. > > The reader may ask if the "if" and "while" clauses should be > mutually exclusive. There are good examples that show that there > are times when both may be used to good advantage. For example: > > p = (p for p in primes() if p > 100 while p < 1000) > > should return prime numbers found between 100 and 1000, assuming > I have a primes() generator that yields prime numbers. Of course, this > could also be achieved like this: > > p = (p for p in (p for p in primes() if p > 100) while p < 1000) > > which is syntactically simpler. Some may also ask if it is possible > to cover dropwhile() functionality in a similar way. I initially thought > of: > > p = (p for p in primes() not while p < 100) > > but I am not sure that I like it since it uses "not" in a non-pythonic > fashion, I think. 
> > Adding a "while" clause to generator expressions maintains the > compact form while adding a useful facility for short-circuiting > the expression. > > Implementation: > > I am willing to assist in the implementation of this feature, although I have > not contributed to Python thus far and would definitely need mentoring. (At > this point I am not quite sure where to begin.) Presently though, I would > find it challenging to fit this work into my existing workload. > > > Acknowledgements > > Raymond Hettinger first proposed the concept of generator > expressions in January 2002. > > > References > > [1] PEP 289: Generator Expressions > http://www.python.org/dev/peps/pep-0289/ > > [2] PEP 202: List Comprehensions > http://www.python.org/dev/peps/pep-0202/ > > [3] PEP 255: Simple Generators > http://www.python.org/dev/peps/pep-0255/ > > > Copyright > > This document has been placed in the public domain. > > > Local Variables: > mode: indented-text > indent-tabs-mode: nil > sentence-end-double-space: t > fill-column: 70 > coding: utf-8 > End: > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com > > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From gerald.britton at gmail.com Mon Jan 19 17:37:23 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Mon, 19 Jan 2009 11:37:23 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> Sure: Say I implement the sieve of Eratosthenes as a prime number generator. 
I want some primes for my application but there are an infinite number of primes. So I would like to write: prime = (p for p in sieve() while p < 1000) instead of: import itertools prime = takewhile(lamda p:p<1000, sieve()) to get the primes under 1000. On Mon, Jan 19, 2009 at 11:15 AM, Daniel Stutzbach wrote: > On Mon, Jan 19, 2009 at 9:10 AM, Gerald Britton > wrote: >> >> g = (n for n in range(100) if n*n < 50) >> >> would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider >> the numbers from 8 to 99 and reject them all since n*n >= 50 for >> numbers in that range. Allowing for a "while" clause would allow >> the redundant tests to be short-circuited: > > Instead of using a "while" clause, the above example could simply be > rewritten: > > g = (n for n in range(8)) > > I appreciate that this is a toy example to illustrate the syntax. Do you > have some slightly more complex examples, that could not be rewritten by > altering the "in" clause? > > -- > Daniel Stutzbach, Ph.D. > President, Stutzbach Enterprises, LLC From gerald.britton at gmail.com Mon Jan 19 17:41:00 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Mon, 19 Jan 2009 11:41:00 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> Message-ID: <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> Thanks Calvin, Could you please expand on your thoughts about possible confusion? That is, how do you see a programmer becoming confused if this option were added to the syntax. On Mon, Jan 19, 2009 at 11:29 AM, Calvin Spealman wrote: > I am really unconvinced of the utility of this proposal and quite > convinced of the confusing factor it may well add to the current > syntax. I would like to see more applicable examples. 
It would replace > uses of takewhile, but that isn't a really often used function. So, is > there any evidence to support that making this a new syntax would find > so many more uses of the construct to be worth it? I believe not. > > On Mon, Jan 19, 2009 at 10:10 AM, Gerald Britton > wrote: >> Please find below PEP 3142: Add a "while" clause to generator >> expressions. I'm looking for feedback and discussion. >> >> >> PEP: 3142 >> Title: Add a "while" clause to generator expressions >> Version: $Revision: 68715 $ >> Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. Jan 2009) $ >> Author: Gerald Britton >> Status: Draft >> Type: Standards Track >> Content-Type: text/plain >> Created: 12-Jan-2009 >> Python-Version: 3.0 >> Post-History: >> >> >> Abstract >> >> This PEP proposes an enhancement to generator expressions, adding a >> "while" clause to complement the existing "if" clause. >> >> >> Rationale >> >> A generator expression (PEP 289 [1]) is a concise method to serve >> dynamically-generated objects to list comprehensions (PEP 202 [2]). >> Current generator expressions allow for an "if" clause to filter >> the objects that are returned to those meeting some set of >> criteria. However, since the "if" clause is evaluated for every >> object that may be returned, in some cases it is possible that all >> objects would be rejected after a certain point. For example: >> >> g = (n for n in range(100) if n*n < 50) >> >> which is equivalent to the using a generator function >> (PEP 255 [3]): >> >> def __gen(exp): >> for n in exp: >> if n*n < 50: >> yield n >> g = __gen(iter(range(10))) >> >> would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider >> the numbers from 8 to 99 and reject them all since n*n >= 50 for >> numbers in that range. 
Allowing for a "while" clause would allow >> the redundant tests to be short-circuited: >> >> g = (n for n in range(100) while n*n < 50) >> >> would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 >> since the condition (n*n < 50) is no longer true. This would be >> equivalent to the generator function: >> >> def __gen(exp): >> for n in exp: >> if n*n < 50: >> yield n >> else: >> break >> g = __gen(iter(range(100))) >> >> Currently, in order to achieve the same result, one would need to >> either write a generator function such as the one above or use the >> takewhile function from itertools: >> >> from itertools import takewhile >> g = takewhile(lambda n: n*n < 50, range(100)) >> >> The takewhile code achieves the same result as the proposed syntax, >> albeit in a longer (some would say "less-elegant") fashion. Also, >> the takewhile version requires an extra function call (the lambda >> in the example above) with the associated performance penalty. >> A simple test shows that: >> >> for n in (n for n in range(100) if 1): pass >> >> performs about 10% better than: >> >> for n in takewhile(lambda n: 1, range(100)): pass >> >> though they achieve similar results. (The first example uses a >> generator; takewhile is an iterator). If similarly implemented, >> a "while" clause should perform about the same as the "if" clause >> does today. >> >> The reader may ask if the "if" and "while" clauses should be >> mutually exclusive. There are good examples that show that there >> are times when both may be used to good advantage. For example: >> >> p = (p for p in primes() if p > 100 while p < 1000) >> >> should return prime numbers found between 100 and 1000, assuming >> I have a primes() generator that yields prime numbers. Of course, this >> could also be achieved like this: >> >> p = (p for p in (p for p in primes() if p > 100) while p < 1000) >> >> which is syntactically simpler. 
Some may also ask if it is possible >> to cover dropwhile() functionality in a similar way. I initially thought >> of: >> >> p = (p for p in primes() not while p < 100) >> >> but I am not sure that I like it since it uses "not" in a non-pythonic >> fashion, I think. >> >> Adding a "while" clause to generator expressions maintains the >> compact form while adding a useful facility for short-circuiting >> the expression. >> >> Implementation: >> >> I am willing to assist in the implementation of this feature, although I have >> not contributed to Python thus far and would definitely need mentoring. (At >> this point I am not quite sure where to begin.) Presently though, I would >> find it challenging to fit this work into my existing workload. >> >> >> Acknowledgements >> >> Raymond Hettinger first proposed the concept of generator >> expressions in January 2002. >> >> >> References >> >> [1] PEP 289: Generator Expressions >> http://www.python.org/dev/peps/pep-0289/ >> >> [2] PEP 202: List Comprehensions >> http://www.python.org/dev/peps/pep-0202/ >> >> [3] PEP 255: Simple Generators >> http://www.python.org/dev/peps/pep-0255/ >> >> >> Copyright >> >> This document has been placed in the public domain. >> >> >> Local Variables: >> mode: indented-text >> indent-tabs-mode: nil >> sentence-end-double-space: t >> fill-column: 70 >> coding: utf-8 >> End: >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com >> >> > > > > -- > Read my blog! I depend on your acceptance of my opinion! I am interesting! 
> http://techblog.ironfroggy.com/ > Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy > From facundobatista at gmail.com Mon Jan 19 17:43:35 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Mon, 19 Jan 2009 14:43:35 -0200 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> Message-ID: 2009/1/19 Gerald Britton : > Could you please expand on your thoughts about possible confusion? > That is, how do you see a programmer becoming confused if this option > were added to the syntax. My main concern about confusion is that you're adding a "while" that actually will behave like a "break" in the "for". I know that with "while" you read it better, but the meaning is different in the generator/list comprehension "for" loop. Regards, -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From daniel at stutzbachenterprises.com Mon Jan 19 17:44:03 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Mon, 19 Jan 2009 10:44:03 -0600 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 10:37 AM, Gerald Britton wrote: > prime = (p for p in sieve() while p < 1000) > prime = takewhile(lamda p:p<1000, sieve()) > I'm pretty sure the extra cost of evaluating the lambda at each step is tiny compared to the cost of the sieve, so I don't think you can make a convincing argument on performance.
Also, you know the latter is actually fewer characters, right? :-) -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From skip at pobox.com Mon Jan 19 17:51:37 2009 From: skip at pobox.com (skip at pobox.com) Date: Mon, 19 Jan 2009 10:51:37 -0600 (CST) Subject: [Python-Dev] Hmmm... __PyObject_NextNotImplemented when tracing lines Message-ID: <20090119165137.0991CD33C78@montanaro.dyndns.org> I see output like this in several tests on my Mac: test_array skipped -- dlopen(/Users/skip/src/python/trunk/build/lib.macosx-10.3-i386-2.7/cPickle.so, 2): Symbol not found: __PyObject_NextNotImplemented Referenced from: /Users/skip/src/python/trunk/build/lib.macosx-10.3-i386-2.7/cPickle.so Expected in: dynamic lookup This is in an up-to-date trunk sandbox running ./python.exe Lib/test/regrtest.py -T -D cover Didn't see that in a non-coverage pass. The following tests give this message: test_exceptions test_array test_collections test_copy_reg test_cpickle test_datetime test_deque test_fractions test_logging test_multiprocessing test_pickle test_pickletools test_re test_slice test_xpickle I see other weirdness as well including a failure of test_sys and the apparent inability to actually write out any useful coverage info, but this is the standout. In general it appears the tracing support in regrtest.py causes a bunch of problems. Skip From gerald.britton at gmail.com Mon Jan 19 17:59:35 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Mon, 19 Jan 2009 11:59:35 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> Message-ID: <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> The sieve is just one example. 
The basic idea is that for some infinite generator (even a very simple one) you want to cut it off after some point. As for the number of characters, I spelled lambda incorrectly (left out a b) and there should be a space after the colon to conform to design guides. So, actually the takewhile version is two characters longer, not counting "import itertools" of course! On Mon, Jan 19, 2009 at 11:44 AM, Daniel Stutzbach wrote: > On Mon, Jan 19, 2009 at 10:37 AM, Gerald Britton > wrote: >> >> prime = (p for p in sieve() while p < 1000) >> prime = takewhile(lamda p:p<1000, sieve()) > > I'm pretty sure the extra cost of evaluating the lambda at each step is tiny > compared to the cost of the sieve, so I don't you can make a convincing > argument on performance. > > Also, you know the latter is actually fewer characters, right? :-) > > -- > Daniel Stutzbach, Ph.D. > President, Stutzbach Enterprises, LLC From ironfroggy at gmail.com Mon Jan 19 17:47:07 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Mon, 19 Jan 2009 11:47:07 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> Message-ID: <76fd5acf0901190847y1c28b659o3e3fb7e4549cabcc@mail.gmail.com> On Mon, Jan 19, 2009 at 11:41 AM, Gerald Britton wrote: > Thanks Calvin, > > Could you please expand on your thoughts about possible confusion? > That is, how do you see a programmer becoming confused if this option > were added to the syntax. I think that the difference between these two lines is too easy to miss among other code: prime = (p for p in sieve() while p < 1000) and: prime = (p for p in sieve() if p < 1000) The very distinctly different behavior is two important to look so similar at a glance. 
And, again, I just don't see takewhile being used all that often. Not nearly often enough to warrant replacing it with a special syntax! Only 178 results from Google CodeSearch, while chain, groupby, and repeat get 4000, 3000, and 1000 respectively. Should those be given their own syntax? > On Mon, Jan 19, 2009 at 11:29 AM, Calvin Spealman wrote: >> I am really unconvinced of the utility of this proposal and quite >> convinced of the confusing factor it may well add to the current >> syntax. I would like to see more applicable examples. It would replace >> uses of takewhile, but that isn't a really often used function. So, is >> there any evidence to support that making this a new syntax would find >> so many more uses of the construct to be worth it? I believe not. >> >> On Mon, Jan 19, 2009 at 10:10 AM, Gerald Britton >> wrote: >>> Please find below PEP 3142: Add a "while" clause to generator >>> expressions. I'm looking for feedback and discussion. >>> >>> >>> PEP: 3142 >>> Title: Add a "while" clause to generator expressions >>> Version: $Revision: 68715 $ >>> Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. Jan 2009) $ >>> Author: Gerald Britton >>> Status: Draft >>> Type: Standards Track >>> Content-Type: text/plain >>> Created: 12-Jan-2009 >>> Python-Version: 3.0 >>> Post-History: >>> >>> >>> Abstract >>> >>> This PEP proposes an enhancement to generator expressions, adding a >>> "while" clause to complement the existing "if" clause. >>> >>> >>> Rationale >>> >>> A generator expression (PEP 289 [1]) is a concise method to serve >>> dynamically-generated objects to list comprehensions (PEP 202 [2]). >>> Current generator expressions allow for an "if" clause to filter >>> the objects that are returned to those meeting some set of >>> criteria. However, since the "if" clause is evaluated for every >>> object that may be returned, in some cases it is possible that all >>> objects would be rejected after a certain point. 
For example: >>> >>> g = (n for n in range(100) if n*n < 50) >>> >>> which is equivalent to using a generator function >>> (PEP 255 [3]): >>> >>> def __gen(exp): >>> for n in exp: >>> if n*n < 50: >>> yield n >>> g = __gen(iter(range(100))) >>> >>> would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider >>> the numbers from 8 to 99 and reject them all since n*n >= 50 for >>> numbers in that range. Allowing for a "while" clause would allow >>> the redundant tests to be short-circuited: >>> >>> g = (n for n in range(100) while n*n < 50) >>> >>> would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 >>> since the condition (n*n < 50) is no longer true. This would be >>> equivalent to the generator function: >>> >>> def __gen(exp): >>> for n in exp: >>> if n*n < 50: >>> yield n >>> else: >>> break >>> g = __gen(iter(range(100))) >>> >>> Currently, in order to achieve the same result, one would need to >>> either write a generator function such as the one above or use the >>> takewhile function from itertools: >>> >>> from itertools import takewhile >>> g = takewhile(lambda n: n*n < 50, range(100)) >>> >>> The takewhile code achieves the same result as the proposed syntax, >>> albeit in a longer (some would say "less-elegant") fashion. Also, >>> the takewhile version requires an extra function call (the lambda >>> in the example above) with the associated performance penalty. >>> A simple test shows that: >>> >>> for n in (n for n in range(100) if 1): pass >>> >>> performs about 10% better than: >>> >>> for n in takewhile(lambda n: 1, range(100)): pass >>> >>> though they achieve similar results. (The first example uses a >>> generator; takewhile is an iterator). If similarly implemented, >>> a "while" clause should perform about the same as the "if" clause >>> does today. >>> >>> The reader may ask if the "if" and "while" clauses should be >>> mutually exclusive. 
There are good examples that show that there >>> are times when both may be used to good advantage. For example: >>> >>> p = (p for p in primes() if p > 100 while p < 1000) >>> >>> should return prime numbers found between 100 and 1000, assuming >>> I have a primes() generator that yields prime numbers. Of course, this >>> could also be achieved like this: >>> >>> p = (p for p in (p for p in primes() if p > 100) while p < 1000) >>> >>> which is syntactically simpler. Some may also ask if it is possible >>> to cover dropwhile() functionality in a similar way. I initially thought >>> of: >>> >>> p = (p for p in primes() not while p < 100) >>> >>> but I am not sure that I like it since it uses "not" in a non-pythonic >>> fashion, I think. >>> >>> Adding a "while" clause to generator expressions maintains the >>> compact form while adding a useful facility for short-circuiting >>> the expression. >>> >>> Implementation: >>> >>> I am willing to assist in the implementation of this feature, although I have >>> not contributed to Python thus far and would definitely need mentoring. (At >>> this point I am not quite sure where to begin.) Presently though, I would >>> find it challenging to fit this work into my existing workload. >>> >>> >>> Acknowledgements >>> >>> Raymond Hettinger first proposed the concept of generator >>> expressions in January 2002. >>> >>> >>> References >>> >>> [1] PEP 289: Generator Expressions >>> http://www.python.org/dev/peps/pep-0289/ >>> >>> [2] PEP 202: List Comprehensions >>> http://www.python.org/dev/peps/pep-0202/ >>> >>> [3] PEP 255: Simple Generators >>> http://www.python.org/dev/peps/pep-0255/ >>> >>> >>> Copyright >>> >>> This document has been placed in the public domain. 
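The dropwhile question raised in the PEP above already has a working spelling in today's itertools; the "if p > 100 while p < 1000" example can be sketched as follows (primes() here is an illustrative trial-division generator, since the PEP does not define one):

```python
from itertools import count, dropwhile, takewhile

def primes():
    # stand-in infinite prime generator (naive trial division)
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

# skip primes up to 100, then stop at the first prime >= 1000
between = list(takewhile(lambda p: p < 1000,
                         dropwhile(lambda p: p <= 100, primes())))
```

The composition reads inside-out rather than left-to-right, which is arguably the readability cost the PEP is reacting to.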
>>> >>> >>> Local Variables: >>> mode: indented-text >>> indent-tabs-mode: nil >>> sentence-end-double-space: t >>> fill-column: 70 >>> coding: utf-8 >>> End: >>> >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com >>> >>> >> >> >> >> -- >> Read my blog! I depend on your acceptance of my opinion! I am interesting! >> http://techblog.ironfroggy.com/ >> Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy >> > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From alexander.belopolsky at gmail.com Mon Jan 19 18:23:59 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 19 Jan 2009 12:23:59 -0500 Subject: [Python-Dev] Hmmm... __PyObject_NextNotImplemented when tracing lines In-Reply-To: <20090119165137.0991CD33C78@montanaro.dyndns.org> References: <20090119165137.0991CD33C78@montanaro.dyndns.org> Message-ID: On Mon, Jan 19, 2009 at 11:51 AM, wrote: > I see output like this in several tests on my Mac: > > test_array skipped -- dlopen(/Users/skip/src/python/trunk/build/lib.macosx-10.3-i386-2.7/cPickle.so, 2): Symbol not found: __PyObject_NextNotImplemented > Referenced from: /Users/skip/src/python/trunk/build/lib.macosx-10.3-i386-2.7/cPickle.so > Expected in: dynamic lookup > > This is in an up-to-date trunk sandbox running > > ./python.exe Lib/test/regrtest.py -T -D cover .. I cannot reproduce this on my Mac. It looks like you may have an out of date python.exe in your sandbox. Please check that $ nm python.exe | grep PyObject_NextNotImplemented 00052940 T __PyObject_NextNotImplemented has a "T" in the second column. 
Note that this symbol seem to have been added recently: $ svn blame -v Objects/object.c .. 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) _PyObject_NextNotImplemented(PyObject *self) 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) { 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) PyErr_Format(PyExc_TypeError, 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) "'%.200s' object is not iterable", 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) Py_TYPE(self)->tp_name); 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) return NULL; 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) } 68560 amaury.forgeotdarc 2009-01-12 18:36:55 -0500 (Mon, 12 Jan 2009) .. So it is possible that you did not pick it up in your build yet. I am puzzled, however, that you don't see problems in a non-coverage run. From steven.bethard at gmail.com Mon Jan 19 18:28:53 2009 From: steven.bethard at gmail.com (Steven Bethard) Date: Mon, 19 Jan 2009 09:28:53 -0800 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 7:10 AM, Gerald Britton wrote: > PEP: 3142 > Title: Add a "while" clause to generator expressions [snip] > numbers in that range. Allowing for a "while" clause would allow > the redundant tests to be short-circuited: > > g = (n for n in range(100) while n*n < 50) > > would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 > since the condition (n*n < 50) is no longer true. This would be > equivalent to the generator function: > > def __gen(exp): > for n in exp: > if n*n < 50: > yield n > else: > break > g = __gen(iter(range(100))) -1. 
As I pointed out on python-ideas, this proposal makes "while" mean something different in a generator expression. Currently, you can read any generator expression as a regular generator by simply indenting each clause and adding a yield statement. For example: (n for n in range(100) if n*n < 50) turns into: for n in range(100): if n*n < 50: yield n Applying that nice correspondence to the proposed "while" generator expression doesn't work though. For example: (n for n in range(100) while n*n < 50) is, under the proposal, *not* equivalent to: for n in range(100): while n*n < 50: yield n I'm strongly against making "while" mean something different in a generator expression than it does in a "while" statement. Steve -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From tjreedy at udel.edu Mon Jan 19 18:51:33 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 19 Jan 2009 12:51:33 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: Gerald Britton wrote: > Please find below PEP 3142: Add a "while" clause to generator > expressions. I'm looking for feedback and discussion. This was already discussed on python-ideas where it got negative feedback. One objection, mentioned by Mathias Panzerbock and Georg Brandl, is that it is redundant with takewhile(). You did mention that in the PEP. The other, posted by Steven Bethard, is that it fundamentally breaks the current semantics of abbreviating (except for iteration variable scoping) an 'equivalent' for loop. This should have been listed in the PEP under Objections (or whatever the section is called). I did not bother to second his objection there but will now. 
-1 Steven summary: "I'm probably just repeating myself here, but the reason not to do it is that the current generator expressions translate almost directly into the corresponding generator statements. Using "while" in the way you've suggested breaks this symmetry, and would make Python harder to learn." Longer presentation: "I think this could end up being confusing. Current generator expressions turn into an equivalent generator function by simply indenting the clauses and adding a yield, for example: (i for i in range(100) if i % 2 == 0) is equivalent to: def gen(): for i in range(100): if i % 2 == 0: yield i Now you're proposing syntax that would no longer work like this. Taking your example: (i for i in range(100) while i <= 50) I would expect this to mean: [meaning, one would expect this to mean, using current rules = tjr] def gen(): for i in range(100): while i <= 50: yield i In short, -1. You're proposing to use an existing keyword in a new way that doesn't match how generator expressions are evaluated." Terry Jan Reedy From gerald.britton at gmail.com Mon Jan 19 19:03:47 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Mon, 19 Jan 2009 13:03:47 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> Duly noted and thanks for the feedback! (just what I was looking for actually). I do disagree with the idea that the proposal, if implemented, would make Python harder to learn. Not sure who would find it harder. Having to find and use takewhile was harder for me. I still find that one counter-intuitive. I would have expected the parameters in the reverse order (take something, while something else is true). Tripped me up a few times, which got me thinking about an alternative. 
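The translation objection Steven and Terry describe can be made concrete with two throwaway generator functions (a sketch; neither name comes from the thread):

```python
from itertools import islice

def proposed(exp):
    # what PEP 3142 wants "(n for n in exp while n*n < 50)" to mean
    for n in exp:
        if n * n < 50:
            yield n
        else:
            break

def mechanical(exp):
    # what the indent-each-clause reading of the same expression gives
    for n in exp:
        while n * n < 50:
            yield n  # n never advances, so n = 0 is yielded forever

stops = list(proposed(range(100)))               # eight values, then done
loops = list(islice(mechanical(range(100)), 5))  # islice guards the infinite loop
```

The first terminates after eight values; the second never leaves n = 0, which is why the mechanical translation rule and the proposed semantics cannot both hold.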
On Mon, Jan 19, 2009 at 12:51 PM, Terry Reedy wrote: > Gerald Britton wrote: >> >> Please find below PEP 3142: Add a "while" clause to generator >> expressions. I'm looking for feedback and discussion. > > This was already discussed on python-ideas where it got negative feedback. > > One objection, mentioned by Mathias Panzerbock and Georg Brandl, is that it > is redundant with takewhile(). You did mention that in the PEP. > > The other, posted by Steven Bethard, is that it fundamentally breaks the > current semantics of abbreviating (except for iteration variable scoping) an > 'equivalent' for loop. This should have been listed in the PEP under > Objections (or whatever the section. I did not bother to second his > objection there but will now. > > -1 > > Steven summary: > "I'm probably just repeating myself here, but the reason not to do it > is that the current generator expressions translate almost directly > into the corresponding generator statements. Using "while" in the way > you've suggested breaks this symmetry, and would make Python harder to > learn." > > Longer presentation: > "I think this could end up being confusing. Current generator > expressions turn into an equivalent generator function by simply > indenting the clauses and adding a yield, for example: > > (i for i in range(100) if i % 2 == 0) > > is equivalent to: > > def gen(): > for i in range(100): > if i % 2 == 0: > yield i > > Now you're proposing syntax that would no longer work like this. > Taking your example: > > (i for i in range(100) while i <= 50) > > I would expect this to mean: > [meaning, one would expect this to mean, using current rules = tjr] > > def gen(): > for i in range(100): > while i <= 50: > yield i > > In short, -1. You're proposing to use an existing keyword in a new way > that doesn't match how generator expressions are evaluated." 
> > Terry Jan Reedy > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/gerald.britton%40gmail.com > From algorias at yahoo.com Mon Jan 19 19:22:51 2009 From: algorias at yahoo.com (Vitor Bosshard) Date: Mon, 19 Jan 2009 10:22:51 -0800 (PST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> Message-ID: <34210.53528.qm@web54404.mail.yahoo.com> ----- Original Message ---- > From: Gerald Britton > To: Terry Reedy > CC: python-dev at python.org > Sent: Monday, January 19, 2009 15:03:47 > Subject: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions > > Duly noted and thanks for the feedback! (just what I was looking for > actually). I do disagree with the idea that the proposal, if > implemented, would make Python harder to learn. Not sure who would > find it harder. Having to find and use takewhile was harder for me. > I still find that one counter-intuitive. I would have expected the > parameters in the reverse order (take something, while something else > is true). Tripped me up a few times, which got me thinking about an > alternative. Are you even sure the list comprehension doesn't already shortcut evaluation? This quick test in 2.6 hints otherwise: >>> a = (i for i in range(10) if i**2<10) >>> a.next() 0 >>> a.next() 1 >>> a.next() 2 >>> a.next() 3 >>> a.next() Traceback (most recent call last): File "<stdin>", line 1, in <module> a.next() StopIteration 
From scott+python-dev at scottdial.com Mon Jan 19 19:41:57 2009 From: scott+python-dev at scottdial.com (Scott Dial) Date: Mon, 19 Jan 2009 13:41:57 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <34210.53528.qm@web54404.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> <34210.53528.qm@web54404.mail.yahoo.com> Message-ID: <4974C975.6010806@scottdial.com> Vitor Bosshard wrote: > Are you even sure the list comprehension doesn't already shortcut evaluation? It does not. The body of the comprehension is evaluated all the way to completion, despite the fact that a.next() does not return until there is a successful test of the if expression. >>> def print_range(n): ... for i in range(n): ... print(i) ... yield i ... >>> a = (i for i in print_range(10) if i**2<10) >>> a.next() 0 0 >>> a.next() 1 1 >>> a.next() 2 2 >>> a.next() 3 3 >>> a.next() 4 5 6 7 8 9 Traceback (most recent call last): File "<stdin>", line 1, in <module> StopIteration -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From gjcarneiro at gmail.com Mon Jan 19 19:43:42 2009 From: gjcarneiro at gmail.com (Gustavo Carneiro) Date: Mon, 19 Jan 2009 18:43:42 +0000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <34210.53528.qm@web54404.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> <34210.53528.qm@web54404.mail.yahoo.com> Message-ID: 2009/1/19 Vitor Bosshard > > > > > ----- Original Message ---- > > From: Gerald Britton > > To: Terry Reedy > > CC: python-dev at python.org > > Sent: Monday, January 19, 2009 15:03:47 > > Subject: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator > expressions > > > > Duly noted and thanks 
for the feedback! (just what I was looking for > > actually). I do disagree with the idea that the proposal, if > > implemented, would make Python harder to learn. Not sure who would > > find it harder. Having to find and use takewhile was harder for me. > > I still find that one counter-intuitive. I would have expected the > > parameters in the reverse order (take something, while something else > > is true). Tripped me up a few times, which got me thinking about an > > alternative. > > > Are you even sure the list comprehension doesn't already shortcut > evaluation? > > This quick test in 2.6 hints otherwise: > > > >>> a = (i for i in range(10) if i**2<10) > >>> a.next() > 0 > >>> a.next() > 1 > >>> a.next() > 2 > >>> a.next() > 3 > >>> a.next() > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > a.next() > StopIteration Does not prove anything. ---- test.py ---- source = iter(xrange(10)) a = (i for i in source if i**2<10) print list(a) print list(source) ---- output ---- $ python /tmp/test.py [0, 1, 2, 3] [] While 'a' is being evaluated, the source iterator is first completely exhausted. If the source iterator is infinite, an infinite loop is created and the program doesn't terminate. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/gjcarneiro%40gmail.com > -- Gustavo J. A. M. Carneiro INESC Porto, Telecommunications and Multimedia Unit "The universe is always one step beyond logic." -- Frank Herbert -------------- next part -------------- An HTML attachment was scrubbed... 
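For contrast with the test.py above, the takewhile spelling really does stop pulling from the source iterator, so an infinite source is safe (a sketch; note that the first failing element is still consumed from the source):

```python
from itertools import takewhile

source = iter(range(10))
a = takewhile(lambda i: i ** 2 < 10, source)

consumed = list(a)        # the items that passed the test
remaining = list(source)  # 4 was eaten by the failing test, 5..9 survive
```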
URL: From alexander.belopolsky at gmail.com Mon Jan 19 19:53:36 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Mon, 19 Jan 2009 13:53:36 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <4974C975.6010806@scottdial.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> <34210.53528.qm@web54404.mail.yahoo.com> <4974C975.6010806@scottdial.com> Message-ID: On Mon, Jan 19, 2009 at 1:41 PM, Scott Dial wrote: > Vitor Bosshard wrote: >> Are you even sure the list comprehension doesn't already shortcut evaluation? > > It does not. The body of the comprehension is evaluated all the way to > completion, .. In addition, the test is evaluated on all items as well: >>> def test(i): ... print "testing", i ... return i**2 < 10 ... >>> a = (i for i in range(10) if test(i)) >>> a.next() testing 0 0 >>> a.next() testing 1 1 >>> a.next() testing 2 2 >>> a.next() testing 3 3 >>> a.next() testing 4 testing 5 testing 6 testing 7 testing 8 testing 9 Traceback (most recent call last): File "", line 1, in StopIteration From p.f.moore at gmail.com Mon Jan 19 20:30:47 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Mon, 19 Jan 2009 19:30:47 +0000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <34210.53528.qm@web54404.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> <34210.53528.qm@web54404.mail.yahoo.com> Message-ID: <79990c6b0901191130g7c6e8bc7ha51bf29eae926a6a@mail.gmail.com> 2009/1/19 Vitor Bosshard : > Are you even sure the list comprehension doesn't already shortcut evaluation? > > This quick test in 2.6 hints otherwise: > > >>>> a = (i for i in range(10) if i**2<10) Yes, but your test, once it becomes true, remains so. 
Consider >>> list(n for n in range(10) if n%2 == 0) [0, 2, 4, 6, 8] I assume that the intention of the while syntax is: >>> list(n for n in range(10) while n%2 == 0) [0] because 1%2 != 0 so the loop stops. Having said that, I'm -1 on the proposal. The requirement is rare enough that the correct place for it *is* in a module - and that's what itertools provides (along with a number of other, equally valid, manipulations). I certainly don't see it as justifying new syntax. Paul. From brett at python.org Mon Jan 19 20:31:33 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 11:31:33 -0800 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901191003v5b67b5faw9557d3658cd4b9c9@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 10:03, Gerald Britton wrote: > Duly noted and thanks for the feedback! (just what I was looking for > actually). I do disagree with the idea that the proposal, if > implemented, would make Python harder to learn. Not sure who would > find it harder. Having to find and use takewhile was harder for me. > I still find that one counter-intuitive. I would have expected the > parameters in the reverse order (take something, while something else > is true). Tripped me up a few times, which got me thinking about an > alternative. > The reason Python would be harder to learn is there is something more to learn. Removing confusion for something from the standard library by adding syntax does not warrant making the language easier. For something to be considered making the language easier it needs to solve a very common idiom in a way that is an improvement over the alternatives. In this instance I don't think the idiom is common enough to warrant the change. 
-Brett > On Mon, Jan 19, 2009 at 12:51 PM, Terry Reedy wrote: >> Gerald Britton wrote: >>> >>> Please find below PEP 3142: Add a "while" clause to generator >>> expressions. I'm looking for feedback and discussion. >> >> This was already discussed on python-ideas where it got negative feedback. >> >> One objection, mentioned by Mathias Panzerbock and Georg Brandl, is that it >> is redundant with takewhile(). You did mention that in the PEP. >> >> The other, posted by Steven Bethard, is that it fundamentally breaks the >> current semantics of abbreviating (except for iteration variable scoping) an >> 'equivalent' for loop. This should have been listed in the PEP under >> Objections (or whatever the section. I did not bother to second his >> objection there but will now. >> >> -1 >> >> Steven summary: >> "I'm probably just repeating myself here, but the reason not to do it >> is that the current generator expressions translate almost directly >> into the corresponding generator statements. Using "while" in the way >> you've suggested breaks this symmetry, and would make Python harder to >> learn." >> >> Longer presentation: >> "I think this could end up being confusing. Current generator >> expressions turn into an equivalent generator function by simply >> indenting the clauses and adding a yield, for example: >> >> (i for i in range(100) if i % 2 == 0) >> >> is equivalent to: >> >> def gen(): >> for i in range(100): >> if i % 2 == 0: >> yield i >> >> Now you're proposing syntax that would no longer work like this. >> Taking your example: >> >> (i for i in range(100) while i <= 50) >> >> I would expect this to mean: >> [meaning, one would expect this to mean, using current rules = tjr] >> >> def gen(): >> for i in range(100): >> while i <= 50: >> yield i >> >> In short, -1. You're proposing to use an existing keyword in a new way >> that doesn't match how generator expressions are evaluated." 
>> >> Terry Jan Reedy >> >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/gerald.britton%40gmail.com >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From greg at krypto.org Mon Jan 19 20:53:13 2009 From: greg at krypto.org (Gregory P. Smith) Date: Mon, 19 Jan 2009 11:53:13 -0800 Subject: [Python-Dev] stuck with dlopen... In-Reply-To: <18798.48005.285151.445565@montanaro.dyndns.org> References: <18798.48005.285151.445565@montanaro.dyndns.org> Message-ID: <52dc1c820901191153k4cec7a47ld641d1cf87f397f9@mail.gmail.com> If you run your python.exe under gdb you should be able to set a future breakpoint on your _PyEval_EvalMiniFrameEx function and debug from there. On Wed, Jan 14, 2009 at 8:28 PM, wrote: > > I've recently been working on generating C functions on-the-fly which > inline > the C code necessary to implement the bytecode in a given Python function. 
> For example, this bytecode: > > >>> dis.dis(f) > 2 0 LOAD_FAST 0 (a) > 3 LOAD_CONST 1 (1) > 6 BINARY_ADD > 7 RETURN_VALUE > > is transformed into this rather boring bit of C code: > > #include "Python.h" > > #include "code.h" > #include "frameobject.h" > #include "eval.h" > #include "opcode.h" > #include "structmember.h" > > #include "opcode_mini.h" > > PyObject * > _PyEval_EvalMiniFrameEx(PyFrameObject *f, int throwflag) > { > > static int jitting = 1; > > PyEval_EvalFrameEx_PROLOG1(); > co = f->f_code; > PyEval_EvalFrameEx_PROLOG2(); > > oparg = 0; > LOAD_FAST_IMPL(oparg); > oparg = 1; > LOAD_CONST_IMPL(oparg); > BINARY_ADD_IMPL(); > RETURN_VALUE_IMPL(); > > PyEval_EvalFrameEx_EPILOG(); > } > > The PROLOG1, PROLOG2 and EPILOG macros are just chunks of code from > PyEval_EvalFrameEx. > > I have the code compiling and linking, and dlopen and dlsym seem to work, > returning apparently valid pointers, but when I try to call the function I > get > > Program received signal EXC_BAD_ACCESS, Could not access memory. > Reason: KERN_PROTECTION_FAILURE at address: 0x0000000c > 0x0058066d in _PyEval_EvalMiniFrameEx (f=0x230d30, throwflag=0) at > MwDLSf.c:17 > > Line 17 is the PROLOG1 macro. I presume it's probably barfed on the very > first instruction. (This is all on an Intel Mac running Leopard BTW.) > > Here are the commands generated to compile and link the C code: > > gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall \ > -Wstrict-prototypes -g -DPy_BUILD_CORE -DNDEBUG \ > -I/Users/skip/src/python/py3k-t/Include \ > -I/Users/skip/src/python/py3k-t -c dTd5cl.c \ > -o /tmp/MwDLSf.o > gcc -L/opt/local/lib -bundle -undefined dynamic_lookup -g \ > /tmp/dTd5cl.o -L/Users/skip/src/python/py3k-t -lpython3.1 \ > -o /tmp/MwDLSf.so > > (It just uses the distutils compiler module to build .so files.) 
The .so > file looks more-or-less ok: > > % otool -L /tmp/MwDLSf.so > /tmp/MwDLSf.so: > /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current > version 111.1.3) > > though nm doesn't show any undefined _Py* symbols so I suspect I'm not > linking it correctly. The Python executable was built without > --enable-shared. I've tried building with that config flag, but that just > gives me fits during debugging because it always wants to find libpython in > the installation directory even if I'm running python.exe from the build > directory. Installing is a little tedious because it relies on a properly > functioning interpreter. > > dlopen is called very simply: > > handle = dlopen(shared, RTLD_NOW); > > I used RTLD_NOW because that's what sys.getdlopenflags() returns. I'm not > calling dlclose for the time being. > > I'm not exactly sure where I should go from here. I'd be more than happy > to > open an item in the issue tracker. I was hoping to get something a bit > closer to working before doing that though. The failure to properly load > the compiled function makes it pretty much impossible to debug the generated > code beyond what the compiler can tell me. > > Any suggestions? > > Skip > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... 
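The symbol-resolution failure described above can also be poked at from Python itself, since ctypes wraps the same dlopen()/dlsym() machinery; this sketch uses libm as a stand-in for the generated bundle (the missing-symbol name is obviously made up):

```python
import ctypes
import ctypes.util

# ctypes.CDLL calls dlopen(); attribute lookup performs dlsym()
libm = ctypes.CDLL(ctypes.util.find_library("m"))

libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
value = libm.cos(0.0)

# an unresolved symbol surfaces as AttributeError at first lookup --
# the Python-level analogue of dlopen/dlsym's "Symbol not found"
try:
    libm.definitely_not_a_real_symbol
    missing_symbol_found = True
except AttributeError:
    missing_symbol_found = False
```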
URL: From sturla at molden.no Mon Jan 19 20:29:31 2009 From: sturla at molden.no (Sturla Molden) Date: Mon, 19 Jan 2009 20:29:31 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <4974D49B.3050409@molden.no> On 1/19/2009 6:51 PM, Terry Reedy wrote: > The other, posted by Steven Bethard, is that it fundamentally breaks the > current semantics of abbreviating (except for iteration variable > scoping) an 'equivalent' for loop. The proposed syntax would suggest that this should be legal as well: for i in iterable while cond: blahblah or perhaps: while cond for i in iterable: blahblah A while-for or for-while loop would be a novel invention, not seen in any other language that I know of. I seriously doubt its usefulness though... Sturla Molden From ncoghlan at gmail.com Mon Jan 19 21:41:18 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Jan 2009 06:41:18 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <4974E56E.2010209@gmail.com> Steven Bethard wrote: > -1. As I pointed out on python-ideas, this proposal makes "while" mean > something different in a generator expression. While I initially found the suggestion in the PEP rather cute, that isn't enough to make it a good idea as a language addition. So, -1 for a few reasons: - the 'idiom' being replaced isn't common enough to justify dedicated syntax - the proposed syntax change isn't significantly easier to understand or significantly faster than the existing alternatives - Steven's more substantial objection that it would break the parallel between generator expressions/list comprehensions and the corresponding statements. Cheers, Nick. P.S. 
Some examples of past syntax changes that *were* accepted (and why): http://mail.python.org/pipermail/python-dev/2008-October/082831.html -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From lkcl at lkcl.net Mon Jan 19 21:53:07 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Mon, 19 Jan 2009 20:53:07 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) using msvcr80 assemblies Message-ID: folks, hi, after some quiet advice i've tracked down a method for compiling python2.5 using msvcr80 that _will_ actually work both under native win32 and also under wine, but it's a _bit_ dodgy, as i couldn't track down where you're supposed to put Microsoft.VC80.CRT, except in the path of the application where it's running from. so, instead, i put the _contents_ of Microsoft.VC80.CRT.manifest into the manifest for the file, and this _does_ actually seem to work. i'm thinking of adding the Microsoft.VC80.CRT.manifest to the rc file (for compilation as a resource) to see if _that_ works, and will report back, but first i wanted to describe what i've done and see what people think: 1) created python_2.5_8.0_mingw_exe.manifest contents as follows, this is _normally_ what is in Microsoft.VC80.CRT _not_ in the .exe.manifest n9On8FItNsK/DmT8UQxu6jYDtWQ= 0KJ/VTwP4OUHx98HlIW2AdW1kuY= YJuB+9Os2oxW4mY+2oC/r8lICZE= 2) created python_2.5_8.0_mingw_exe.rc contents as follows: #include "winuser.h" 2 RT_MANIFEST PC/python_2.5_8.0_mingw_exe.manifest you could get away with 2 24 PC/..... and could exclude the #include 3) added a rule to Makefile.pre.in to create the .res as a binary: # This rule builds the .res file for the Python EXE, required when # linking and using msvcrt80 or above. good luck to us all... 
$(PYTHONEXEMSVRES): $(srcdir)/PC/python_$(VERSION)_$(MSRTVER)_exe.manifest \ $(srcdir)/PC/python_$(VERSION)_$(MSRTVER)_mingw_exe.rc windres --input $(srcdir)/PC/python_$(VERSION)_$(MSRTVER)_mingw_exe.rc \ --output $(PYTHONEXEMSVRES) \ --output-format=coff 4) added $(PYTHONEXEMSVRES) to the objects to be linked. stunningly, this actually works (of course, you need an msvcr80.dll for it to work duh). i tried finding a location to place the Microsoft.VC80.CRT.Manifest, prior to this hack - a wine dump showed this: 0009:trace:actctx:lookup_assembly looking for name=L"Microsoft.VC80.CRT" version=8.0.50727.762 arch=L"x86" 0009:trace:heap:RtlAllocateHeap (0x110000,00000002,00000038): returning 0x115148 0009:trace:file:RtlDosPathNameToNtPathName_U (L"C:\\windows\\winsxs\\manifests",0xff8c7d08,(nil),(nil)) 0009:trace:file:RtlGetFullPathName_U (L"C:\\windows\\winsxs\\manifests" 520 0xff8c79f4 (nil)) attempts to copy the manifest into that directory resulted in "no joy". so, i'm a bit stuck, and would appreciate some advice on whether the above is acceptable (yes i know it makes sure that python.exe can only use one _very_ specific version of msvcr80.dll - and there are currently two: 8.0.50727.762 and 8.0.50608.0). also i'd appreciate some advice on what the _really_ best way to do this is. and on where the hell i'm supposed to put the VC80.CRT manifest so it will actually... _do_ something! l. From kristjan at ccpgames.com Mon Jan 19 22:28:39 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Mon, 19 Jan 2009 21:28:39 +0000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <930F189C8A437347B80DF2C156F7EC7F04D7C90D51@exchis.ccp.ad.local> Are you all certain that this mapping from a generator expression to a for loop isn't just a happy coincidence? 
After all, the generator statement is just a generalization of the list comprehension and that doesn't map quite so directly. I have always taken both expressions at face value, and not tried to map them into something else. Why should you, since they are designed to match the grammar of the English language and make perfect sense if you read them as you would a herring recipe. The suggested "while" keyword expands on this easy-to-understand theme, and the fact that it doesn't fit a mapping that was probably never intentional shouldn't detract from that. K -----Original Message----- From: python-dev-bounces+kristjan=ccpgames.com at python.org [mailto:python-dev-bounces+kristjan=ccpgames.com at python.org] On Behalf Of Terry Reedy Sent: 19. janúar 2009 17:52 To: python-dev at python.org Subject: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions The other, posted by Steven Bethard, is that it fundamentally breaks the current semantics of abbreviating (except for iteration variable scoping) an 'equivalent' for loop. This should have been listed in the PEP under Objections (or whatever the section is called). I did not bother to second his objection there but will now. -1 From skip at pobox.com Mon Jan 19 22:52:04 2009 From: skip at pobox.com (skip at pobox.com) Date: Mon, 19 Jan 2009 15:52:04 -0600 Subject: [Python-Dev] Hmmm... __PyObject_NextNotImplemented when tracing lines In-Reply-To: References: <20090119165137.0991CD33C78@montanaro.dyndns.org> Message-ID: <18804.62980.134343.770744@montanaro.dyndns.org> Alexander> I cannot reproduce this on my Mac. It looks like you may Alexander> have an out of date python.exe in your sandbox. Please check Alexander> that Alexander> $ nm python.exe | grep PyObject_NextNotImplemented Alexander> 00052940 T __PyObject_NextNotImplemented Alexander> has a "T" in the second column. Alexander> Note that this symbol seems to have been added recently: ... I see what the problem is now. I configured with --enable-shared. 
Even though my sandbox was up-to-date, my python.exe was current and my libpython2.7.dylib held the symbol in question, but the *installed* version of libpython2.7.dylib didn't have the symbol: % nm python.exe | grep PyObject_NextNotImplemented % nm libpython2.7.dylib | egrep PyObject_NextNotImplemented 00054200 T __PyObject_NextNotImplemented % nm /Users/skip/local/lib/libpython2.7.dylib | egrep PyObject_NextNotImplemented % I've been bitten by this before. When building with --enable-shared, the executable appears to have the installed shared library location bound into it: % otool -L python.exe python.exe: /Users/skip/local/lib/libpython2.7.dylib (compatibility version 2.7.0, current version 2.7.0) /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 111.1.3) Is this something that we can fix/work around? It seems to me like a wart, probably platform-dependent. Alexander> I am puzzled, however, that you don't see problems in a Alexander> non-coverage run. Yeah, that mystifies me as well, especially given the above output from nm and otool. Skip From LambertDW at Corning.com Mon Jan 19 20:00:02 2009 From: LambertDW at Corning.com (Lambert, David W (S&T)) Date: Mon, 19 Jan 2009 14:00:02 -0500 Subject: [Python-Dev] Add a "while" clause to generator expressions References: Message-ID: <84B204FFB016BA4984227335D8257FBA2736E7@CVCV0XI05.na.corning.com> The proposal is similar to the C do statement: do statement while (expression); which, for whatever reason (infrequency?), like the switch statement has rightly not been adopted into Python. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/ms-tnef Size: 2686 bytes Desc: not available URL: From tjreedy at udel.edu Tue Jan 20 00:52:07 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 19 Jan 2009 18:52:07 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7C90D51@exchis.ccp.ad.local> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7C90D51@exchis.ccp.ad.local> Message-ID: Kristján Valur Jónsson wrote: > Are you all certain that this mapping from a generator expression to > a for loop isn't just a happy coincidence? Yes. The manual *defines* the meaning of a comprehension in terms of the corresponding nested statements. "The comprehension consists of a single expression followed by at least one for clause and zero or more for or if clauses. In this case, the elements of the new container are those that would be produced by considering each of the for or if clauses a block, nesting from left to right, and evaluating the expression to produce an element each time the innermost block is reached." (3.0 doc) It was originally defined as exactly equivalent (with empty list initialization added). The only intended change of the slightly softer newer version is that the target name bindings do not escape the scope of the comprehension. The proposed change would BREAK the definition and intent of what a comprehension means. > After all, the generator statement is just a generalization > of the list comprehension I would call it a variation. Syntactically, a generator expression is a comprehension with parentheses instead of square brackets or curly braces. 
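To make the documented expansion concrete, here is a small sketch (my own illustration, not text from the reference manual) showing a generator expression next to the nested-statement form the manual defines it by:

```python
# A generator expression and the nested "for"/"if" block expansion the
# language reference uses to define comprehension semantics.
gexp = (n * n for n in range(10) if n % 2 == 0)

def expanded(iterable):
    # Each for/if clause becomes a nested block, left to right; the
    # expression is evaluated each time the innermost block is reached.
    for n in iterable:
        if n % 2 == 0:
            yield n * n

assert list(gexp) == list(expanded(range(10)))  # both give [0, 4, 16, 36, 64]
```

The assertion holds precisely because both forms walk the same nested structure.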
tjr From python at rcn.com Tue Jan 20 00:56:22 2009 From: python at rcn.com (Raymond Hettinger) Date: Mon, 19 Jan 2009 15:56:22 -0800 Subject: [Python-Dev] Copyright notices in modules Message-ID: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> Why does numbers.py say: # Copyright 2007 Google, Inc. All Rights Reserved. # Licensed to PSF under a Contributor Agreement. Weren't there multiple contributors including non-Google people? Does Google want to be associated with code that was submitted with no tests? Do we want this sort of stuff in the code? If someone signs a contributor agreement, can we forgo the external copyright comments? Do we want to make a practice of every contributor commenting in the name of the company they were working for at the time (if so, I would have to add the comment to a lot of modules)? Does the copyright concept even apply to an abstract base class (I thought APIs were not subject to copyright, just like database layouts and language definitions)? Raymond From ironfroggy at gmail.com Tue Jan 20 01:42:12 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Mon, 19 Jan 2009 19:42:12 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <76fd5acf0901190847y1c28b659o3e3fb7e4549cabcc@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <76fd5acf0901190829r3c0133e6i49154bae5163483f@mail.gmail.com> <5d1a32000901190841k18450400nbd61fdf878522e59@mail.gmail.com> <76fd5acf0901190847y1c28b659o3e3fb7e4549cabcc@mail.gmail.com> Message-ID: <76fd5acf0901191642q16d6b37cy25a1247aa73e7389@mail.gmail.com> OK, I still don't like the general idea, but I have a suggestion for what I think is a more favorable syntax, anyway. Basically, adding the allowance of an 'else break' when there is an if clause in a generator expression or list comprehension. I still don't think it should be done, but it is a more consistent syntax. 
It has the obvious problem of looking like it might allow an alternative expression, like the if-expression. prime = (p for p in sieve() if p < 1000 else break) On Mon, Jan 19, 2009 at 11:47 AM, Calvin Spealman wrote: > On Mon, Jan 19, 2009 at 11:41 AM, Gerald Britton > wrote: >> Thanks Calvin, >> >> Could you please expand on your thoughts about possible confusion? >> That is, how do you see a programmer becoming confused if this option >> were added to the syntax? > > I think that the difference between these two lines is too easy to > miss among other code: > > prime = (p for p in sieve() while p < 1000) > > and: > > prime = (p for p in sieve() if p < 1000) > > The very distinctly different behavior is too important to look so > similar at a glance. And, again, I just don't see takewhile being used > all that often. Not nearly often enough to warrant replacing it with a > special syntax! Only 178 results from Google CodeSearch, while chain, > groupby, and repeat get 4000, 3000, and 1000 respectively. Should > those be given their own syntax? > >> On Mon, Jan 19, 2009 at 11:29 AM, Calvin Spealman wrote: >>> I am really unconvinced of the utility of this proposal and quite >>> convinced of the confusing factor it may well add to the current >>> syntax. I would like to see more applicable examples. It would replace >>> uses of takewhile, but that isn't a really often used function. So, is >>> there any evidence to support that making this a new syntax would find >>> so many more uses of the construct to be worth it? I believe not. >>> >>> On Mon, Jan 19, 2009 at 10:10 AM, Gerald Britton >>> wrote: >>>> Please find below PEP 3142: Add a "while" clause to generator >>>> expressions. I'm looking for feedback and discussion. >>>> >>>> >>>> PEP: 3142 >>>> Title: Add a "while" clause to generator expressions >>>> Version: $Revision: 68715 $ >>>> Last-Modified: $Date: 2009-01-18 11:28:20 +0100 (So, 18. 
Jan 2009) $ >>>> Author: Gerald Britton >>>> Status: Draft >>>> Type: Standards Track >>>> Content-Type: text/plain >>>> Created: 12-Jan-2009 >>>> Python-Version: 3.0 >>>> Post-History: >>>> >>>> >>>> Abstract >>>> >>>> This PEP proposes an enhancement to generator expressions, adding a >>>> "while" clause to complement the existing "if" clause. >>>> >>>> >>>> Rationale >>>> >>>> A generator expression (PEP 289 [1]) is a concise method to serve >>>> dynamically-generated objects to list comprehensions (PEP 202 [2]). >>>> Current generator expressions allow for an "if" clause to filter >>>> the objects that are returned to those meeting some set of >>>> criteria. However, since the "if" clause is evaluated for every >>>> object that may be returned, in some cases it is possible that all >>>> objects would be rejected after a certain point. For example: >>>> >>>> g = (n for n in range(100) if n*n < 50) >>>> >>>> which is equivalent to using a generator function >>>> (PEP 255 [3]): >>>> >>>> def __gen(exp): >>>> for n in exp: >>>> if n*n < 50: >>>> yield n >>>> g = __gen(iter(range(100))) >>>> >>>> would yield 0, 1, 2, 3, 4, 5, 6 and 7, but would also consider >>>> the numbers from 8 to 99 and reject them all since n*n >= 50 for >>>> numbers in that range. Allowing for a "while" clause would allow >>>> the redundant tests to be short-circuited: >>>> >>>> g = (n for n in range(100) while n*n < 50) >>>> >>>> would also yield 0, 1, 2, 3, 4, 5, 6 and 7, but would stop at 8 >>>> since the condition (n*n < 50) is no longer true. 
This would be >>>> equivalent to the generator function: >>>> >>>> def __gen(exp): >>>> for n in exp: >>>> if n*n < 50: >>>> yield n >>>> else: >>>> break >>>> g = __gen(iter(range(100))) >>>> >>>> Currently, in order to achieve the same result, one would need to >>>> either write a generator function such as the one above or use the >>>> takewhile function from itertools: >>>> >>>> from itertools import takewhile >>>> g = takewhile(lambda n: n*n < 50, range(100)) >>>> >>>> The takewhile code achieves the same result as the proposed syntax, >>>> albeit in a longer (some would say "less-elegant") fashion. Also, >>>> the takewhile version requires an extra function call (the lambda >>>> in the example above) with the associated performance penalty. >>>> A simple test shows that: >>>> >>>> for n in (n for n in range(100) if 1): pass >>>> >>>> performs about 10% better than: >>>> >>>> for n in takewhile(lambda n: 1, range(100)): pass >>>> >>>> though they achieve similar results. (The first example uses a >>>> generator; takewhile is an iterator). If similarly implemented, >>>> a "while" clause should perform about the same as the "if" clause >>>> does today. >>>> >>>> The reader may ask if the "if" and "while" clauses should be >>>> mutually exclusive. There are good examples that show that there >>>> are times when both may be used to good advantage. For example: >>>> >>>> p = (p for p in primes() if p > 100 while p < 1000) >>>> >>>> should return prime numbers found between 100 and 1000, assuming >>>> I have a primes() generator that yields prime numbers. Of course, this >>>> could also be achieved like this: >>>> >>>> p = (p for p in (p for p in primes() if p > 100) while p < 1000) >>>> >>>> which is syntactically simpler. Some may also ask if it is possible >>>> to cover dropwhile() functionality in a similar way. 
I initially thought >>>> of: >>>> >>>> p = (p for p in primes() not while p < 100) >>>> >>>> but I am not sure that I like it since it uses "not" in a non-pythonic >>>> fashion, I think. >>>> >>>> Adding a "while" clause to generator expressions maintains the >>>> compact form while adding a useful facility for short-circuiting >>>> the expression. >>>> >>>> Implementation: >>>> >>>> I am willing to assist in the implementation of this feature, although I have >>>> not contributed to Python thus far and would definitely need mentoring. (At >>>> this point I am not quite sure where to begin.) Presently though, I would >>>> find it challenging to fit this work into my existing workload. >>>> >>>> >>>> Acknowledgements >>>> >>>> Raymond Hettinger first proposed the concept of generator >>>> expressions in January 2002. >>>> >>>> >>>> References >>>> >>>> [1] PEP 289: Generator Expressions >>>> http://www.python.org/dev/peps/pep-0289/ >>>> >>>> [2] PEP 202: List Comprehensions >>>> http://www.python.org/dev/peps/pep-0202/ >>>> >>>> [3] PEP 255: Simple Generators >>>> http://www.python.org/dev/peps/pep-0255/ >>>> >>>> >>>> Copyright >>>> >>>> This document has been placed in the public domain. >>>> >>>> >>>> Local Variables: >>>> mode: indented-text >>>> indent-tabs-mode: nil >>>> sentence-end-double-space: t >>>> fill-column: 70 >>>> coding: utf-8 >>>> End: >>>> >>>> _______________________________________________ >>>> Python-Dev mailing list >>>> Python-Dev at python.org >>>> http://mail.python.org/mailman/listinfo/python-dev >>>> Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com >>>> >>>> >>> >>> >>> >>> -- >>> Read my blog! I depend on your acceptance of my opinion! I am interesting! >>> http://techblog.ironfroggy.com/ >>> Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy >>> >> > > > > -- > Read my blog! I depend on your acceptance of my opinion! I am interesting! 
> http://techblog.ironfroggy.com/ > Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From brett at python.org Tue Jan 20 03:24:42 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 18:24:42 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting Message-ID: I have been writing up the initial docs for importlib and four things struck me: 1. Why is three space indents the preferred indentation level? 2. Should we start using function annotations? 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, c=None]])``) really necessary when default argument values are present? And do we really need to nest the brackets when it is obvious that having one optional argument means the rest are optional as well? 4. The var directive is not working even though the docs list it as a valid directive; so is it still valid and something is broken, or the docs need to be updated? -Brett From stephen at xemacs.org Tue Jan 20 03:51:41 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Tue, 20 Jan 2009 11:51:41 +0900 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> Message-ID: <877i4qzo3m.fsf@xemacs.org> Raymond Hettinger writes: > Does the copyright concept even apply to an abstract base class (I > thought APIs were not subject to copyright, just like database > layouts and language definitions)? Yes, it does, although a public API per se is not subject to copyright, because there's only one way to do it. 
Any comments, internal implementation (eg, names of persistent state variables, members, and constants, and the very existence of those identifiers), and tests are subject to copyright because they are expressive. I believe that a private API also can be subject to copyright, though I'm not as sure of that. The point being that there are good APIs and bad APIs that expose the same functionality, so that API design is expressive. However, if you expose the API and license people to use it, that license makes it impossible to restrict them from using it thereafter. Caveat: IANAL, and this is under U.S. law. From cs at zip.com.au Tue Jan 20 03:54:49 2009 From: cs at zip.com.au (Cameron Simpson) Date: Tue, 20 Jan 2009 13:54:49 +1100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <76fd5acf0901191642q16d6b37cy25a1247aa73e7389@mail.gmail.com> Message-ID: <20090120025449.GA3484@cskk.homeip.net> On 19Jan2009 19:42, Calvin Spealman wrote: | OK, I still don't like the general idea, but I have a suggestion for | what I think is a more favorable syntax, anyway. Basically, adding the | allowance of an 'else break' when there is an if clause in a generator | expression or list comprehension. I still don't think it should be | done, but it is a more consistent syntax. It has the obvious problem | of looking like it might allow an alternative expression, like the | if-expression. | | prime = (p for p in sieve() if p < 1000 else break) If I'm reading your suggestion correctly, it changes the feel of the "if" part quite fundamentally. A bare "if" acts as a filter, yielding only the items matching the condition. With an "else break" it yields only the items matching the condition up to the first which fails. For a range like "p < 1000" this isn't so different, but if the condition doesn't apply to just the leading items then this form becomes almost useless. Consider "p % 3 == 0". Normally that would yield every third item. 
With "else break" it would yield just one item. Any non-continuous filter has the same issue. In my opinion this makes the construct far less applicable than it would appear at first glance. So -1 from me. I saw someone else mention takewhile(), and that's my preferred way of doing the original suggestion ("[ ... while some-constraint ]"), and thus I'm -1 on the original suggestion too (also for the if/while confusion mentioned in the thread). Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ Drive Agressively Rash Magnificently - Nankai Leathers From benjamin at python.org Tue Jan 20 04:01:08 2009 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 19 Jan 2009 21:01:08 -0600 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> On Mon, Jan 19, 2009 at 8:24 PM, Brett Cannon wrote: > I have been writing up the initial docs for importlib and four things struck me: > > 1. Why is three space indents the preferred indentation level? Because it matches up nicely with the length of directives: .. somedirective:: blah ^^^ > > 2. Should we start using function annotations? No, I think that information is better stored in the function description. > > 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, > c=None]])``) really necessary when default argument values are > present? And do we really need to nest the brackets when it is obvious > that having one optional argument means the rest are optional as well? Actually, the defaults are usually documented in the description, not the signature. > > 4. The var directive is not working even though the docs list it as a > valid directive; so is it still valid and something is broken, or the > docs need to be updated? The docs should be updated. "data" is the one to use now. 
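For instance, a quick sketch of the replacement markup (the documented object names here are invented for illustration, not taken from the real docs):

```rst
.. data:: DEFAULT_TIMEOUT

   A module-level constant, documented with the ``data`` directive
   that replaced ``var``.

.. attribute:: Connection.timeout

   An attribute on a class, documented with ``attribute`` instead.
```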
-- Regards, Benjamin From scott+python-dev at scottdial.com Tue Jan 20 04:02:04 2009 From: scott+python-dev at scottdial.com (Scott Dial) Date: Mon, 19 Jan 2009 22:02:04 -0500 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: <49753EAC.6030901@scottdial.com> Brett Cannon wrote: > 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, > c=None]])``) really necessary when default argument values are > present? And do we really need to nest the brackets when it is obvious > that having on optional argument means the rest are optional as well? I can't think of an example off the top of my head, but I'm certain the point of nesting the brackets is to delimit the optional arguments into groups. Documenting your fxn() examples as "fxn(a [, b=None, c=None])" would imply that if you provide 'b' then you must provide 'c', or if we abandon nested brackets, it's ambiguous as to the requirements. Imagine seeing "foo(a [, b=None, c=None [, d=None]])" and I think the rationale for such notation becomes clear. -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From brett at python.org Tue Jan 20 04:11:31 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 19:11:31 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 19:01, Benjamin Peterson wrote: > On Mon, Jan 19, 2009 at 8:24 PM, Brett Cannon wrote: >> I have been writing up the initial docs for importlib and four things struck me: >> >> 1. Why is three space indents the preferred indentation level? > > Because it matches nicely up with the length of directives: > > .. somedirective:: blah > ^^^ > >> >> 2. Should we start using function annotations? > > No, I think that information is better stored in the function description. 
> Why? Putting it in the signature makes it very succinct and a simple glance at the doc to see what type/ABC is expected. >> >> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >> c=None]])``) really necessary when default argument values are >> present? And do we really need to nest the brackets when it is obvious >> that having one optional argument means the rest are optional as well? > > Actually, the defaults are usually documented in the description not > the signature. > OK, but that doesn't make it optimal. And that still doesn't answer my question of whether all of those nested brackets are truly necessary. >> >> 4. The var directive is not working even though the docs list it as a >> valid directive; so is it still valid and something is broken, or the >> docs need to be updated? > > The docs should be updated. "data" is the one to use now. So the 'data' directive turns into any variable, not just module variables? -Brett From benjamin at python.org Tue Jan 20 04:19:17 2009 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 19 Jan 2009 21:19:17 -0600 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> Message-ID: <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> On Mon, Jan 19, 2009 at 9:11 PM, Brett Cannon wrote: > On Mon, Jan 19, 2009 at 19:01, Benjamin Peterson wrote: >> On Mon, Jan 19, 2009 at 8:24 PM, Brett Cannon wrote: >>> >>> 2. Should we start using function annotations? >> >> No, I think that information is better stored in the function description. >> > > Why? Putting it in the signature makes it very succinct and a simple > glance at the doc to see what type/ABC is expected. Well, I guess it's just not been explored. Feel free to try it out if you wish, though. > >>> >>> 3. Are brackets for optional arguments (e.g. 
``def fxn(a [, b=None [, >>> c=None]])``) really necessary when default argument values are >>> present? And do we really need to nest the brackets when it is obvious >>> that having one optional argument means the rest are optional as well? >> >> Actually, the defaults are usually documented in the description not >> the signature. >> > > OK, but that doesn't make it optimal. And that still doesn't answer my > question of whether all of those nested brackets are truly necessary. All I can say is that it is the style/convention. > >>> >>> 4. The var directive is not working even though the docs list it as a >>> valid directive; so is it still valid and something is broken, or the >>> docs need to be updated? >> >> The docs should be updated. "data" is the one to use now. > > So the 'data' directive turns into any variable, not just module variables? "data" is for module-level objects. If you're documenting properties or attributes in classes, use "attribute". -- Regards, Benjamin From python at rcn.com Tue Jan 20 04:23:22 2009 From: python at rcn.com (Raymond Hettinger) Date: Mon, 19 Jan 2009 19:23:22 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting References: Message-ID: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> From: "Brett Cannon" > 1. Why is three space indents the preferred indentation level? I've also wondered about this. It is somewhat inconvenient when bringing in code samples from files with four space indents. Raymond From benjamin at python.org Tue Jan 20 04:30:02 2009 From: benjamin at python.org (Benjamin Peterson) Date: Mon, 19 Jan 2009 21:30:02 -0600 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> Message-ID: <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> On Mon, Jan 19, 2009 at 9:23 PM, Raymond Hettinger wrote: > From: "Brett Cannon" >> >> 1. 
Why is three space indents the preferred indentation level? > > I've also wondered about this. It is somewhat inconvenient > when bringing in code samples from files with four space indents. It's just reST indentation that is 3 spaces. Code examples in the reST can be 4 spaces. -- Regards, Benjamin From ben+python at benfinney.id.au Tue Jan 20 04:49:54 2009 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 20 Jan 2009 14:49:54 +1100 Subject: [Python-Dev] Questions/comments on documentation formatting References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> Message-ID: <87ocy24owt.fsf@benfinney.id.au> "Benjamin Peterson" writes: > On Mon, Jan 19, 2009 at 9:23 PM, Raymond Hettinger wrote: > > From: "Brett Cannon" > >> > >> 1. Why is three space indents the preferred indentation level? > > > > I've also wondered about this. It is somewhat inconvenient > > when bringing in code samples from files with four space indents. > > It's just reST indentation that is 3 spaces. It doesn't have to be. When writing reST, I always make directives so they will line up nicely at 4-space indents: ===== Normal paragraph .. Comment .. foo:: bar ===== -- \ "I was the kid next door's imaginary friend." --Emo Philips | `\ | _o__) | Ben Finney From python at rcn.com Tue Jan 20 04:50:25 2009 From: python at rcn.com (Raymond Hettinger) Date: Mon, 19 Jan 2009 19:50:25 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> Message-ID: <25D181D6496549F59FBADF8445F08C4E@RaymondLaptop1> I have another question about doc formatting. What controls whether section headers get URLs with a custom named jump target instead of a default name like "id1"? 
In particular, look at the URLs for: http://docs.python.org/dev/library/collections.html#id1 versus http://docs.python.org/dev/library/collections.html#abcs-abstract-base-classes I would like all of the targets to have meaningful names. Raymond From ben+python at benfinney.id.au Tue Jan 20 05:06:51 2009 From: ben+python at benfinney.id.au (Ben Finney) Date: Tue, 20 Jan 2009 15:06:51 +1100 Subject: [Python-Dev] Questions/comments on documentation formatting References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> <25D181D6496549F59FBADF8445F08C4E@RaymondLaptop1> Message-ID: <87k58q4o4k.fsf@benfinney.id.au> "Raymond Hettinger" writes: > What controls whether section headers get URLs with a custom named > jump target instead of a default name like "id1"? > > In particular, look at the URLs for: > http://docs.python.org/dev/library/collections.html#id1 versus Hmm. Immediately preceding the heading 

element, there is also an empty element with a meaningful ID, "counter-objects", that can be used to get to the same position in the document: http://docs.python.org/dev/library/collections.html#counter-objects However, the problem is that URL isn't used for the ?Permalink to this headline? link. Perhaps a bug report to the Docutils folks is in order. -- \ ?Nature is trying very hard to make us succeed, but nature does | `\ not depend on us. We are not the only experiment.? ?Richard | _o__) Buckminster Fuller, 1978-04-30 | Ben Finney From brett at python.org Tue Jan 20 06:56:28 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 21:56:28 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> Message-ID: On Mon, Jan 19, 2009 at 19:19, Benjamin Peterson wrote: > On Mon, Jan 19, 2009 at 9:11 PM, Brett Cannon wrote: >> On Mon, Jan 19, 2009 at 19:01, Benjamin Peterson wrote: >>> On Mon, Jan 19, 2009 at 8:24 PM, Brett Cannon wrote: >>>> >>>> 2. Should we start using function annotations? >>> >>> No, I think that information is better stored in the function description. >>> >> >> Why? Putting it in the signature makes it very succinct and a simple >> glance at the doc to see what type/ABC is expected. > > Well, I guess it's just not been explored. Feel free to try it out if > you wish, though. > I just might. >> >>>> >>>> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >>>> c=None]])``) really necessary when default argument values are >>>> present? And do we really need to nest the brackets when it is obvious >>>> that having on optional argument means the rest are optional as well? >>> >>> Actually, the defaults are usually documented in the description not >>> the signature. 
>>> >> >> OK, but that doesn't make it optimal. And that still doesn't answer my >> question of whether all of those nested brackets are truly necessary. > All I can say is that it is the style/convention. > Right, which is why I am questioning it. =) >> >>>> >>>> 4. The var directive is not working even though the docs list it as a >>>> valid directive; so is it still valid and something is broken, or the >>>> docs need to be updated? >>> >>> The docs should be updated. "data" is the one to use now. >> >> So the 'data' directive turns into any variable, not just module variables? > > "data" is for module level objects. If you're documenting properties > or attributes in classes, use "attribute". Then what are we supposed to use for arguments? Just ``literal``? -Brett From brett at python.org Tue Jan 20 06:58:25 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 21:58:25 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <25D181D6496549F59FBADF8445F08C4E@RaymondLaptop1> References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> <25D181D6496549F59FBADF8445F08C4E@RaymondLaptop1> Message-ID: On Mon, Jan 19, 2009 at 19:50, Raymond Hettinger wrote: > I have another question about doc formatting. > > What controls whether section headers get urls with a custom named jump > target instead of a default name like "id1"? > > In particular, look at the urls for: > http://docs.python.org/dev/library/collections.html#id1 versus > > http://docs.python.org/dev/library/collections.html#abcs-abstract-base-classes > I would like all of the targets to have meaningful names. Not sure from a Sphinx perspective, but Docutils does this automatically. You can also always specify the anchor point manually:: .. _abcs-abstract-base-classes: Abstract Base Classes ------------------------------ to get what you want.
-Brett From brett at python.org Tue Jan 20 07:03:15 2009 From: brett at python.org (Brett Cannon) Date: Mon, 19 Jan 2009 22:03:15 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <49753EAC.6030901@scottdial.com> References: <49753EAC.6030901@scottdial.com> Message-ID: On Mon, Jan 19, 2009 at 19:02, Scott Dial wrote: > Brett Cannon wrote: >> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >> c=None]])``) really necessary when default argument values are >> present? And do we really need to nest the brackets when it is obvious >> that having one optional argument means the rest are optional as well? > > I can't think of an example off the top of my head, but I'm certain the > point of nesting the brackets is to delimit the optional arguments into > groups. Documenting your fxn() examples as "fxn(a [, b=None, c=None])" > would imply that if you provide 'b' then you must provide 'c', or if we > abandon nested brackets, it's ambiguous as to the requirements. Imagine > seeing "foo(a [, b=None, c=None [, d=None]])" and I think the rationale > for such notation becomes clear. Well, that is such a rare case that I don't know if it warrants the line noise in the argument declaration. That argument also doesn't make sense in the face of ``fxn(a [, b=None [, c=None]])`` where 'c' almost always has no connection to 'b', but is still supposed to be listed that way because of positional arguments being optional. I understand using them for C functions where there is no such thing as a default argument, but it just doesn't make a ton of sense for Python code. I don't know of anyone who was confused by what help() spit out and not having fancy bracketing.
-Brett From python at rcn.com Tue Jan 20 08:50:35 2009 From: python at rcn.com (Raymond Hettinger) Date: Mon, 19 Jan 2009 23:50:35 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting References: <096C7BCCCDD040149931A722E080E305@RaymondLaptop1> <1afaf6160901191930w41f9e88i4ba6be9c98278618@mail.gmail.com> <25D181D6496549F59FBADF8445F08C4E@RaymondLaptop1> Message-ID: <04AD60E3A4594B78B93222A88E86B21A@RaymondLaptop1> >> In particular, look at the urls for: >> http://docs.python.org/dev/library/collections.html#id1 versus >> >> http://docs.python.org/dev/library/collections.html#abcs-abstract-base-classes >> I would like all of the targets to have meaningful names. [Brett] > Not sure from a sphinx perspective, but Docutils does this > automatically. You can also always specify the anchor point manually:: > > .. _abcs-abstract-base-classes > > Abstract Base Classes > ------------------------------ > > to get what you want. Thanks for the note. It pointed me to the real problem which was that manual anchor points can interfere with the automatically generated names if their names are the same. The solution was to *remove* the manually generated anchor points. Raymond From mal at egenix.com Tue Jan 20 10:17:37 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 20 Jan 2009 10:17:37 +0100 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> Message-ID: <497596B1.4060600@egenix.com> On 2009-01-20 00:56, Raymond Hettinger wrote: > Why does numbers.py say: > > # Copyright 2007 Google, Inc. All Rights Reserved. > # Licensed to PSF under a Contributor Agreement. Because that's where that file originated, I guess. 
This is part of what you have to do for things that are licensed to the PSF under a contributor agreement: http://www.python.org/psf/contrib/contrib-form/ """ Contributor shall identify each Contribution by placing the following notice in its source code adjacent to Contributor's valid copyright notice: "Licensed to PSF under a Contributor Agreement." """ > Weren't there multiple contributors including non-google people? The initial contribution was done by Google (Jeffrey Yasskin AFAIK) and that's where the above lines originated from. > Does Google want to be associated with code that > was submitted with no tests? Only Google can comment on this. > Do we want this sort of stuff in the code? Yes, it is required by the contrib forms. > If someone signs a contributor agreement, can we > forgo the external copyright comments? No. See above. Only the copyright owner can remove such notices. > Do we want to make a practice of every contributor > commenting in the name of the company they were > working for at the time (if so, I would have to add > the comment to a lot of modules)? That depends on the contract a contributor has with the company that funded the work. It's quite common for such contracts to include a clause stating that all IP generated during work time is owned by the employer. > Does the copyright concept even apply to an > abstract base class (I thought APIs were not > subject to copyright, just like database layouts > and language definitions)? It applies to the written program text. You are probably thinking about other IP rights such as patents or designs. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 20 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... 
http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From fuzzyman at voidspace.org.uk Tue Jan 20 11:02:14 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 20 Jan 2009 10:02:14 +0000 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <497596B1.4060600@egenix.com> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> Message-ID: <4975A126.6070309@voidspace.org.uk> M.-A. Lemburg wrote: > [snip...] > >> Does the copyright concept even apply to an >> abstract base class (I thought APIs were not >> subject to copyright, just like database layouts >> and language definitions)? >> > > It applies to the written program text. You are probably > thinking about other IP rights such as patents or designs. > > You need to read Van Lindberg's excellent book on intellectual property rights and open source (which is about American law and European law will be different). Mere collections of facts are not copyrightable as they are not creative (the basis of copyright) and this is presumed to apply to parts of software like header files and interface descriptions - which could easily apply to ABCs in Python. 
I recommend his book by the way - I'm about half way through so far and it is highly readable. Michael Foord -- http://www.ironpythoninaction.com/ From fuzzyman at voidspace.org.uk Tue Jan 20 11:06:38 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 20 Jan 2009 10:06:38 +0000 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: <49753EAC.6030901@scottdial.com> Message-ID: <4975A22E.3030402@voidspace.org.uk> Brett Cannon wrote: > On Mon, Jan 19, 2009 at 19:02, Scott Dial > wrote: > >> Brett Cannon wrote: >> >>> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >>> c=None]])``) really necessary when default argument values are >>> present? And do we really need to nest the brackets when it is obvious >>> that having one optional argument means the rest are optional as well? >>> >> I can't think of an example off the top of my head, but I'm certain the >> point of nesting the brackets is to delimit the optional arguments into >> groups. Documenting your fxn() examples as "fxn(a [, b=None, c=None])" >> would imply that if you provide 'b' then you must provide 'c', or if we >> abandon nested brackets, it's ambiguous as to the requirements. Imagine >> seeing "foo(a [, b=None, c=None [, d=None]])" and I think the rationale >> for such notation becomes clear. >> > > Well, that is such a rare case that I don't know if it warrants the > line noise in the argument declaration. That argument also doesn't > make sense in the face of ``fxn(a [, b=None [, c=None]])`` where 'c' > almost always has no connection to 'b', but is still supposed to be > listed that way because of positional arguments being optional. I > understand using them for C functions where there is no such thing as > a default argument, but it just doesn't make a ton of sense for Python > code. I don't know of anyone who was confused by what help() spit out > and not having fancy bracketing.
> > I think the square bracketing is ugly and does nothing for clarity or readability. The sooner it can be phased out the better. Function annotations should probably only be used in API descriptions where those annotations actually exist - otherwise when there are real annotations you have a conflict on how to indicate that in the documentation. Michael > -Brett > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ From ncoghlan at gmail.com Tue Jan 20 11:15:49 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 20 Jan 2009 20:15:49 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <930F189C8A437347B80DF2C156F7EC7F04D7C90D51@exchis.ccp.ad.local> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <930F189C8A437347B80DF2C156F7EC7F04D7C90D51@exchis.ccp.ad.local> Message-ID: <4975A455.6080208@gmail.com> Kristján Valur Jónsson wrote: > Are you all certain that this mapping from a generator expression to > a for loop isn't just a happy coincidence? After all, the generator > statement is just a generalization of the list comprehension and that > doesn't map quite so directly. The mapping of the for and if clauses is identical for both generator expressions and the various flavours of comprehension. It's only the outer wrappings (creating a generator/dict/set/list) and the innermost loop body (yield statement/item assignment/set add/list append) that differ between the constructs. As Terry noted, it's even defined that way in the language reference - the expressions are pure syntactic sugar for the corresponding statements.
While this doesn't often matter in practice (since people tend to switch to using the statement based versions rather than writing convoluted multiple clause comprehensions), it's still an important symmetry that matters greatly to Python VM implementers so any proposed changes need to take it into account. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From mal at egenix.com Tue Jan 20 13:28:54 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Tue, 20 Jan 2009 13:28:54 +0100 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <4975A126.6070309@voidspace.org.uk> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <4975A126.6070309@voidspace.org.uk> Message-ID: <4975C386.10901@egenix.com> On 2009-01-20 11:02, Michael Foord wrote: > M.-A. Lemburg wrote: >> [snip...] >> >>> Does the copyright concept even apply to an >>> abstract base class (I thought APIs were not >>> subject to copyright, just like database layouts >>> and language definitions)? >>> >> >> It applies to the written program text. You are probably >> thinking about other IP rights such as patents or designs. >> >> > > You need to read Van Lindberg's excellent book on intellectual property > rights and open source (which is about American law and European law > will be different). Mere collections of facts are not copyrightable as > they are not creative (the basis of copyright) and this is presumed to > apply to parts of software like header files and interface descriptions > - which could easily apply to ABCs in Python. I doubt that you can make such assumptions in general. It's a case-by-case decision and also one that depends on the copyright law or convention you assume. See e.g. 
the WIPO copyright treaty: http://www.wipo.int/treaties/en/ip/wct/trtdocs_wo033.html#P56_5626 and the Berne Convention: http://www.wipo.int/treaties/en/ip/berne/trtdocs_wo001.html#P85_10661 and TRIPS: http://www.wto.org/english/docs_e/legal_e/27-trips_04_e.htm#1 That said, for numbers.py there's certainly enough creativity in that file to enjoy copyright protection. > I recommend his book by the way - I'm about half way through so far and > it is highly readable Thanks for the pointer. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 20 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From luke.leighton at googlemail.com Tue Jan 20 14:02:25 2009 From: luke.leighton at googlemail.com (Luke Kenneth Casson Leighton) Date: Tue, 20 Jan 2009 13:02:25 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now Message-ID: folks, hi, this is a fairly important issue for python development interoperability - martin mentioned that releases of mingw-compiled python, if done with a non-interoperable version of msvcrt, would cause much mayhem. well, compiling python on mingw with msvcr80 _can_ be done; using it can also be a simple matter of creating a python.exe.manifest file, but i can't actually do any testing because it doesn't work under wine. 
so, pending any further advice and guidance from anyone which allows me to successfully proceed, i'm not going to continue to compile - or release - python2.5 *or* python2.6 builds (when i get round to that) using msvcr80 or msvcr9X. one issue in favour of this decision is that the DLL that's produced by the autoconf build process is "libpython2.5.dll.a" - not "python2.5.dll". it has a different name. it should be abundantly clear to users and developers that "if name equals libpython2.5.dll.a then duh build equals different". additionally, the setup.py distutils all goes swimmingly well and lovely - using libpython2.5.dll.a. the only issue which _is_ going to throw a spanner in the works is that people who download win32-built precompiled c-based modules are going to find that hey, "it don't work!" and the answer will have to be "go get a version of that module, compiled with mingw, not MSVC". of course - if python for win32 ENTIRELY DROPPED msvc as a development platform, and went for an entirely free software development toolchain, then this problem goes away. thoughts, anyone? l. From tlesher at gmail.com Tue Jan 20 14:11:25 2009 From: tlesher at gmail.com (Tim Lesher) Date: Tue, 20 Jan 2009 08:11:25 -0500 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: Message-ID: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> On Tue, Jan 20, 2009 at 08:02, Luke Kenneth Casson Leighton wrote: > of course - if python for win32 ENTIRELY DROPPED msvc as a development > platform, and went for an entirely free software development > toolchain, then this problem goes away. That's a non-starter for anyone who incorporates Python in an existing MSVC-based development environment. When in Rome... 
-- Tim Lesher From fuzzyman at voidspace.org.uk Tue Jan 20 14:13:16 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 20 Jan 2009 13:13:16 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> Message-ID: <4975CDEC.8010408@voidspace.org.uk> Tim Lesher wrote: > On Tue, Jan 20, 2009 at 08:02, Luke Kenneth Casson Leighton > wrote: > >> of course - if python for win32 ENTIRELY DROPPED msvc as a development >> platform, and went for an entirely free software development >> toolchain, then this problem goes away. >> > > That's a non-starter for anyone who incorporates Python in an existing > MSVC-based development environment. > > When in Rome... > > There would also be a significant performance cost. The PGO (Profile Guided Optimisation) compilation of Visual Studio is impressive. Michael -- http://www.ironpythoninaction.com/ From lists at cheimes.de Tue Jan 20 14:44:03 2009 From: lists at cheimes.de (Christian Heimes) Date: Tue, 20 Jan 2009 14:44:03 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: Message-ID: Luke Kenneth Casson Leighton schrieb: > of course - if python for win32 ENTIRELY DROPPED msvc as a development > platform, and went for an entirely free software development > toolchain, then this problem goes away. > > thoughts, anyone? That's not going to happen anytime soon. As long as Microsoft Visual Studio support is feasible, we will stick to VS. WINE support is not a top priority for us. 
Christian From lkcl at lkcl.net Tue Jan 20 15:18:42 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Tue, 20 Jan 2009 14:18:42 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> Message-ID: On Tue, Jan 20, 2009 at 1:11 PM, Tim Lesher wrote: > On Tue, Jan 20, 2009 at 08:02, Luke Kenneth Casson Leighton > wrote: >> of course - if python for win32 ENTIRELY DROPPED msvc as a development >> platform, and went for an entirely free software development >> toolchain, then this problem goes away. > > That's a non-starter for anyone who incorporates Python in an existing > MSVC-based development environment. surely incorporating libpython2.5.dll.a or libpython2.6.dll.a, along with the .def and the importlib that's generated with dlldump, unless i'm missing something, would be a simple matter, yes? > When in Rome... yeah they said the same thing about "gas ovens", too. not the nazi gas ovens, the phrase my mother used to say "if someone stuck their head in a gas oven, would you do the same?". > There would also be a significant performance cost. > The PGO (Profile Guided Optimisation) compilation of > Visual Studio is impressive. i'd say "great" - but given a choice of "impressive profile guided optimisation plus a proprietary compiler, proprietary operating system _and_ being forced to purchase a system _capable_ of running said proprietary compiler, said proprietary operating system, _and_ giving up free software principles _and_ having to go through patch-pain, install-pain _and_ being forced to use a GUI-based IDE for compilation" or "free software tools and downloads the use of which means i am beholden to NOONE", it's a simple choice for me to make - maybe not for other people. l. 
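For readers following the python.exe.manifest approach mentioned earlier in this thread: such a manifest declares a dependency on the CRT side-by-side assembly so the loader binds the executable to msvcr80. A sketch of the general shape follows; the version number and architecture are example values only and would have to match the CRT actually installed (the publicKeyToken shown is the usual Microsoft VC CRT token, but verify all values against your runtime):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <dependency>
    <dependentAssembly>
      <!-- Example values: the version must match the installed
           Microsoft.VC80.CRT assembly (8.0.50727.762 is the VS2005 SP1
           CRT); processorArchitecture must match the build target. -->
      <assemblyIdentity type="win32" name="Microsoft.VC80.CRT"
          version="8.0.50727.762" processorArchitecture="x86"
          publicKeyToken="1fc8b3b9a1e18e3b" />
    </dependentAssembly>
  </dependency>
</assembly>
```

The file sits next to the executable as python.exe.manifest (or is embedded as a resource); without a matching assembly installed, loading fails the way described above.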
From luke.leighton at googlemail.com Tue Jan 20 15:20:07 2009 From: luke.leighton at googlemail.com (Luke Kenneth Casson Leighton) Date: Tue, 20 Jan 2009 14:20:07 +0000 Subject: [Python-Dev] one last go at msvcr80 / msvcr90 assemblies - mingw build of python Message-ID: could someone kindly send me the assembly files that are created by a proprietary win32 build of python2.5, 2.6 and trunk, the ones used to create the dll _and_ the python.exe many thanks. From gerald.britton at gmail.com Tue Jan 20 15:24:32 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Tue, 20 Jan 2009 09:24:32 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <20090120141824.GN11140@ruber.office.udmvt.ru> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> Message-ID: <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> hmmm...doesn't: if n*n < 50 or raise StopIteration() really mean, "Return an integer in the range 0-99 if n-squared is less than fifty or the statement 'raise StopIteration()' returns True" ? I'm not sure that that will work. On Tue, Jan 20, 2009 at 9:18 AM, wrote: > On Mon, Jan 19, 2009 at 10:10:00AM -0500, Gerald Britton wrote: >> Please find below PEP 3142: Add a "while" clause to generator >> expressions. I'm looking for feedback and discussion. >> > ... >> g = (n for n in range(100) while n*n < 50) > > May I suggest you this variant? > > def raiseStopIteration(): > raise StopIteration > > g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) > > Well, there are more characters... > > But it is not using any syntax changes and does not require any approval > to be functional. Yet it is as fast as the proposed variant, does not require > modules and, I hope, will not confuse you or anyone else. > > > -- > Alexey G. 
Shpagin > From python-3000 at udmvt.ru Tue Jan 20 15:18:24 2009 From: python-3000 at udmvt.ru (python-3000 at udmvt.ru) Date: Tue, 20 Jan 2009 18:18:24 +0400 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> Message-ID: <20090120141824.GN11140@ruber.office.udmvt.ru> On Mon, Jan 19, 2009 at 10:10:00AM -0500, Gerald Britton wrote: > Please find below PEP 3142: Add a "while" clause to generator > expressions. I'm looking for feedback and discussion. > ... > g = (n for n in range(100) while n*n < 50) May I suggest you this variant? def raiseStopIteration(): raise StopIteration g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) Well, there are more characters... But it is not using any syntax changes and does not require any approval to be functional. Yet it is as fast as the proposed variant, does not require modules and, I hope, will not confuse you or anyone else. -- Alexey G. Shpagin From benjamin at python.org Tue Jan 20 15:26:38 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 20 Jan 2009 08:26:38 -0600 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> Message-ID: <1afaf6160901200626p66f731c8nfbbfce4d6d20eca1@mail.gmail.com> On Mon, Jan 19, 2009 at 11:56 PM, Brett Cannon wrote: > On Mon, Jan 19, 2009 at 19:19, Benjamin Peterson wrote: >> On Mon, Jan 19, 2009 at 9:11 PM, Brett Cannon wrote: >>> On Mon, Jan 19, 2009 at 19:01, Benjamin Peterson wrote: >>>> On Mon, Jan 19, 2009 at 8:24 PM, Brett Cannon wrote: >>>>> >>>>> 2. Should we start using function annotations? >>>> >>>> No, I think that information is better stored in the function description. >>>> >>> >>> Why? 
Putting it in the signature makes it very succinct and a simple >>> glance at the doc to see what type/ABC is expected. >> >> Well, I guess it's just not been explored. Feel free to try it out if >> you wish, though. >> > > I just might. We might be opening a can of worms, though. Do we document everything that takes a dictionary argument with collections.Mapping or everything that takes an integer numbers.Rational? What if multiple types are possible? >>>>> 4. The var directive is not working even though the docs list it as a >>>>> valid directive; so is it still valid and something is broken, or the >>>>> docs need to be updated? >>>> >>>> The docs should be updated. "data" is the one to use now. >>> >>> So the 'data' directive turns into any variable, not just a module variables? >> >> "data" is for module level objects. If you're documenting properties >> or attributes in classes, use "attribute". > > Then what are we supposed to use for arguments? Just ``literal``? No, use *some_argument*. -- Regards, Benjamin From p.f.moore at gmail.com Tue Jan 20 15:39:37 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Tue, 20 Jan 2009 14:39:37 +0000 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <1afaf6160901200626p66f731c8nfbbfce4d6d20eca1@mail.gmail.com> References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> <1afaf6160901200626p66f731c8nfbbfce4d6d20eca1@mail.gmail.com> Message-ID: <79990c6b0901200639y16d7d856o4917b38b92d5be52@mail.gmail.com> 2009/1/20 Benjamin Peterson : > We might be opening a can of worms, though. Do we document everything > that takes a dictionary argument with collections.Mapping or > everything that takes an integer numbers.Rational? What if multiple > types are possible? No. Only document things as taking an ABC argument if they actually *do* only take that ABC.
def f(dct): return dct['a'] does not require a collections.Mapping argument, just something that implements indexing-by-strings. Even with ABCs available, I thought that duck typing was still expected to be the norm. If a function does a type-test for an ABC, it makes sense to document it as requiring that ABC (to flag to users that they may need to register their own types with the ABC). Otherwise, it does not. Paul. From ronaldoussoren at mac.com Tue Jan 20 16:01:43 2009 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Tue, 20 Jan 2009 16:01:43 +0100 Subject: [Python-Dev] bundlebuilder broken in 2.6 In-Reply-To: References: <7043CB7C-18F4-4E16-AE0C-CDA6BA311044@barrys-emacs.org> Message-ID: On 18 Jan, 2009, at 18:10, Barry Scott wrote: >> >> >> While the build should be fixed for 2.6+ (I'll send a patch), note >> that >> bundlebuilder is gone in 3.0. > > What is the replacement for bundlebuilder for 3.0? Lack of > bundlebuilder becomes a serious porting problem for me. > I deliver pysvn Workbench as a bundle to simplify installation > by my users. py2app, which hasn't been ported to python 3.0 yet (AFAIK). Ronald From arigo at tunes.org Tue Jan 20 15:56:40 2009 From: arigo at tunes.org (Armin Rigo) Date: Tue, 20 Jan 2009 15:56:40 +0100 Subject: [Python-Dev] Adapt test suite for other Python impls Message-ID: <20090120145640.GA4958@code0.codespeak.net> Hi all, There is a pending patch issue at http://bugs.python.org/issue4242 which proposes to tag, in the CPython test suite, which tests are general language tests (the vast majority) and which ones are specific to CPython. The patch would add a couple of helpful functions to test_support.py (http://bugs.python.org/file12718/test-impl-details-2.diff). A bientot, Armin.
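The patch Armin references isn't reproduced here, but the general shape of such a tagging helper can be sketched in present-day unittest terms (all names here are hypothetical illustrations, not the actual test_support API, which may differ):

```python
import platform
import unittest

def impl_detail(msg="CPython implementation detail"):
    """Return a decorator that skips a test on non-CPython interpreters.

    Hypothetical sketch of the kind of helper the patch proposes:
    tests not wrapped with it are treated as general language tests.
    """
    return unittest.skipUnless(
        platform.python_implementation() == "CPython", msg)

class RefcountTests(unittest.TestCase):
    @impl_detail()
    def test_getrefcount(self):
        # sys.getrefcount is meaningful only on a refcounting VM,
        # so this test is tagged as CPython-specific.
        import sys
        self.assertGreaterEqual(sys.getrefcount([]), 1)
```

On PyPy, Jython, or IronPython the decorated test would report as skipped rather than failed, which is the whole point of the tagging.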
From curt at hagenlocher.org Tue Jan 20 16:29:54 2009 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Tue, 20 Jan 2009 07:29:54 -0800 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> Message-ID: On Tue, Jan 20, 2009 at 6:18 AM, Luke Kenneth Casson Leighton wrote: > > yeah they said the same thing about "gas ovens", too. not the nazi > gas ovens, the phrase my mother used to say "if someone stuck their > head in a gas oven, would you do the same?". I don't know who is forcing you to use a platform that you hate so much, but I respectfully suggest that this person is not on any of these mailing lists. -- Curt Hagenlocher curt at hagenlocher.org From python-3000 at udmvt.ru Tue Jan 20 16:38:40 2009 From: python-3000 at udmvt.ru (Alexey G. Shpagin) Date: Tue, 20 Jan 2009 19:38:40 +0400 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> Message-ID: <20090120153840.GO11140@ruber.office.udmvt.ru> On Tue, Jan 20, 2009 at 09:24:32AM -0500, Gerald Britton wrote: > hmmm...doesn't: > > if n*n < 50 or raise StopIteration() > > really mean, "Return an integer in the range 0-99 if n-squared is less > than fifty or the statement 'raise StopIteration()' returns True" ? > > I'm not sure that that will work. Well, your variant will trigger a syntax error (and so will surely not work). To make it work we need a function that raises StopIteration, exactly as I have suggested.
> > On Tue, Jan 20, 2009 at 9:18 AM, wrote: > > On Mon, Jan 19, 2009 at 10:10:00AM -0500, Gerald Britton wrote: > >> Please find below PEP 3142: Add a "while" clause to generator > >> expressions. I'm looking for feedback and discussion. > >> > > ... > >> g = (n for n in range(100) while n*n < 50) > > > > May I suggest you this variant? > > > > def raiseStopIteration(): > > raise StopIteration > > > > g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) -- Alexey G. Shpagin From benjamin at python.org Tue Jan 20 16:43:33 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 20 Jan 2009 09:43:33 -0600 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: <79990c6b0901200639y16d7d856o4917b38b92d5be52@mail.gmail.com> References: <1afaf6160901191901g1d634947h8d53862282c18b0e@mail.gmail.com> <1afaf6160901191919t5d80dbeete4ee13b2c82772f2@mail.gmail.com> <1afaf6160901200626p66f731c8nfbbfce4d6d20eca1@mail.gmail.com> <79990c6b0901200639y16d7d856o4917b38b92d5be52@mail.gmail.com> Message-ID: <1afaf6160901200743s4d7e43b0j640d55939e6e2595@mail.gmail.com> On Tue, Jan 20, 2009 at 8:39 AM, Paul Moore wrote: > 2009/1/20 Benjamin Peterson : >> We might be opening a can of worms, though. Do we document everything >> that takes a dictionary argument with collections.Mapping or >> everything that takes a integer numbers.Rationale? What if multiple >> types are possible? > > No. Only document things as taking an ABC argument if they actually > *do* only take that ABC. > > def f(dct): > return dct['a'] > > does not require a collections.Mapping argument, just something that > implements indexing-by-strings. Even with ABCs available, I thought > that duck typing was still expected to be the norm. That's exactly why I don't think ABCs would do much good. There are almost no functions which absolutely require a certain interface. So use of annotations would be rare. 
> > If a function does a type-test for an ABC, it makes sense to document > it as requiring that ABC (to flag to users that they may need to > register their own types with the ABC), Otherwise, it does not. > > Paul. > -- Regards, Benjamin From gerald.britton at gmail.com Tue Jan 20 16:45:27 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Tue, 20 Jan 2009 10:45:27 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <20090120153840.GO11140@ruber.office.udmvt.ru> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> Message-ID: <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> OK, so your suggestion: g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) really means "return in in the range 0-99 if n-squared is less than 50 or the function raiseStopIteration() returns True". How would this get the generator to stop once n*n >=50? It looks instead like the first time around, StopIteration will be raised and (presumably) the generator will terminate. On Tue, Jan 20, 2009 at 10:38 AM, Alexey G. Shpagin wrote: > On Tue, Jan 20, 2009 at 09:24:32AM -0500, Gerald Britton wrote: >> hmmm...doesn't: >> >> if n*n < 50 or raise StopIteration() >> >> really mean, "Return an integer in the range 0-99 if n-squared is less >> than fifty or the statement 'raise StopIteration()' returns True" ? >> >> I'm not sure that that will work. > Well, your variant will trigger syntax error (and will not work surely). > > To make it work we need a function, that raises StopIteration. > exactly as I have suggested. > >> >> On Tue, Jan 20, 2009 at 9:18 AM, wrote: >> > On Mon, Jan 19, 2009 at 10:10:00AM -0500, Gerald Britton wrote: >> >> Please find below PEP 3142: Add a "while" clause to generator >> >> expressions. 
I'm looking for feedback and discussion. >> >> >> > ... >> >> g = (n for n in range(100) while n*n < 50) >> > >> > May I suggest you this variant? >> > > >> > def raiseStopIteration(): >> > raise StopIteration >> > >> > g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) > > -- > Alexey G. Shpagin > From stephen at xemacs.org Tue Jan 20 16:54:57 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Wed, 21 Jan 2009 00:54:57 +0900 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <4975C386.10901@egenix.com> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <4975A126.6070309@voidspace.org.uk> <4975C386.10901@egenix.com> Message-ID: <87fxjegege.fsf@xemacs.org> M.-A. Lemburg writes: > On 2009-01-20 11:02, Michael Foord wrote: > > Mere collections of facts are not copyrightable as they are not > > creative (the basis of copyright) That's incorrect in the U.S.; what is copyrightable is an *original work of expression fixed in some medium*. "Original" is closely related to "creative", but it's not the same. The emphasis is on novelty, not on the intellectual power involved. So, for example, you can copyright a set of paint splashes on paper, as long as they're *new* paint splashes. The real issue here, however, is "expression". What's important is whether there are different ways to say it. So you can indeed copyright the phone book or a dictionary, which *does* protect such things as unusual use of typefaces or color to aid understanding. What you can't do is prevent someone from publishing another phone book or dictionary based on the same facts, and since "put it in alphabetical order" hasn't been an original form of expression since Aristotle or so, they can alphabetize their phone book or dictionary, and it is going to look a lot like yours. On the other hand, ABCs are not a "mere collection of facts". 
They are subject to various forms of organization (top down, bottom up, alphabetical order, etc), and that organization will in general be copyrightable. Also, unless your ABCs are all independent of each other, you will be making choices about when to derive and when to define from scratch. That aspect of organization is expressive, and once written down ("fixed in a medium") it is copyrightable. > > I recommend his book by the way - I'm about half way through so far and > > it is highly readable Larry Rosen's book is also good. From python-3000 at udmvt.ru Tue Jan 20 16:57:55 2009 From: python-3000 at udmvt.ru (Alexey G. Shpagin) Date: Tue, 20 Jan 2009 19:57:55 +0400 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> Message-ID: <20090120155755.GP11140@ruber.office.udmvt.ru> On Tue, Jan 20, 2009 at 10:45:27AM -0500, Gerald Britton wrote: > OK, so your suggestion: > > g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) > > really means "return in in the range 0-99 if n-squared is less than 50 > or the function raiseStopIteration() returns True". > > How would this get the generator to stop once n*n >=50? It looks > instead like the first time around, StopIteration will be raised and > (presumably) the generator will terminate. Just test it. After the generator is terminated, no one will call range(100).next() method, if I really understand you. Maybe (as suggested before with 'if ... else break`) we should rename function raiseStopIteration() to else_break(), since it looks to me being a 'if ... else break's implementation with functions. 
Example will look like g = (n for n in range(100) if n*n < 50 or else_break()) That's to the matter of taste, I think. -- Alexey G. Shpagin From gerald.britton at gmail.com Tue Jan 20 17:07:21 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Tue, 20 Jan 2009 11:07:21 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <20090120155755.GP11140@ruber.office.udmvt.ru> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> Message-ID: <5d1a32000901200807x21b94817o614e7e4fbadd396b@mail.gmail.com> Yup, I tried your idea and it does work as I intended. It looks a little better than using takewhile, but not (to me anyway) as nice as my original suggestion. Still, if my idea is ultimately rejected (looks that way at the moment), this is a good alternative. On Tue, Jan 20, 2009 at 10:57 AM, Alexey G. Shpagin wrote: > On Tue, Jan 20, 2009 at 10:45:27AM -0500, Gerald Britton wrote: >> OK, so your suggestion: >> >> g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) >> >> really means "return in in the range 0-99 if n-squared is less than 50 >> or the function raiseStopIteration() returns True". >> >> How would this get the generator to stop once n*n >=50? It looks >> instead like the first time around, StopIteration will be raised and >> (presumably) the generator will terminate. > > Just test it. After the generator is terminated, no one will call > range(100).next() > method, if I really understand you. > > Maybe (as suggested before with 'if ... else break`) we should rename > function raiseStopIteration() to else_break(), > since it looks to me being a 'if ... else break's implementation with functions. 
> > Example will look like > g = (n for n in range(100) if n*n < 50 or else_break()) > > That's to the matter of taste, I think. > > -- > Alexey G. Shpagin > From sturla at molden.no Tue Jan 20 17:08:48 2009 From: sturla at molden.no (Sturla Molden) Date: Tue, 20 Jan 2009 17:08:48 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> Message-ID: <4975F710.9030404@molden.no> On 1/20/2009 4:45 PM, Gerald Britton wrote: > OK, so your suggestion: > > g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) > > really means "return in in the range 0-99 if n-squared is less than 50 > or the function raiseStopIteration() returns True". > > How would this get the generator to stop once n*n >=50? It looks > instead like the first time around, StopIteration will be raised and > (presumably) the generator will terminate. 
I still find it odd to invent new syntax for simple things like def quit(): raise StopIteration gen = itertools.imap( lambda x: x if x <= 50 else quit(), (i for i in range(100)) ) for i in gen: print i Sturla Molden From algorias at yahoo.com Tue Jan 20 17:32:24 2009 From: algorias at yahoo.com (Vitor Bosshard) Date: Tue, 20 Jan 2009 08:32:24 -0800 (PST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> Message-ID: <616836.15316.qm@web54408.mail.yahoo.com> ----- Mensaje original ---- > De: "python-3000 at udmvt.ru" > Para: Gerald Britton > CC: python-dev at python.org > Enviado: martes, 20 de enero, 2009 11:18:24 > Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions > > May I suggest you this variant? > > ??? def raiseStopIteration(): > ??? ??? raise StopIteration > > ??? g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) > > Well, there are more characters... > > But it is not using any syntax changes and does not require any approval > to be functional. Yet it is as fast as the proposed variant, does not require > modules and, I hope, will not confuse you or anyone else. > This works as a generator, but not as a list comprehension. The exception is propagated instead of just cutting short the loop: >>> def r(): raise StopIteration >>> print [i for i in range(10) if i**2 < 50 or r()] Traceback (most recent call last): ? File "", line 1, in ??? print [i for i in range(10) if i**2 < 50 or r()] ? File "", line 1, in r ??? def r(): raise StopIteration StopIteration >>> Vitor ?Todo sobre la Liga Mexicana de f?tbol! 
Estadisticas, resultados, calendario, fotos y m?s:< http://espanol.sports.yahoo.com/ From gerald.britton at gmail.com Tue Jan 20 17:40:07 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Tue, 20 Jan 2009 11:40:07 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <616836.15316.qm@web54408.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> Message-ID: <5d1a32000901200840o70b4173ao34ef28feb7802edb@mail.gmail.com> Right, but the PEP is only about generator expressions. On Tue, Jan 20, 2009 at 11:32 AM, Vitor Bosshard wrote: > ----- Mensaje original ---- >> De: "python-3000 at udmvt.ru" >> Para: Gerald Britton >> CC: python-dev at python.org >> Enviado: martes, 20 de enero, 2009 11:18:24 >> Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions >> >> May I suggest you this variant? >> >> def raiseStopIteration(): >> raise StopIteration >> >> g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) >> >> Well, there are more characters... >> >> But it is not using any syntax changes and does not require any approval >> to be functional. Yet it is as fast as the proposed variant, does not require >> modules and, I hope, will not confuse you or anyone else. >> > > This works as a generator, but not as a list comprehension. The exception is propagated instead of just cutting short the loop: > >>>> def r(): raise StopIteration >>>> print [i for i in range(10) if i**2 < 50 or r()] > Traceback (most recent call last): > File "", line 1, in > print [i for i in range(10) if i**2 < 50 or r()] > File "", line 1, in r > def r(): raise StopIteration > StopIteration >>>> > > > Vitor > > > ?Todo sobre la Liga Mexicana de f?tbol! 
Estadisticas, resultados, calendario, fotos y m?s:< > http://espanol.sports.yahoo.com/ > From fuzzyman at voidspace.org.uk Tue Jan 20 17:54:50 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Tue, 20 Jan 2009 16:54:50 +0000 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <87fxjegege.fsf@xemacs.org> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <4975A126.6070309@voidspace.org.uk> <4975C386.10901@egenix.com> <87fxjegege.fsf@xemacs.org> Message-ID: <497601DA.5090204@voidspace.org.uk> Stephen J. Turnbull wrote: > M.-A. Lemburg writes: > > On 2009-01-20 11:02, Michael Foord wrote: > > > > Mere collections of facts are not copyrightable as they are not > > > creative (the basis of copyright) > > That's incorrect in the U.S.; what is copyrightable is an *original > work of expression fixed in some medium*. "Original" is closely > related to "creative", but it's not the same. The emphasis is on > novelty, not on the intellectual power involved. So, for example, you > can copyright a set of paint splashes on paper, as long as they're > *new* paint splashes. > No but expression is more strongly related to creative. > The real issue here, however, is "expression". What's important is > whether there are different ways to say it. So you can indeed > copyright the phone book or a dictionary, which *does* protect such > things as unusual use of typefaces or color to aid understanding. > What you can't do is prevent someone from publishing another phone > book or dictionary based on the same facts, and since "put it in > alphabetical order" hasn't been an original form of expression since > Aristotle or so, they can alphabetize their phone book or dictionary, > and it is going to look a lot like yours. > > On the other hand, ABCs are not a "mere collection of facts". 
They are > subject to various forms of organization (top down, bottom up, > alphabetical order, etc), and that organization will in general be > copyrightable. Also, unless your ABCs are all independent of each > other, you will be making choices about when to derive and when to > define from scratch. That aspect of organization is expressive, and > once written down ("fixed in a medium") it is copyrightable. > As you say - mere ordering does not render something copyrightable. Phone books and maps deliberately insert fictitious data in order to be eligible for copyright under these terms. On the other hand I'm inclined to believe that there is enough original expression in the ABCs to be copyrightable. It's a basically irrelevant point though. :-) Michael > > > I recommend his book by the way - I'm about half way through so far and > > > it is highly readable > > Larry Rosen's book is also good. > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From solipsis at pitrou.net Tue Jan 20 17:56:06 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 20 Jan 2009 16:56:06 +0000 (UTC) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> Message-ID: Alexey G. Shpagin udmvt.ru> writes: > > Example will look like > g = (n for n in range(100) if n*n < 50 or else_break()) Please don't suggest any hack involving raising StopIteration as part of a conditional statement in a generator expression. It might work today, but it might as well break tomorrow as it's only a side-effect of the implementation, not an official property of the language. Regards Antoine. 
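[Editor's note: Antoine's caution proved warranted — PEP 479 (Python 3.7) later turned a StopIteration escaping a generator's frame into a RuntimeError, which kills the raise-inside-a-genexp trick outright. The supported way to cut a generator expression short, then and now, is itertools.takewhile:]

```python
from itertools import takewhile

# Equivalent to the PEP's proposed
#   (n for n in range(100) while n*n < 50)
# but using only documented, stable behaviour:
g = takewhile(lambda n: n * n < 50, range(100))
print(list(g))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Unlike the StopIteration hack, this also behaves identically whether the result is consumed lazily or wrapped in `list()`.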
From algorias at yahoo.com Tue Jan 20 18:00:14 2009 From: algorias at yahoo.com (Vitor Bosshard) Date: Tue, 20 Jan 2009 09:00:14 -0800 (PST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> <5d1a32000901200840o70b4173ao34ef28feb7802edb@mail.gmail.com> Message-ID: <592455.44583.qm@web54409.mail.yahoo.com> ----- Mensaje original ---- > De: Gerald Britton > Para: Vitor Bosshard > CC: python-3000 at udmvt.ru; python-dev at python.org > Enviado: martes, 20 de enero, 2009 13:40:07 > Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions > > Right, but the PEP is only about generator expressions. > Yes, but consistency with list comprehensions would be a nice thing to have, which is absent from both?the "or raise()"?idiom and the takewhile one (which is, by definition, a generator). The new syntax wouldn't have this issue. I'm not in favor of the change, just pointing this out. Vitor ?Todo sobre la Liga Mexicana de f?tbol! 
Estadisticas, resultados, calendario, fotos y m?s:< http://espanol.sports.yahoo.com/ From ironfroggy at gmail.com Tue Jan 20 18:34:43 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Tue, 20 Jan 2009 12:34:43 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <592455.44583.qm@web54409.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> <5d1a32000901200840o70b4173ao34ef28feb7802edb@mail.gmail.com> <592455.44583.qm@web54409.mail.yahoo.com> Message-ID: <76fd5acf0901200934g4d188c92l48f0c0ace18b1af4@mail.gmail.com> On Tue, Jan 20, 2009 at 12:00 PM, Vitor Bosshard wrote: > > > ----- Mensaje original ---- >> De: Gerald Britton >> Para: Vitor Bosshard >> CC: python-3000 at udmvt.ru; python-dev at python.org >> Enviado: martes, 20 de enero, 2009 13:40:07 >> Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions >> >> Right, but the PEP is only about generator expressions. >> > > Yes, but consistency with list comprehensions would be a nice thing to have, which is absent from both the "or raise()" idiom and the takewhile one (which is, by definition, a generator). The new syntax wouldn't have this issue. > > I'm not in favor of the change, just pointing this out. I saw this to, and do want to throw in my two cents that it should be consistent between them. We should not add something to one and not the other. If the PEP, even if its rejected, doesn't change to reflect that its suggestion is for both generator expressions and list comprehensions, I think it should be considered invalid from the start. We should never add syntax that makes list(<...>) != [<...>] (where <...> is my stupid expression placeholder). > Vitor > > > ?Todo sobre la Liga Mexicana de f?tbol! 
Estadisticas, resultados, calendario, fotos y m?s:< > http://espanol.sports.yahoo.com/ > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From gerald.britton at gmail.com Tue Jan 20 18:46:55 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Tue, 20 Jan 2009 12:46:55 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <616836.15316.qm@web54408.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> Message-ID: <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> I wonder if this is a bug? On Tue, Jan 20, 2009 at 11:32 AM, Vitor Bosshard wrote: > ----- Mensaje original ---- >> De: "python-3000 at udmvt.ru" >> Para: Gerald Britton >> CC: python-dev at python.org >> Enviado: martes, 20 de enero, 2009 11:18:24 >> Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions >> >> May I suggest you this variant? >> >> def raiseStopIteration(): >> raise StopIteration >> >> g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) >> >> Well, there are more characters... >> >> But it is not using any syntax changes and does not require any approval >> to be functional. Yet it is as fast as the proposed variant, does not require >> modules and, I hope, will not confuse you or anyone else. >> > > This works as a generator, but not as a list comprehension. 
The exception is propagated instead of just cutting short the loop: > >>>> def r(): raise StopIteration >>>> print [i for i in range(10) if i**2 < 50 or r()] > Traceback (most recent call last): > File "", line 1, in > print [i for i in range(10) if i**2 < 50 or r()] > File "", line 1, in r > def r(): raise StopIteration > StopIteration >>>> > > > Vitor > > > ?Todo sobre la Liga Mexicana de f?tbol! Estadisticas, resultados, calendario, fotos y m?s:< > http://espanol.sports.yahoo.com/ > From ironfroggy at gmail.com Tue Jan 20 18:54:03 2009 From: ironfroggy at gmail.com (Calvin Spealman) Date: Tue, 20 Jan 2009 12:54:03 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> Message-ID: <76fd5acf0901200954p4d2b812ex1a9fa753c762164@mail.gmail.com> On Tue, Jan 20, 2009 at 12:46 PM, Gerald Britton wrote: > I wonder if this is a bug? I don't think so, but its interesting nonetheless. passing a generator expression to list() involves two loops: the list construction and the generator expression. So, a StopIteration from whatever the GE is iterating over is caught by the GE mechanics, and anything else in the clauses can be caught by the list constructor. If the same thing is done in a LC, such an exception from the clause has nothing to catch it. It is not raised as part of iterating over something. I don't think we'd want to just start swallowing errors here, as it would change defined behavior. 
> On Tue, Jan 20, 2009 at 11:32 AM, Vitor Bosshard wrote: >> ----- Mensaje original ---- >>> De: "python-3000 at udmvt.ru" >>> Para: Gerald Britton >>> CC: python-dev at python.org >>> Enviado: martes, 20 de enero, 2009 11:18:24 >>> Asunto: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions >>> >>> May I suggest you this variant? >>> >>> def raiseStopIteration(): >>> raise StopIteration >>> >>> g = (n for n in range(100) if n*n < 50 or raiseStopIteration()) >>> >>> Well, there are more characters... >>> >>> But it is not using any syntax changes and does not require any approval >>> to be functional. Yet it is as fast as the proposed variant, does not require >>> modules and, I hope, will not confuse you or anyone else. >>> >> >> This works as a generator, but not as a list comprehension. The exception is propagated instead of just cutting short the loop: >> >>>>> def r(): raise StopIteration >>>>> print [i for i in range(10) if i**2 < 50 or r()] >> Traceback (most recent call last): >> File "", line 1, in >> print [i for i in range(10) if i**2 < 50 or r()] >> File "", line 1, in r >> def r(): raise StopIteration >> StopIteration >>>>> >> >> >> Vitor >> >> >> ?Todo sobre la Liga Mexicana de f?tbol! Estadisticas, resultados, calendario, fotos y m?s:< >> http://espanol.sports.yahoo.com/ >> > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com > -- Read my blog! I depend on your acceptance of my opinion! I am interesting! 
http://techblog.ironfroggy.com/ Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy From rdmurray at bitdance.com Tue Jan 20 19:08:00 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Tue, 20 Jan 2009 13:08:00 -0500 (EST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> Message-ID: On Tue, 20 Jan 2009 at 16:56, Antoine Pitrou wrote: > Alexey G. Shpagin udmvt.ru> writes: >> >> Example will look like >> g = (n for n in range(100) if n*n < 50 or else_break()) > > Please don't suggest any hack involving raising StopIteration as part of a > conditional statement in a generator expression. It might work today, but it > might as well break tomorrow as it's only a side-effect of the implementation, > not an official property of the language. Doing the above is, by definition, no different from raising StopIteration in a for loop inside a generator function. The language reference does document the raising of a StopIteration as signaling the exhaustion of the generator. In addition, the 3.0 docs (but, oddly, not the 2.6 docs) say in the 'for' loop documentation: "When the items are exhausted (which is immediately when the list is empty or an iterator raises a StopIteration exception)"). The difference in behavior between raising StopIteration in a list comprehension versus a generator expression are consistent with the above, by the way. 
If you raise StopIteration in a function whose definition is the same as the list comprehension but you are building the list as you go and only returning it when it is complete, then the StopIteration would propagate upward with no values returned (ie: in a for loop it would look like an empty list). I don't know about other people, but I have certainly assumed that raising StopIteration was a legitimate way to terminate an iterator, and have written code accordingly. If this is not true, it should probably be explicitly documented in the language reference somewhere. --RDM From guido at python.org Tue Jan 20 20:24:39 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 20 Jan 2009 11:24:39 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Sun, Jan 18, 2009 at 11:49 PM, Adam Olsen wrote: > On Sun, Jan 18, 2009 at 9:32 PM, Guido van Rossum wrote: >> On Sun, Jan 18, 2009 at 5:38 PM, Gregory P. Smith wrote: >>> +1 on getting rid of the IOBase __del__ in the C rewrite in favor of >>> tp_dealloc. >>> >>> On Sun, Jan 18, 2009 at 11:53 PM, Christian Heimes wrote: >>>> >>>> Brett Cannon schrieb: >>>> > Fine by me. People should be using the context manager for guaranteed >>>> > file closure anyway IMO. >>> >>> Yes they should. (how I really really wish i didn't have to use 2.4 anymore >>> ;) >> >> Come on, the open-try-use-finally-close idiom isn't *that* bad... >> >>> But lets at least be clear that is never acceptable for a python >>> implementation to leak file descriptors/handles (or other system resources), >>> they should be closed and released whenever the particular GC implementation >>> gets around to it. >> >> I would like to make a stronger promise. I think that for files open >> for *writing*, all data written to the file should be flushed to disk >> before the fd is closed. 
This is the real reason for having the >> __del__: closing the fd is done by the C implementation of FileIO, but >> since (until the rewrite in C) the buffer management is all in Python >> (both the binary I/O buffer and the additional text I/O buffer), I >> felt the downside of having a __del__ method was preferable over the >> possibility of output files not being flushed (which is always >> nightmarish to debug). >> >> Of course, once both layers of buffering are implemented in C, the >> need for __del__ to do this goes away, and I would be fine with doing >> it all in tp_alloc. >> >>>> You make a very good point! Perhaps we should stop promising that files >>>> get closed as soon as possible and encourage people in using the with >>>> statement. >>>> >>>> Christian >>> >>> eegads, do we actually -promise- that somewhere? If so I'll happily go >>> update those docs with a caveat. >> >> I don't think we've promised that ever since the days when JPython >> (with a P!) was young... >> >>> I regularly point out in code reviews that the very convenient and common >>> idiom of open(name, 'w').write(data) doesn't guarantee when the file will be >>> closed; its up to the GC implementation details. Good code should never >>> depend on the GC for a timely release of scarce external resources (file >>> descriptors/handles). >> >> And buffer flushing. While I don't want to guarantee that the buffer >> is flushed ASAP, I do want to continue promising that it is flushed >> before the object is GC'ed and before the fd is closed. > > Could we add a warning if the file has not been explicitly flushed? > Consider removing the implicit flush later, if there's a sufficient > implementation benefit to it. No, I really want to keep the implicit flush, even if it's hard for the implementation. 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Tue Jan 20 21:38:05 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Jan 2009 06:38:05 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <5d1a32000901200624t243fc943m404a887aef6d0853@mail.gmail.com> <20090120153840.GO11140@ruber.office.udmvt.ru> <5d1a32000901200745i4d925e9alaa9daa7472d59198@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> Message-ID: <4976362D.5040803@gmail.com> Antoine Pitrou wrote: > Alexey G. Shpagin udmvt.ru> writes: >> Example will look like >> g = (n for n in range(100) if n*n < 50 or else_break()) > > Please don't suggest any hack involving raising StopIteration as part of a > conditional statement in a generator expression. It might work today, but it > might as well break tomorrow as it's only a side-effect of the implementation, > not an official property of the language. As RDM noted, it actually is documented behaviour due to the equivalence between generator expressions and the corresponding generator functions. Writing a separate generator function is typically going to be cleaner and more readable though. Cheers, Nick. P.S. Here's another cute hack for terminating an iterator early: >>> list(iter((n for n in range(10)).next, 5)) [0, 1, 2, 3, 4] (it's nowhere near as flexible as itertools.takewhile, of course) Cheers, Nick. 
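The two spellings Nick compares can be put side by side; this sketch uses `next(gen)` as the callable so it also runs on Python 3, where the `.next` method from his example became `__next__`:

```python
from itertools import takewhile

# iter(callable, sentinel) calls the callable repeatedly and stops
# as soon as it returns the sentinel value (here: 5).
gen = (n for n in range(10))
print(list(iter(lambda: next(gen), 5)))  # [0, 1, 2, 3, 4]

# takewhile is the more flexible spelling: it accepts an arbitrary
# predicate rather than a single sentinel value.
print(list(takewhile(lambda n: n != 5, range(10))))  # [0, 1, 2, 3, 4]
```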
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Tue Jan 20 21:42:36 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Jan 2009 06:42:36 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> Message-ID: <4976373C.9040808@gmail.com> Gerald Britton wrote: > I wonder if this is a bug? Nope, it's part of the defined behaviour. Avoiding the overhead of the GE machinery is actually the main advantage in using a comprehension over the equivalent generator expression. Deliberately raising StopIteration is about the only way to notice the small semantics difference (in Py3k anyway - in 2.x there are scoping differences as well). Cheers, Nick. 
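For the record, the hack discussed here did eventually break: PEP 479 (opt-in in Python 3.5, the default since 3.7) makes a StopIteration that escapes a generator frame resurface as RuntimeError. On a modern interpreter the semantic difference between the comprehension and the generator expression therefore shows up as two different exceptions:

```python
def stop():
    raise StopIteration

# In a list comprehension, the raised StopIteration propagates as-is:
try:
    [n for n in range(10) if n < 3 or stop()]
except StopIteration:
    print("list comp: StopIteration escaped")

# In a generator expression on Python 3.7+, PEP 479 converts it to a
# RuntimeError instead of quietly ending the sequence:
try:
    list(n for n in range(10) if n < 3 or stop())
except RuntimeError:
    print("genexp: RuntimeError")
```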
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Tue Jan 20 22:01:27 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 21 Jan 2009 07:01:27 +1000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> Message-ID: <49763BA7.6070802@gmail.com> Luke Kenneth Casson Leighton wrote: > i'd say "great" - but given a choice of "impressive profile guided > optimisation plus a proprietary compiler, proprietary operating system > _and_ being forced to purchase a system _capable_ of running said > proprietary compiler, said proprietary operating system, _and_ giving > up free software principles _and_ having to go through patch-pain, > install-pain _and_ being forced to use a GUI-based IDE for > compilation" or "free software tools and downloads the use of which > means i am beholden to NOONE", it's a simple choice for me to make - > maybe not for other people. It only becomes a problem when someone wants to both support Windows users of their extension modules with pre-built binaries, but *also* doesn't want to set up the appropriate environment for building such binaries (currently a minimum bar of Visual Studio Express on a Windows VM instance). The most common reaction I've seen to this problem from package developers is "I don't run Windows, so if users want pre-built binaries, someone with a Windows environment is going to have to volunteer to provide them". And that seems like a perfectly reasonable way to handle the situation to me. On POSIX systems, GCC does a great job, on Windows, MSVC is better (from a performance point of view). The closed source vs open source, free vs non-free philosophical arguments don't really play a significant part in the decision. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From martin at v.loewis.de Tue Jan 20 22:19:02 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 20 Jan 2009 22:19:02 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> Message-ID: <49763FC6.9090303@v.loewis.de> >> That's a non-starter for anyone who incorporates Python in an existing >> MSVC-based development environment. > > surely incorporating libpython2.5.dll.a or libpython2.6.dll.a, along > with the .def and the importlib that's generated with dlldump, unless > i'm missing something, would be a simple matter, yes? Not exactly sure what this is, but I believe Python *already* includes such a thing. Regards, Martin From dickinsm at gmail.com Tue Jan 20 23:05:29 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Tue, 20 Jan 2009 22:05:29 +0000 Subject: [Python-Dev] Strategies for debugging buildbot failures? In-Reply-To: <1afaf6160901181626j4d92eb31v1ad5400e1a37460b@mail.gmail.com> References: <5c6f2a5d0901181003j96d32a5x35db64bc710d3405@mail.gmail.com> <18803.51331.633341.223823@montanaro.dyndns.org> <1afaf6160901181626j4d92eb31v1ad5400e1a37460b@mail.gmail.com> Message-ID: <5c6f2a5d0901201405v5b893e66qa13ca1dabfdf84d6@mail.gmail.com> Thanks for all the feedback. [Michael Foord] > At Resolver Systems we regularly extend the test framework purely > to provide more diagnostic information in the event of test failures. > We do a lot of functional testing through the UI, which is particularly > prone to intermittent and hard to diagnose failures. Seems like a sound approach in general. It seems awkward to apply the method to this particular failure, though. I guess one would need extra code in regrtest.py to catch the invalid signal, for a start. 
[Martin v. Löwis] > Buildbot also supports submission of patches directly to the > slaves. This is currently not activated, and clearly requires > some authentication/authorization; if you want to use that, > I'd be happy to experiment with setting it up, though. > [...] > In the past, for the really difficult problems, we arranged to > have the developers get access to the buildbot slaves. Thanks, Martin. I think I've pretty much run out of time to pursue this particular problem for the moment; I may return to it later. It's good to know that these options are available, though. Mark From steve at pearwood.info Tue Jan 20 23:55:59 2009 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 21 Jan 2009 09:55:59 +1100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> Message-ID: <200901210955.59699.steve@pearwood.info> On Wed, 21 Jan 2009 03:56:06 am Antoine Pitrou wrote: > Alexey G. Shpagin udmvt.ru> writes: > > Example will look like > > g = (n for n in range(100) if n*n < 50 or else_break()) > > Please don't suggest any hack involving raising StopIteration as part > of a conditional statement in a generator expression. It might work > today, but it might as well break tomorrow as it's only a side-effect > of the implementation, not an official property of the language. If that's the case, then that is a point in favour of the PEP. Personally, I find the proposed syntax change very readable and intuitive. Some have argued that it makes Python harder to learn because it adds one more thing to learn, but that's a trivial objection. It's an extension to the existing syntax, but an obvious one. 
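The proposed spelling would look roughly like the following (hypothetical syntax — PEP 3142's `while` clause never became part of the language), with `itertools.takewhile` as the existing equivalent:

```python
from itertools import takewhile

# PEP 3142's proposed spelling (NOT valid Python, shown as a comment):
#     g = (n for n in range(100) while n*n < 50)

# Today's equivalent using itertools.takewhile: yield values while the
# predicate holds, then stop for good.
g = takewhile(lambda n: n * n < 50, range(100))
print(list(g))  # [0, 1, 2, 3, 4, 5, 6, 7]
```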
The difficulty in becoming proficient in a language is not learning the syntax, but in becoming experienced with the libraries, and in that regard the PEP is a win because it simplifies the itertools module by removing takewhile (which unfortunately I find neither readable nor intuitive). Another argument against the PEP was that it breaks the correspondence between the generator expression and the equivalent for-loop. I had never even noticed such correspondence before, because to my eyes the most important term is the yielded expression, not the scaffolding around it. In a generator expression, we have: yielded-expr for-clause if-clause while the corresponding nested statements are: for-clause if-clause yielded-expr The three clauses are neither in the same order, nor are they in reverse order. I don't know how important that correspondence is to language implementers, but as a Python programmer, I'd gladly give up that correspondence (which I don't find that great) in order to simplify exiting a generator expression early. So I like the proposed change. I find it elegant and very Pythonic. +1 for me. -- Steven D'Aprano From tjreedy at udel.edu Wed Jan 21 00:09:23 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Jan 2009 18:09:23 -0500 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <497596B1.4060600@egenix.com> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> Message-ID: M.-A. Lemburg wrote: > On 2009-01-20 00:56, Raymond Hettinger wrote: >> Why does numbers.py say: >> >> # Copyright 2007 Google, Inc. All Rights Reserved. >> # Licensed to PSF under a Contributor Agreement. > > Because that's where that file originated, I guess. 
This is part > of what you have to do for things that are licensed to the PSF > under a contributor agreement: > > http://www.python.org/psf/contrib/contrib-form/ > > """ > Contributor shall identify each Contribution by placing the following notice in > its source code adjacent to Contributor's valid copyright notice: "Licensed to > PSF under a Contributor Agreement." > """ > >> Weren't there multiple contributors including non-google people? > > The initial contribution was done by Google (Jeffrey Yasskin > AFAIK) and that's where the above lines originated from. Thank you for the explanation, here and below, as far as it goes. But what about the copyrightable and therefore copyrighted contributions of others? Does Google (in this case) get an automatic transfer of copyright to Google? A single copyright notice seems to imply that. In the case of minor edits of the original work, perhaps yes. When, for instance, I send an author notice of a typo or a suggested rephrasing of a sentence, I consider that a donation to the author. In the case of new work, added to the file by PSF so that the file becomes a compilation or anthology of the work of several people, I should think not. If there is any copyright notice, then perhaps there should be several -- one for each 'major' (new section) contributor and one for the PSF for the compilation. I have occasionally seen such things in printed works. >> Does Google want to be associated with code that >> was submitted with no tests? > > Only Google can comment on this. > >> Do we want this sort of stuff in the code? > > Yes, it is required by the contrib forms. Then it seems to me that there should/could be a notice for each major contributor of independent and separately copyrightable sections. >> If someone signs a contributor agreement, can we >> forgo the external copyright comments? > > No. See above. Only the copyright owner can remove such > notices. 
> >> Do we want to make a practice of every contributor >> commenting in the name of the company they were >> working for at the time (if so, I would have to add >> the comment to a lot of modules)? > > That depends on the contract a contributor has with the > company that funded the work. It's quite common for such > contracts to include a clause stating that all IP generated > during work time is owned by the employer. > >> Does the copyright concept even apply to an >> abstract base class (I thought APIs were not >> subject to copyright, just like database layouts >> and language definitions)? > > It applies to the written program text. You are probably > thinking about other IP rights such as patents or designs. Bottom line to me. The current notion of copyright does not work too well with evolving, loosely collective works (which eventually become 'folklore'). Terry Jan Reedy From mal at egenix.com Wed Jan 21 00:15:27 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Wed, 21 Jan 2009 00:15:27 +0100 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <87fxjegege.fsf@xemacs.org> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <4975A126.6070309@voidspace.org.uk> <4975C386.10901@egenix.com> <87fxjegege.fsf@xemacs.org> Message-ID: <49765B0F.7080300@egenix.com> On 2009-01-20 16:54, Stephen J. Turnbull wrote: > M.-A. Lemburg writes: > > On 2009-01-20 11:02, Michael Foord wrote: > > > > Mere collections of facts are not copyrightable as they are not > > > creative (the basis of copyright) > > That's incorrect in the U.S.; what is copyrightable is an *original > work of expression fixed in some medium*. "Original" is closely > related to "creative", but it's not the same. The emphasis is on > novelty, not on the intellectual power involved. So, for example, you > can copyright a set of paint splashes on paper, as long as they're > *new* paint splashes. > > The real issue here, however, is "expression". 
What's important is > whether there are different ways to say it. So you can indeed > copyright the phone book or a dictionary, which *does* protect such > things as unusual use of typefaces or color to aid understanding. > What you can't do is prevent someone from publishing another phone > book or dictionary based on the same facts, and since "put it in > alphabetical order" hasn't been an original form of expression since > Aristotle or so, they can alphabetize their phone book or dictionary, > and it is going to look a lot like yours. The above argument is what makes copyright so complicated. Computer software has been given the same status as a piece of literary work, so all conventions for such works apply. However, this doesn't necessarily mean that all computer software is copyrightable per-se. The key problem is defining the threshold of originality needed for a work to become copyrightable at all and that's where different jurisdictions use different definitions or guidelines based on case law. http://en.wikipedia.org/wiki/Threshold_of_originality E.g. in Germany it is common not to grant copyright on logos that are used as trademarks. OTOH, use of a logo in the trademark sense automatically makes it a trademark (even without registration). > On the other hand, ABCs are not a "mere collection of facts". They are > subject to various forms of organization (top down, bottom up, > alphabetical order, etc), and that organization will in general be > copyrightable. Also, unless your ABCs are all independent of each > other, you will be making choices about when to derive and when to > define from scratch. That aspect of organization is expressive, and > once written down ("fixed in a medium") it is copyrightable. > > > > I recommend his book by the way - I'm about half way through so far and > > > it is highly readable > > Larry Rosen's book is also good. 
-- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 20 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From tjreedy at udel.edu Wed Jan 21 00:27:58 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Jan 2009 18:27:58 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120141824.GN11140@ruber.office.udmvt.ru> <616836.15316.qm@web54408.mail.yahoo.com> <5d1a32000901200946q306ed9e2i385443d585171b77@mail.gmail.com> Message-ID: Gerald Britton wrote: > I wonder if this is a bug? It is a known glitch reported last summer. Devs decided not to fix because doing so would, in the patches tried, slow list comps significantly. Also, the documented intent and expected usage of StopIteration is this "exception StopIteration Raised by builtin next() and an iterator's __next__() method to signal that there are no further values." The second clause includes usage in the body of a generator function since that body becomes the __next__ method of the generator-iterator produced by calling the generator function. 
The meaning of any other usage, such as in the body of a standard function other than next(),(as in the example producing the glitch), is undefined and leads to undefined behavior, which could be different in other implementations and change in future implementations. Terry Jan Reedy From python at rcn.com Wed Jan 21 00:36:44 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 20 Jan 2009 15:36:44 -0800 Subject: [Python-Dev] Copyright notices in modules References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1><497596B1.4060600@egenix.com> Message-ID: <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> [Terry Reedy] > Bottom line to me. The current notion of copyright does not work too > well with evolving, loosely collective works (which eventually become > 'folklore'). I'm at a loss of why the notice needs to be there at all. AFAICT, we've had tons of contributions from googlers and only one has put a Google copyright notice in the source. Raymond From benjamin at python.org Wed Jan 21 01:00:10 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 20 Jan 2009 18:00:10 -0600 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: <1afaf6160901201600r56c0e3k5c86ca0e34cf0a72@mail.gmail.com> On Tue, Jan 20, 2009 at 5:36 PM, Raymond Hettinger wrote: > [Terry Reedy] >> >> Bottom line to me. The current notion of copyright does not work too well >> with evolving, loosely collective works (which eventually become >> 'folklore'). > > I'm at a loss of why the notice needs to be there at all. AFAICT, we've > had tons of contributions from googlers and only one has put a Google > copyright notice in the source. Oh? Grepping through the source shows no less than 30 copyright notices from Google. 
-- Regards, Benjamin From tjreedy at udel.edu Wed Jan 21 01:04:06 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Jan 2009 19:04:06 -0500 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: Message-ID: Luke Kenneth Casson Leighton wrote: > this is a fairly important issue for python development > interoperability - martin mentioned that releases of mingw-compiled > python, if done with a non-interoperable version of msvcrt, would > cause much mayhem. > well, compiling python on mingw with msvcr80 _can_ be done; using it > can also be a simple matter of creating a python.exe.manifest file, > but i can't actually do any testing because it doesn't work under > wine. > so, pending any further advice and guidance from anyone which allows > me to successfully proceed, i'm not going to continue to compile - or > release - python2.5 *or* python2.6 builds (when i get round to that) > using msvcr80 or msvcr9X. > one issue in favour of this decision is that the DLL that's produced > by the autoconf build process is "libpython2.5.dll.a" - not > "python2.5.dll". it has a different name. it should be abundantly > clear to users and developers that "if name equals libpython2.5.dll.a > then duh build equals different". additionally, the setup.py > distutils all goes swimmingly well and lovely - using > libpython2.5.dll.a. > the only issue which _is_ going to throw a spanner in the works is > that people who download win32-built precompiled c-based modules are > going to find that hey, "it don't work!" and the answer will have to > be "go get a version of that module, compiled with mingw, not MSVC". > > of course - if python for win32 ENTIRELY DROPPED msvc as a development > platform, and went for an entirely free software development > toolchain, then this problem goes away. > > thoughts, anyone? 
As I understand the above, you listed or implied 3 paths other than you completely giving up, which you are not ready to do yet. 1. You release non-interoperable binary, with a modified name to alleviate, but not prevent confusion. 2. You get some sort of help from someone to release an interoperable binary. 3. The devs drop msvc (wink missing ;-). Not surprisingly to me, people on pydev followed herring #3 to explain why not. If you want responses to path 2, a post leaving out 3 and giving more detail might be more successful. All I could do is unzip stuff into a temp directory and run the test suite on my XP machine. Terry Jan Reedy From guido at python.org Wed Jan 21 01:45:36 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 20 Jan 2009 16:45:36 -0800 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: 2009/1/20 Raymond Hettinger : > I'm at a loss of why the notice needs to be there at all. There's a difference between contributing a whole file and contributing a patch. Patches do not require copyright notices. Whole files do. This is not affected by later edits to the file. > AFAICT, we've > had tons of contributions from googlers and only one has put a Google copyright notice in the source. I count 28 .py files with a Google copyright and 127 with other copyrights (not counting the 47 PSF copyrights :-). Why are you picking on Google? 
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Wed Jan 21 02:20:44 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 20 Jan 2009 17:20:44 -0800 Subject: [Python-Dev] Copyright notices in modules References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: [Raymond Hettinger] >> I'm at a loss of why the notice needs to be there at all. [GvR] > There's a difference between contributing a whole file and > contributing a patch. Patches do not require copyright notices. Whole > files do. This is not affected by later edits to the file. That makes sense. In general though, I think if a contributor isn't required by their company to add a copyright, then this sort of thing should be left out of the source code. Most of the contributors here don't seem to copyright-up everything they do (with the exception of big packages or externally maintained resources). If everyone making a significant contribution has a contributor agreement on file, perhaps we can build a list of those in a single file rather than scattering notices throughout the code. I don't see that those benefit anyone (maintainers, the original contributor, or the contributor's company). At least these notices are somewhat innocuous. The ones that were the most irritating are the ones requiring a literal copy of the notice to be placed in the docs. A while back, I spent a day getting us in compliance with those. FWIW, I'm not picking on anyone. I would just like to see a practice emerge where these stop getting added and perhaps start getting removed unless they are actually necessary for some reason (i.e. a company requires it). AFAICT, little notices like the one atop numbers.py don't confer property rights to anyone. The original purpose of a copyright notice has been lost. 
It has become useless boilerplate, a toothless warning sign about an unclaimable property claim on donated code. Raymond P.S. It seems silly that the copyright on PEP 3141 says, "this document has been placed in the public domain" but the code itself has a company copyright. The former seems like something someone would care more about as a creative expression or original research. From guido at python.org Wed Jan 21 03:03:18 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 20 Jan 2009 18:03:18 -0800 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: I would be all for cleaning up, if the lawyers agree, but I've spent enough time talking to lawyers for the rest of my life. You know where to reach Van Lindberg. On Tue, Jan 20, 2009 at 5:20 PM, Raymond Hettinger wrote: > > [Raymond Hettinger] >>> >>> I'm at a loss of why the notice needs to be there at all. > > [GvR] >> >> There's a difference between contributing a whole file and >> contributing a patch. Patches do not require copyright notices. Whole >> files do. This is not affected by later edits to the file. > > That makes sense. In general though, I think if a contributor isn't > required by their company to add a copyright, then this sort of thing > should be left out of the source code. Most of the contributors here > don't seem to copyright-up everything they do (with the exception > of big packages or externally maintained resources). > > If everyone making a significant contribution has a contributor agreement > on file, perhaps we can build a list of those in a single file rather than > scattering notices throughout the code. I don't see that those benefit > anyone (maintainers, the original contributor, or the contributor's > company). > > At least these notices are somewhat innocuous. 
The ones that were > the most irritating are the ones requiring a literal copy of the notice > to be placed in the docs. A while back, I spent a day getting us in > compliance with those. > > FWIW, I'm not picking on anyone. I would just like to see a practice > emerge where these stop getting added and perhaps start getting removed > unless they are actually necessary for some reason (i.e. a company requires > it). > > AFAICT, little notices like the one atop numbers.py don't confer property > rights to anyone. The original purpose of a copyright notice has been lost. > It has become useless boilerplate, a toothless warning sign about a > unclaimable > property claim on donated code. > > > Raymond > > > P.S. It seems silly that the copyright on PEP3141 says, "this document has > been placed in the public domain" but the code itself has a company > copyright. > The former seems like something someone would care more about as a > creative expression or original research. > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Wed Jan 21 05:10:24 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Tue, 20 Jan 2009 23:10:24 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <200901210955.59699.steve@pearwood.info> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <20090120155755.GP11140@ruber.office.udmvt.ru> <200901210955.59699.steve@pearwood.info> Message-ID: Steven D'Aprano wrote: > Another argument against the PEP was that it breaks the correspondence > between the generator expression and the equivalent for-loop. I had > never even noticed such correspondence before, because to my eyes the > most important term is the yielded expression, not the scaffolding > around it. This was a major reason to add comprehensions. Your not noticing a primary design principle is hardly a reason to abandon it. Indeed, I claim that your ignorance shows its validity ;-). 
> In a generator expression, we have: > > yielded-expr for-clause if-clause > > while the corresponding nested statements are: > > for-clause if-clause yielded-expr > > The three clauses are neither in the same order, nor are they in reverse > order. They are in the same order but rotated, with the last brought around to the front to emphasize it. Did you really not notice that either? >I don't know how important that correspondence is to language > implementers, but as a Python programmer, I'd gladly give up that > correspondence (which I don't find that great) in order to simplify > exiting a generator expression early. > > So I like the proposed change. I find it elegant and very Pythonic. +1 > for me. Ironically, in a thread cross-posted on c.l.p and elsewhere, someone just labeled Python's comprehension syntax as "ad hoc syntax soup". That currently is completely wrong. It is a carefully designed 1 to 1 transformation between multiple nested statements and a single expression. But this proposal ignores and breaks that. Using 'while x' to mean 'if x: break' *is*, to me, 'ad hoc'. So I detest the proposed change. I find it ugly and anti-Pythonic. Terry Jan Reedy From tjreedy at udel.edu Wed Jan 21 08:01:40 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 21 Jan 2009 02:01:40 -0500 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: Guido van Rossum wrote: > 2009/1/20 Raymond Hettinger : >> I'm at a loss of why the notice needs to be there at all. > > There's a difference between contributing a whole file and > contributing a patch. Patches do not require copyright notices. Whole > files do. This is not affected by later edits to the file. In my comment, I postulated the situation where the patch consisted of merging in another, independently copyrighted, 'whole file'. 
Perhaps this has mostly been a non-existent situation and therefore moot. One real situation I was thinking of, unconnected to Google as far as I am aware, is the case of two third-party IP6 modules and the suggestion that they be merged into one stdlib module. If that were accomplished by committing one and merging the other in a patch, it would be unfair (and untrue) to have just one copyright notice. Of course, in this case, I hope the two authors work everything out between themselves first before any submission. I completely understand about strongly preferring programming to lawyer consultation ;-). tjr From kristjan at ccpgames.com Wed Jan 21 11:30:10 2009 From: kristjan at ccpgames.com (=?iso-8859-1?Q?Kristj=E1n_Valur_J=F3nsson?=) Date: Wed, 21 Jan 2009 10:30:10 +0000 Subject: [Python-Dev] Issue 4448 Message-ID: <930F189C8A437347B80DF2C156F7EC7F04DACA726B@exchis.ccp.ad.local> Hello there. I recently reactivated http://bugs.python.org/issue4448 because of the need to port http://bugs.python.org/issue4879 to 3.1. This isn't a straightforward port because of the changes in the IO library. I'd appreciate if someone could shed some light on the comment in line 268 in Lib/http/client.py. See my last comment in the issue for details. Thanks, Kristján From steve at pearwood.info Wed Jan 21 11:46:46 2009 From: steve at pearwood.info (Steven D'Aprano) Date: Wed, 21 Jan 2009 21:46:46 +1100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> Message-ID: <200901212146.46821.steve@pearwood.info> On Wed, 21 Jan 2009 03:10:24 pm Terry Reedy wrote: > Steven D'Aprano wrote: ... 
> > In a generator expression, we have: > > > > yielded-expr for-clause if-clause > > > > while the corresponding nested statements are: > > > > for-clause if-clause yielded-expr > > > > The three clauses are neither in the same order, nor are they in > > reverse order. > > They are in the same order but rotated, with the last brought around > to the front to emphasize it. Did you really not notice that either? There are only three items, of course I noticed that there is *some* rearrangement of the first that leads to the second. Out of the six possible permutations of three items, they can all be described in terms of some sort of reflection, rotation or swap. > > I don't know how important that correspondence is to language > > implementers, but as a Python programmer, I'd gladly give up that > > correspondence (which I don't find that great) in order to simplify > > exiting a generator expression early. > > > > So I like the proposed change. I find it elegant and very Pythonic. > > +1 for me. > > Ironically, in a thread cross-posted on c.l.p and elsewhere, someone > just labeled Python's comprehension syntax as "ad hoc syntax soup". Is that Xah Lee? It sounds like the sort of thing he'd say. > That currently is completely wrong. It certainly is wrong. List comps and generator expressions are very elegant, at least to English speakers with a maths background (I personally "got" list comps once I recognised the correspondence to mathematical set notation. I assumed that was deliberate). > It is a carefully designed 1 to > 1 transformation between multiple nested statements and a single > expression. I'm sure that correspondence is obvious to some, but it wasn't obvious to me, and I don't suppose I'm the only one. That's not a criticism of the current syntax. Far from it -- the current syntax is excellent, regardless of whether or not you notice that it corresponds to an if-loop nested inside a for-loop with the contents rotated outside.
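[Editor's note: the rotation being discussed can be spelled out in code. This is a minimal sketch with invented sample data, showing the standard expansion of a two-clause generator expression into nested statements.]

```python
# The clauses of a generator expression keep their order when expanded
# into nested statements; only the yielded expression rotates from the
# front of the expression to the innermost position.
data = range(10)  # sample input, invented for illustration

genexp = (x * x for x in data if x % 2 == 0)

def expanded():
    for x in data:          # for-clause first, in the same order...
        if x % 2 == 0:      # ...then the if-clause, in the same order...
            yield x * x     # ...with the yielded expression rotated to the end

assert list(genexp) == list(expanded())  # both give [0, 4, 16, 36, 64]
```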
> But this proposal ignores and breaks that. Using 'while > x' to mean 'if x: break' *is*, to me, 'ad hoc'. But it doesn't mean that. The proposed "while x" has very similar semantics to the "while x" in a while-loop: break when *not* x. > So I detest the proposed change. I find it ugly and anti-Pythonic. To each their own. I find it an elegant extension to the existing syntax. -- Steven D'Aprano From lkcl at lkcl.net Wed Jan 21 12:08:23 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 11:08:23 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49763BA7.6070802@gmail.com> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763BA7.6070802@gmail.com> Message-ID: > It only becomes a problem when someone wants to both support Windows > users of their extension modules with pre-built binaries, but *also* > doesn't want to set up the appropriate environment for building such > binaries (currently a minimum bar of Visual Studio Express on a Windows > VM instance). ok - fortunately, thanks to dan kegel for pointing me in the right direction of "winetricks vcrun2005p1" i was able to get a successful build using Microsoft.VC8.CRT assemblies. i say "successful" because Parser/pgen.exe was built and ran, and libpython2.5.dll.a was also successfully built, as was python.exe successfully built. the problem _now_ to overcome is that the bloody libmsvcrt80.a has the wrong definitions, for a 32-bit build! it has functions like _fstat instead of _fstat32 and so on. if this was a 64-bit version of wine i was using mingw32 under, i would not have encountered this issue. amazingly, however, someone _else_ who kindly tried out compiling python2.5 with mingw and msvcr80, native on win32, reported that it was a complete success! as in, "successful build, successful install, successful run of tests, only 4 failed regression tests". 
i am utterly mystified as to how that happened. next task: beat the crap out of libmsvcr80.a and /mingw/include/*.h, repeat until success. l. From lkcl at lkcl.net Wed Jan 21 12:10:38 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 11:10:38 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49763FC6.9090303@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> Message-ID: On Tue, Jan 20, 2009 at 9:19 PM, "Martin v. Löwis" wrote: >>> That's a non-starter for anyone who incorporates Python in an existing >>> MSVC-based development environment. >> >> surely incorporating libpython2.5.dll.a or libpython2.6.dll.a, along >> with the .def and the importlib that's generated with dlldump, unless >> i'm missing something, would be a simple matter, yes? > > Not exactly sure what this is, but I believe Python *already* includes > such a thing. sorry, martin - i thought the win32 builds generated python25.lib, python25.dll and python25.def so as to fit into the 8.3 filename convention. From lkcl at lkcl.net Wed Jan 21 12:42:49 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 11:42:49 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763BA7.6070802@gmail.com> Message-ID: > next task: beat the crap out of libmsvcr80.a and /mingw/include/*.h, > repeat until success. https://sourceforge.net/tracker/index.php?func=detail&aid=2134161&group_id=2435&atid=352435 roumen, it looks like you've been and done that, already - thank you!
From rdmurray at bitdance.com Wed Jan 21 14:39:26 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Wed, 21 Jan 2009 08:39:26 -0500 (EST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <200901212146.46821.steve@pearwood.info> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> Message-ID: On Wed, 21 Jan 2009 at 21:46, Steven D'Aprano wrote: > On Wed, 21 Jan 2009 03:10:24 pm Terry Reedy wrote: >> It is a carefully designed 1 to >> 1 transformation between multiple nested statements and a single >> expression. > > I'm sure that correspondence is obvious to some, but it wasn't obvious > to me, and I don't suppose I'm the only one. That's not a criticism of > the current syntax. Far from it -- the current syntax is excellent, > regardless of whether or not you notice that it corresponds to an > if-loop nested inside a for-loop with the contents rotated outside. It wasn't obvious to me until I read this thread, but now that I know about it I feel a huge sense of relief. I was never comfortable with extending (or reading an extension of) a list comprehension beyond the obvious yield/for/if pattern before. Now I have a reliable tool to understand any complex list comprehension. I would not want to lose that! >> But this proposal ignores and breaks that. Using 'while >> x' to mean 'if x: break' *is*, to me, 'ad hoc'. > > But it doesn't mean that. The proposed "while x" has very similar > semantics to the "while x" in a while-loop: break when *not* x. Half right. 'while x' in the proposed syntax is equivalent to 'if not x: break', but IMO it goes too far to say it has similar semantics to 'while x' in a while loop.
Neither

    while x*x<4:
        for x in range(10):
            yield x*x

nor

    for x in range(10):
        while x*x<4:
            yield x*x

are the same as

    for x in range(10):
        if not x*x<4: break
        yield x*x

I understand that you are saying that 'while x' is used in the same logical sense ("take a different action when x is no longer true"), but that I don't feel that that is enough to say that it has similar semantics. Or, perhaps more accurately, it is just similar enough to be very confusing because it is also different enough to be very surprising. The semantics of 'while' in python includes the bit about creating a loop, and does _not_ include executing a 'break' in the surrounding loop. To give 'while' this new meaning would be, IMO, un-pythonic. (If python had a 'for/while' construct, it would be a different story...and then it would probably already be part of the list comprehension syntax.) >> So I detest the proposed change. I find it ugly and anti-Pythonic. I'd say +1 except that I don't find it ugly, just un-Pythonic :) --RDM From gerald.britton at gmail.com Wed Jan 21 15:38:01 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Wed, 21 Jan 2009 09:38:01 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> Message-ID: <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> FWIW, there are a few historic languages that implement a compound for-loop: Algol 68, PL/I, SAS et al allow constructs that, if translated to an equivalent (currently invalid) Python-style syntax would look like this:

    for <var> in <iterable> while <condition>:
        <statements>

Some also allow for an "until" keyword. I'm not suggesting that we need to do this in Python; it's just interesting to note that there is some precedent for this approach.
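[Editor's note: the early-exit behaviour under discussion is already expressible today with itertools.takewhile, which is what the 'if not x: break' expansion amounts to. A minimal sketch with invented sample values:]

```python
from itertools import takewhile

# Proposed:  (x*x for x in range(10) while x*x < 4)
# Today's spelling with takewhile -- stops at the first item that fails
# the predicate, without consuming the rest of the iterable:
early_exit = list(takewhile(lambda sq: sq < 4, (x * x for x in range(10))))
assert early_exit == [0, 1]

# An 'if' clause, by contrast, filters but keeps consuming the iterable
# to the end:
filtered = [x * x for x in range(10) if x * x < 4]
assert filtered == [0, 1]  # same result here only because x*x is increasing
```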
On Wed, Jan 21, 2009 at 8:39 AM, wrote: > On Wed, 21 Jan 2009 at 21:46, Steven D'Aprano wrote: >> >> On Wed, 21 Jan 2009 03:10:24 pm Terry Reedy wrote: >>> >>> It is a carefully designed 1 to >>> 1 transformation between multiple nested statements and a single >>> expression. >> >> I'm sure that correspondence is obvious to some, but it wasn't obvious >> to me, and I don't suppose I'm the only one. That's not a criticism of >> the current syntax. Far from it -- the current syntax is excellent, >> regardless of whether or not you notice that it corresponds to an >> if-loop nested inside a for-loop with the contents rotated outside. > > It wasn't obvious to me until I read this thread, but now that I know > about it I feel a huge sense of relief. I was never comfortable with > extending (or reading an extension of) a list comprehension beyond the > obvious yield/for/if pattern before. Now I have a reliable tool to > understand any complex list comprehension. I would not want to lose that! > >>> But this proposal ignores and breaks that. Using 'while >>> x' to mean 'if x: break' *is*, to me, 'ad hoc'. >> >> But it doesn't mean that. The proposed "while x" has very similar >> semantics to the "while x" in a while-loop: break when *not* x. > > Half right. 'while x' in the proposed syntax is equivalent to 'if not x: > break', but IMO it goes too far to say it has similar semantics to 'while > x' in a while loop. Neither
>
>     while x*x<4:
>         for x in range(10):
>             yield x*x
>
> nor
>
>     for x in range(10):
>         while x*x<4:
>             yield x*x
>
> are the same as
>
>     for x in range(10):
>         if not x*x<4: break
>         yield x*x
>
> I understand that you are saying that 'while x' is used in the same > logical sense ("take a different action when x is no longer true"), > but that I don't feel that that is enough to say that it has similar > semantics. Or, perhaps more accurately, it is just similar enough to be > very confusing because it is also different enough to be very surprising.
> The semantics of 'while' in python includes the bit about creating a > loop, and does _not_ include executing a 'break' in the surrounding loop. > To give 'while' this new meaning would be, IMO, un-pythonic. (If python > had a 'for/while' construct, it would be a different story...and then > it would probably already be part of the list comprehension syntax.) > >>> So I detest the proposed change. I find it ugly and anti-Pythonic. > > I'd say +1 except that I don't find it ugly, just un-Pythonic :) > > --RDM > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/gerald.britton%40gmail.com > From algorias at yahoo.com Wed Jan 21 16:27:38 2009 From: algorias at yahoo.com (Vitor Bosshard) Date: Wed, 21 Jan 2009 07:27:38 -0800 (PST) Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> Message-ID: <549056.3449.qm@web54409.mail.yahoo.com> ----- Original Message ---- > From: Gerald Britton > To: rdmurray at bitdance.com > CC: python-dev at python.org > Sent: Wednesday, 21 January 2009 11:38:01 > Subject: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions > > FWIW, there are a few historic languages that implement a compound > for-loop: Algol 68, PL/I, SAS et al allow constructs that, if > translated to an equivalent (currently invalid) Python-style syntax > would look like this:
>
>     for <var> in <iterable> while <condition>:
>         <statements>
>
> Some also allow for an "until" keyword. I'm not suggesting that we > need to do this in Python; it's just interesting to note that there is > some precedent for this approach.
> Well, you could propose changing the for loop syntax (and by extension comprehensions and generators). It's a much more radical proposal, but it does keep consistency across the board, which is one of the major flaws of the PEP in its current form. BTW, there is already an "until" keyword in python, it's called "while not" ;) Vitor ¡Todo sobre la Liga Mexicana de fútbol! Estadísticas, resultados, calendario, fotos y más: http://espanol.sports.yahoo.com/ From gerald.britton at gmail.com Wed Jan 21 16:51:39 2009 From: gerald.britton at gmail.com (Gerald Britton) Date: Wed, 21 Jan 2009 10:51:39 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <549056.3449.qm@web54409.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> <549056.3449.qm@web54409.mail.yahoo.com> Message-ID: <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> OK then, what is the feeling out there about extending the "for" syntax in general (and by extension list comprehensions and generator expressions) by adding an optional while clause like this:

    for <item> in <iterable> [while [not] <predicate>]:
        <suite>

The predicate would be tested after an <item> is taken from <iterable> and before execution of the <suite>. If the predicate evaluates to false, StopIteration would be raised. This construct would be equivalent to:

    for <item> in <iterable>:
        if [not] <predicate>: break
        <suite>

(with the sense of the test inverted). Note: this is beyond what I was thinking in the first place, but has arisen from the ensuing discussion.
Note 2: this would cover itertools.takewhile but not itertools.dropwhile, AFAICS On Wed, Jan 21, 2009 at 10:27 AM, Vitor Bosshard wrote: > ----- Original Message ---- >> From: Gerald Britton >> To: rdmurray at bitdance.com >> CC: python-dev at python.org >> Sent: Wednesday, 21 January 2009 11:38:01 >> Subject: Re: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions >> >> FWIW, there are a few historic languages that implement a compound >> for-loop: Algol 68, PL/I, SAS et al allow constructs that, if >> translated to an equivalent (currently invalid) Python-style syntax >> would look like this:
>>
>>     for <var> in <iterable> while <condition>:
>>         <statements>
>>
>> Some also allow for an "until" keyword. I'm not suggesting that we >> need to do this in Python; it's just interesting to note that there is >> some precedent for this approach. >> > > Well, you could propose changing the for loop syntax (and by extension comprehensions and generators). It's a much more radical proposal, but it does keep consistency across the board, which is one of the major flaws of the PEP in its current form. > > BTW, there is already an "until" keyword in python, it's called "while not" ;) > > > Vitor > > > ¡Todo sobre la Liga Mexicana de fútbol!
Estadísticas, resultados, calendario, fotos y más: > http://espanol.sports.yahoo.com/ > From ludvig at lericson.se Wed Jan 21 18:32:37 2009 From: ludvig at lericson.se (Ludvig Ericson) Date: Wed, 21 Jan 2009 18:32:37 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901210929s66b8233ew32310e3a6a4f392f@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> <549056.3449.qm@web54409.mail.yahoo.com> <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> <5d1a32000901210929s66b8233ew32310e3a6a4f392f@mail.gmail.com> Message-ID: <10286A07-55AB-4BA3-9818-DAE5CD5999FD@lericson.se> The following was supposed to go to the list: 18:29 Gerald Britton: > Yes you could have long lines, but you wouldn't have to use it. You > could still code it up as you would today. It might be convenient for > shorter expressions though. > > 12:12 PM Ludvig Ericson: >> On Jan 21, 2009, at 16:51, Gerald Britton wrote: >> >>> for <item> in <iterable> [while [not] <predicate>]: >>> >> >> (Sorry for just sort of popping in to the list.) >> >> That would make for some very, very long lines. I for one wouldn't >> like >> seeing: >>>>> for cart_item in current_user.cart.new_items \ >> ... while cart_item.cant_imagine_more: >> ... >> >> I realize that the other approach--an immediate if-break--wouldn't >> look >> great either, but it wouldn't be cramming that much stuff into one >> line.
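[Editor's note: Gerald's "Note 2" distinguishes itertools.takewhile from itertools.dropwhile; the difference is easy to see on a small example with invented values.]

```python
from itertools import takewhile, dropwhile

values = [1, 4, 6, 4, 1]

# takewhile yields items until the predicate first fails -- the behaviour
# a leading 'while' clause in a for loop would give:
assert list(takewhile(lambda v: v < 5, values)) == [1, 4]

# dropwhile does the opposite: it skips items until the predicate first
# fails, then yields everything that remains:
assert list(dropwhile(lambda v: v < 5, values)) == [6, 4, 1]
```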
From aahz at pythoncraft.com Wed Jan 21 18:54:37 2009 From: aahz at pythoncraft.com (Aahz) Date: Wed, 21 Jan 2009 09:54:37 -0800 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> <549056.3449.qm@web54409.mail.yahoo.com> <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> Message-ID: <20090121175437.GA6175@panix.com> On Wed, Jan 21, 2009, Gerald Britton wrote: > > OK then, what is the feeling out there about extending the "for" > syntax in general (and by extension list comprehensions and generator > expressions) by adding an optional while clause like this: > > for in [while [ | not ]: > What I suggest is that your ideas need more thought before bringing them to python-dev -- I think you should either go back to python-ideas or try comp.lang.python -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From aahz at pythoncraft.com Wed Jan 21 19:00:14 2009 From: aahz at pythoncraft.com (Aahz) Date: Wed, 21 Jan 2009 10:00:14 -0800 Subject: [Python-Dev] PEP 8 and constants Message-ID: <20090121180014.GA16447@panix.com> In comp.lang.python, there has been some discussion of the fact that there are no guidelines in PEP 8 for constants: http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 Is there any sentiment that PEP 8 should be updated to reflect the common usage of ALL_CAPS for constants? 
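[Editor's note: the convention being asked about looks like this in practice; all names here are invented for illustration.]

```python
# Module-level constants in ALL_CAPS with underscores -- the convention
# commonly adopted in projects basing their style guides on PEP 8.
MAX_RETRIES = 3
DEFAULT_TIMEOUT = 30.0
_INTERNAL_BUFFER_SIZE = 4096  # leading underscore: internal to the module

def fetch(url, timeout=DEFAULT_TIMEOUT):
    """Hypothetical function showing constants used as defaults."""
    for attempt in range(MAX_RETRIES):
        ...  # retry logic would go here
```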
-- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From guido at python.org Wed Jan 21 19:04:19 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Jan 2009 10:04:19 -0800 Subject: [Python-Dev] Copyright notices in modules In-Reply-To: References: <94F1BCD4C7B448C8AE5C8C1FA34DD661@RaymondLaptop1> <497596B1.4060600@egenix.com> <9E704C0F455642AE974D4DAC51534F3F@RaymondLaptop1> Message-ID: On Tue, Jan 20, 2009 at 11:01 PM, Terry Reedy wrote: > Guido van Rossum wrote: >> >> 2009/1/20 Raymond Hettinger : >>> >>> I'm at a loss of why the notice needs to be there at all. >> >> There's a difference between contributing a whole file and >> contributing a patch. Patches do not require copyright notices. Whole >> files do. This is not affected by later edits to the file. > > In my comment, I postulated the situation where the patch consisted of > merging in another, independently copyrighted, 'whole file'. Perhaps this > has mostly been a non-existent situation and therefore moot. > > One real situation I was thinking of, unconnected to Google as far as I am > aware, is the case of two third-party IP6 modules and the suggestion that > they be merged into one stdlib module. If that were accomplished by > committing one and merging the other in a patch, it would be unfair (and > untrue) to have just one copyright notice. Of course, in this case, I hope > the two authors work everything out between themselves first before any > submission. There's nothing to stop you from having multiple copyrights in one file, when that represents the rights of the original authors fairly. > I completely understand about strongly preferring programming to lawyer > consultation ;-).
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From guido at python.org Wed Jan 21 19:48:59 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Jan 2009 10:48:59 -0800 Subject: [Python-Dev] PEP 8 and constants In-Reply-To: <20090121180014.GA16447@panix.com> References: <20090121180014.GA16447@panix.com> Message-ID: On Wed, Jan 21, 2009 at 10:00 AM, Aahz wrote: > In comp.lang.python, there has been some discussion of the fact that > there are no guidelines in PEP 8 for constants: > > http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 > > Is there any sentiment that PEP 8 should be updated to reflect the common > usage of ALL_CAPS for constants? It makes sense to codify this usage in PEP 8. I think it's by far the most common convention adopted by projects that set their own style guide based on PEP 8 with local additions. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tseaver at palladion.com Wed Jan 21 20:02:55 2009 From: tseaver at palladion.com (Tres Seaver) Date: Wed, 21 Jan 2009 14:02:55 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <549056.3449.qm@web54409.mail.yahoo.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> <549056.3449.qm@web54409.mail.yahoo.com> Message-ID: <4977715F.8030203@palladion.com> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Vitor Bosshard wrote: >> Some also allow for an "until" keyword. I'm not suggesting that we >> need to do this in Python; it's just interesting to note that there is >> some precedent for this approach. >> > > Well, you could propose changing the for loop syntax (and by > extension comprehensions and generators). 
It's a much more radical proposal, but > it does keep consistency across the board, which is one of the major > flaws of the PEP in its current form. > > BTW, there is already an "until" keyword in python, it's called "while not" ;) 'until' is used at least in some languages (Pascal, Modula*, maybe Ada?) for a "terminate at bottom" loop (one guaranteed to run at least once): in such cases, the predicate has the negative sense. Tres. - -- =================================================================== Tres Seaver +1 540-429-0999 tseaver at palladion.com Palladion Software "Excellence by Design" http://palladion.com -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFJd3Ff+gerLs4ltQ4RAuOQAJ47EA8Cf1KPMdNiZTBiJqweiUNZBgCgsVrc 38fgphB+hjdnTblAQT8Q5tA= =SeEn -----END PGP SIGNATURE----- From martin at v.loewis.de Wed Jan 21 20:42:17 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 21 Jan 2009 20:42:17 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> Message-ID: <49777A99.9010607@v.loewis.de> > sorry, martin - i thought the win32 builds generated python25.lib, > python25.dll Correct. > and python25.def No. > so as to fit into the 8.3 filename convention. No. It generates python25.lib because that's the import library for python25.dll. It calls it python25.dll because the lib prefix is atypical for the platform, and also redundant (DLL means "dynamic link library"). The Python binary installer also includes libpython25.a, for use with mingw32. Regards, Martin From rowen at u.washington.edu Wed Jan 21 20:47:52 2009 From: rowen at u.washington.edu (Russell E.
Owen) Date: Wed, 21 Jan 2009 11:47:52 -0800 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> Message-ID: In article , rdmurray at bitdance.com wrote: >... > I understand that you are saying that 'while x' is used in the same > logical sense ("take a different action when x is no longer true"), > but that I don't feel that that is enough to say that it has similar > semantics. Or, perhaps more accurately, it is just similar enough to be > very confusing because it is also different enough to be very surprising. > The semantics of 'while' in python includes the bit about creating a > loop, and does _not_ include executing a 'break' in the surrounding loop. > To give 'while' this new meaning would be, IMO, un-pythonic. (If python > had a 'for/while' construct, it would be a different story...and then > it would probably already be part of the list comprehension syntax.) I agree. I feel that the term "while" is a poor choice for "when this is no longer true then stop". It sounds more like a synonym for "if" to me. I would be much more comfortable using "until" (in the opposite sense to the proposed "while"); it clearly implies "we're done so stop". I don't know if it's a feature that is really useful, but I do think it would be transparent: code that used it would be easily understood. -- Russell From lkcl at lkcl.net Wed Jan 21 20:50:28 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 19:50:28 +0000 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 Message-ID: this is a progress report on compiling python using entirely free software tools, no proprietary compilers or operating systems involved, yet still linking and successfully running with msvcr80 assemblies. 
manifests and rc files, which are compiled to internal resources, have been added. various sections which are uniquely identified by _MSC_VER >= 1400 etc have had to be enabled with corresponding MSVCRT_VERSION >= 0x0800 - in particular, signal handling (PyOS_getsig()). currently, under wine with msvcr80, there looks like there is a bug with a common theme related to threads, but here's a short list: test_array.py is blocking, test_bz2.py is hanging and test_cmd_line.py causes a segfault; test_ctypes is _still_ a bundle of fun. for those people who use native win32 platforms who are compiling up this code, you should have better luck. significantly, the wine developers have been absolutely fantastic, and have fixed several bugs in wine, sometimes within hours, that were found as a result of running the extremely comprehensive python regression tests. the python regression tests are a credit to the collaborative incremental improvement process of free software development. i look forward to seeing the same incremental improvement applied to the development of python, evidence of which would be clearly seen by the acceptance of one of the following patches, one of which is dated 2003: http://bugs.python.org/issue3754 http://bugs.python.org/issue841454 http://bugs.python.org/issue3871 http://bugs.python.org/issue4954 http://bugs.python.org/issue5010 for those people wishing to track and contribute to the development of python for win32 using entirely free software tools, either under wine or native windows, there is a git repository, here, slightly illogically named pythonwine because that's where i started from (cross-compiling python under wine, so i could get at the wine registry from python). obviously, since then, things have... moved on :) http://github.com/lkcl/pythonwine/tree/python_2.5.2_wine l.
From lkcl at lkcl.net Wed Jan 21 21:08:05 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 20:08:05 +0000 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: References: Message-ID: > http://bugs.python.org/issue5010 correction: that's http://bugs.python.org/issue5026 apologies for the mix-up. also, for the msvcrt80 build, it is _essential_ that you use a patched version of mingw32-runtime, see: https://sourceforge.net/tracker/index.php?func=detail&aid=2134161&group_id=2435&atid=352435 libmsvcr80.a mistakenly thinks that _fstat exists (it doesn't - only _fstat32 does, and many more). it's quite straightforward to rebuild - just remember to run ./configure --prefix=/mingw and if you want to revert just reinstall mingw runtime .exe l. From tjreedy at udel.edu Wed Jan 21 21:42:05 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 21 Jan 2009 15:42:05 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <200901212146.46821.steve@pearwood.info> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> Message-ID: Steven D'Aprano wrote: > On Wed, 21 Jan 2009 03:10:24 pm Terry Reedy wrote: >> Steven D'Aprano wrote: >>> The three clauses are neither in the same order, nor are they in >>> reverse order. >> They are in the same order but rotated, with the last brought around >> to the front to emphasize it. Did you really not notice that either? > > There are only three items, of course I noticed that there is *some* > rearrangement of the first that leads to the second. Out of the six > possible permutations of three items, they can all be described in > terms of some sort of reflection, rotation or swap. Irrelevant.
*Every* comprehension, no matter how many clauses, rotates the expression from last to first and keeps the clauses in the same order with the same meaning. Simple rule. >> Ironically, in a thread cross-posted on c.l.p and elsewhere, someone >> just labeled Python's comprehension syntax as "ad hoc syntax soup". > > Is that Xah Lee? It sounds like the sort of thing he'd say. It was the thread he started, but not him. He contributed other idiocies. Terry Jan Reedy From lkcl at lkcl.net Wed Jan 21 22:07:30 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Wed, 21 Jan 2009 21:07:30 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49777A99.9010607@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> Message-ID: On Wed, Jan 21, 2009 at 7:42 PM, "Martin v. Löwis" wrote: >> sorry, martin - i thought the win32 builds generated python25.lib, >> python25.dll > > Correct. > >> and python25.def > > No. > >> so as to fit into the 8.3 filename convention. > > No. It generates python25.lib because that's the import library > for python25.dll. It calls it python25.dll because the lib prefix > is atypical for the platform, and also redundant (DLL means > "dynamic link library"). > > The Python binary installer also includes libpython25.a, for use > with mingw32. ok, so - different from what's being generated by ./configure under msys under wine or native win32 - what's being generated (libpython2.5.a and libpython2.5.dll.a) is more akin to the cygwin environment. therefore, there's absolutely no doubt that the two are completely different. and on that basis, would i be correct in thinking that you _can't_ go linking or building modules or any python win32 code for one and have a hope in hell of using it on the other, and that you would _have_ to rebuild e.g.
numpy for use with a mingw32-msys-built version of python? or, is the .pyd loading a bit cleverer (or perhaps a bit less cleverer) than i'm expecting it to be? l. From martin at v.loewis.de Wed Jan 21 22:13:26 2009 From: martin at v.loewis.de ("Martin v. Löwis") Date: Wed, 21 Jan 2009 22:13:26 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> Message-ID: <49778FF6.8070608@v.loewis.de> > ok, so - different from what's being generated by ./configure under > msys under wine or native win32 - what's being generated (libpython2.5.a > and libpython2.5.dll.a) is more akin to the cygwin > environment. > > therefore, there's absolutely no doubt that the two are completely different. > > and on that basis, would i be correct in thinking that you _can't_ go > linking or building modules or any python win32 code for one and have > a hope in hell of using it on the other, and that you would _have_ to > rebuild e.g. numpy for use with a mingw32-msys-built version of > python? I can't comment on that, because I don't know what your port does. Does it not produce a .dll containing the majority of Python? And is that not called python25.dll? Regards, Martin From cesare.dimauro at a-tono.com Wed Jan 21 21:48:55 2009 From: cesare.dimauro at a-tono.com (Cesare Di Mauro) Date: Wed, 21 Jan 2009 21:48:55 +0100 (CET) Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: References: Message-ID: <50019.151.53.150.247.1232570935.squirrel@webmail1.pair.com> Have you made some benchmarks like pystone?
Cheers, Cesare On Wed, Jan 21, 2009 08:50PM, Luke Kenneth Casson Leighton wrote: > this is a progress report on compiling python using entirely free > software tools, no proprietary compilers or operating systems > involved, yet still linking and successfully running with msvcr80 > assemblies. manifests and rc files, which are compiled to internal > resources, have been added. > various sections which are uniquely identifed by _MSC_VER >= 1400 etc > have had to be enabled with corresponding MSVCRT_VERSION >= 0x0800 - > in particular, signal handling (PyOS_getsig()). > > currently, under wine with msvcr80, there looks like there is a bug > with a common theme related to threads, but here's a short list: > test_array.py is blocking, test_bz2.py is hanging and test_cmd_line.py > causes a segfault; test_ctypes is _still_ a bundle of fun. for those > people who use native win32 platforms who are compiling up this code, > you should have better luck. > > significantly, the wine developers have been absolutely fantastic, and > have fixed several bugs in wine, sometimes within hours, that were > found as a result of running the extremely comprehensive python > regression tests. > > the python regression tests are a credit to the collaborative > incremental improvement process of free software development. 
> > i look forward to seeing the same incremental improvement applied to > the development of python, evidence of which would be clearly seen by > the acceptance of one of the following patches, one of which is dated > 2003: > http://bugs.python.org/issue3754 > http://bugs.python.org/issue841454 > http://bugs.python.org/issue3871 > http://bugs.python.org/issue4954 > http://bugs.python.org/issue5010 > > for those people wishing to track and contribute to the development of > python for win32 using entirely free software tools, either under wine > or native windows, there is a git repository, here, slightly > illogically named pythonwine because that's where i started from > (cross-compiling python under wine, so i could get at the wine > registry from python). obviously, since then, things have... moved on > :) > > http://github.com/lkcl/pythonwine/tree/python_2.5.2_wine > > l. > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/cesare.dimauro%40a-tono.com > > From g.brandl at gmx.net Wed Jan 21 22:53:42 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 21 Jan 2009 22:53:42 +0100 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: Brett Cannon schrieb: > I have been writing up the initial docs for importlib and four things struck me: > > 1. Why is three space indents the preferred indentation level? As said, it matches directive content with directive headers nicely. Ben's solution is nice as well, but now that we have 3-space I'd rather we stick with 3-space (however, if you don't care, I'll not reformat 4-space indents :) Code in code blocks should use 4-space as usual. > 2. Should we start using function annotations? It's not really supported yet by Sphinx. 
Also, I don't know if it makes too much sense, given that it will reinforce the thinking of annotations as type declarations. > 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, > c=None]])``) really necessary when default argument values are > present? And do we really need to nest the brackets when it is obvious > that having one optional argument means the rest are optional as well? We've discussed that once on the doc-SIG, and I agreed that the bracketing is not really pretty, especially if it's heavily nested. Python functions where it makes sense should use the default-value syntax, while C functions without kwargs support need to keep the brackets. Making this consistent throughout the docs is no small task, of course. > 4. The var directive is not working even though the docs list it as a > valid directive; so is it still valid and something is broken, or the > docs need to be updated? (First, you're confusing "directive" and "role" which led to some confusion on Benjamin's part.) Where is a "var" role documented? If it is, it is a bug. Georg From bugtrack at roumenpetrov.info Wed Jan 21 23:51:36 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Thu, 22 Jan 2009 00:51:36 +0200 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: References: Message-ID: <4977A6F8.4060904@roumenpetrov.info> Terry Reedy wrote: > Luke Kenneth Casson Leighton wrote: > >> i look forward to seeing the same incremental improvement applied to >> the development of python, evidence of which would be clearly seen by >> the acceptance of one of the following patches, one of which is dated >> 2003: > >> http://bugs.python.org/issue841454 > > Against 2.3, rejected due to dependence on SCons. > Also appears to have been incomplete, needing more work. No, it was complete but used SCons. Most of the changes in the code you will see again in 3871.
>> http://bugs.python.org/issue3754 > Open by Roumen Petrov, no review, see below. This is again a request and the patch is for trunk. It shares a common idea with 841454:Cross building python for mingw32:Andreas Ames (yxcv):2003-11-13 14:31 1006238:Cross compile patch:Daniel Goertzen (goertzen):2004-08-09 22:05 1597850:Cross compiling patches for MINGW hanwen:2006-11-16 16:57 >> http://bugs.python.org/issue3871 > Open, from same submitter, only (minor) review by you. > Does this supersede 3754? No. It shares common changes to code with 841454, 1006238, 1412448, 1597850. Maybe 1597850 and 3871 supersede 1412448. The issue3871 raises questions (and includes solutions/workarounds) related to: 2942 - mingw/cygwin do not accept asm file as extension source 2445 - Use The CygwinCCompiler Under Cygwin 1706863 - Failed to build Python 2.5.1 with sqlite3 Also issues related to LDFLAGS: 4010 - configure options don't trickle down to distutils 1628484 - Python 2.5 64 bit compile fails on Solaris 10/gcc 4.1.1 [SNIP] From aahz at pythoncraft.com Thu Jan 22 00:39:24 2009 From: aahz at pythoncraft.com (Aahz) Date: Wed, 21 Jan 2009 15:39:24 -0800 Subject: [Python-Dev] Where is Fred Drake? Message-ID: <20090121233924.GA10141@panix.com> Mail to fdrake at acm.org is bouncing; I don't know whether it's a temporary failure. Does anyone have another address for him? ----- Forwarded message from Mail Delivery System ----- > Date: Wed, 21 Jan 2009 22:48:49 +0100 (CET) > From: Mail Delivery System > Subject: Undelivered Mail Returned to Sender > To: webmaster at python.org > Content-Description: Notification > This is the mail system at host bag.python.org. > > I'm sorry to have to inform you that your message could not > be delivered to one or more recipients. It's attached below. > > For further assistance, please send mail to postmaster. > > If you do so, please include this problem report. You can > delete your own text from the attached returned message.
> > The mail system > > : host acm.org.s7a1.psmtp.com[64.18.6.14] said: 550 No such > user - psmtp (in reply to RCPT TO command) Content-Description: Delivery report > Reporting-MTA: dns; bag.python.org > X-Postfix-Queue-ID: 24FF41E404E > X-Postfix-Sender: rfc822; webmaster at python.org > Arrival-Date: Wed, 21 Jan 2009 22:48:48 +0100 (CET) > > Final-Recipient: rfc822; fdrake at acm.org > Original-Recipient: rfc822;fdrake at acm.org > Action: failed > Status: 5.0.0 > Remote-MTA: dns; acm.org.s7a1.psmtp.com > Diagnostic-Code: smtp; 550 No such user - psmtp ----- End forwarded message ----- -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From brett at python.org Thu Jan 22 00:56:19 2009 From: brett at python.org (Brett Cannon) Date: Wed, 21 Jan 2009 15:56:19 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: On Wed, Jan 21, 2009 at 13:53, Georg Brandl wrote: > Brett Cannon schrieb: >> I have been writing up the initial docs for importlib and four things struck me: >> >> 1. Why is three space indents the preferred indentation level? > > As said, it matches directive content with directive headers nicely. > Ben's solution is nice as well, but now that we have 3-space I'd rather > we stick with 3-space (however, if you don't care, I'll not reformat > 4-space indents :) > =) OK. > Code in code blocks should use 4-space as usual. > >> 2. Should we start using function annotations? > > It's not really supported yet by Sphinx. Also, I don't know if it makes > too much sense, given that it will reinforce the thinking of annotations > as type declarations. > Fine by me. >> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >> c=None]])``) really necessary when default argument values are >> present? 
And do we really need to nest the brackets when it is obvious >> that having on optional argument means the rest are optional as well? > > We've discussed that once on the doc-SIG, and I agreed that the bracketing > is not really pretty, especially if it's heavily nested. Python functions > where it makes sense should use the default-value syntax, while C functions > without kwargs support need to keep the brackets. > That was my thinking. > Making this consistent throughout the docs is no small task, of course. > Nope, but perhaps all new docs should stop their use. >> 4. The var directive is not working even though the docs list it as a >> valid directive; so is it still valid and something is broken, or the >> docs need to be updated? > > (First, you're confusing "directive" and "role" which led to some confusion > on Benjamin's part.) > > Where is a "var" role documented? If it is, it is a bug. http://docs.python.org/dev/3.0/documenting/markup.html#inline-markup. -Brett From benji at benjiyork.com Thu Jan 22 00:56:52 2009 From: benji at benjiyork.com (Benji York) Date: Wed, 21 Jan 2009 18:56:52 -0500 Subject: [Python-Dev] Where is Fred Drake? In-Reply-To: <20090121233924.GA10141@panix.com> References: <20090121233924.GA10141@panix.com> Message-ID: On Wed, Jan 21, 2009 at 6:39 PM, Aahz wrote: > Mail to fdrake at acm.org is bouncing; I don't know whether it's a > temporary failure. Does anyone have another address for him? /me channels Fred: Use freddrake at verizon.net until the acm.org account is back up. 
-- Benji York From cs at zip.com.au Thu Jan 22 01:40:10 2009 From: cs at zip.com.au (Cameron Simpson) Date: Thu, 22 Jan 2009 11:40:10 +1100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <4977715F.8030203@palladion.com> Message-ID: <20090122004010.GA14090@cskk.homeip.net> On 21Jan2009 14:02, Tres Seaver wrote: | Vitor Bosshard wrote: | > BTW, there is already an "until" keyword in python, it's called "while not" ;) | | 'until' is used at least in some languages (Pascal, Modula*, maybe Ada?) | for a "terminate at bottom" loop (one guaranteed to run at least once): | in such cases, the predicate has the negative sense. This is a particular flavour of "do ... while" which just happens to read a little better in English. It does sometimes bother me that Python doesn't have do...while when I find myself replicating the loop bottom above the loop. Back at uni we had to implement a small language in our compilers class and the lecturer had specified a proper generic while loop, thus: loop: suite while invariant suite endloop I think the keywords were better than above, but it neatly handled the fact that the while-test must often be preceded by some setup that would be replicated at the loop bottom in Python and many other languages: setup-invariant-state while test-invariant do stuff setup-invariant-state of which the bare while... and converse do...while loops are particular extremes. Cheers, -- Cameron Simpson DoD#743 http://www.cskk.ezoshosting.com/cs/ Why doesn't DOS ever say EXCELLENT command or filename? From guido at python.org Thu Jan 22 04:03:53 2009 From: guido at python.org (Guido van Rossum) Date: Wed, 21 Jan 2009 19:03:53 -0800 Subject: [Python-Dev] PEP 8 and constants In-Reply-To: <1afaf6160901211842x57963b26q796bca35975c69e1@mail.gmail.com> References: <20090121180014.GA16447@panix.com> <1afaf6160901211842x57963b26q796bca35975c69e1@mail.gmail.com> Message-ID: Yes, that's what I commonly see.
On Wed, Jan 21, 2009 at 6:42 PM, Benjamin Peterson wrote: > On Wed, Jan 21, 2009 at 12:48 PM, Guido van Rossum wrote: >> On Wed, Jan 21, 2009 at 10:00 AM, Aahz wrote: >>> In comp.lang.python, there has been some discussion of the fact that >>> there are no guidelines in PEP 8 for constants: >>> >>> http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 >>> >>> Is there any sentiment that PEP 8 should be updated to reflect the common >>> usage of ALL_CAPS for constants? >> >> It makes sense to codify this usage in PEP 8. I think it's by far the >> most common convention adopted by projects that set their own style >> guide based on PEP 8 with local additions. > > Do you suggest underscores between words in constants as with other names? > > > > -- > Regards, > Benjamin > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Thu Jan 22 04:22:55 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 21 Jan 2009 22:22:55 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <20090122004010.GA14090@cskk.homeip.net> References: <4977715F.8030203@palladion.com> <20090122004010.GA14090@cskk.homeip.net> Message-ID: Cameron Simpson wrote: > > Back at uni we had to implement a small language in our compilers class > and the lecturer had specified a proper generic while loop, thus: > > loop: > suite > while invariant > suite > endloop In Python, that is spelled while True: suite if not invariant: break suite > I think the keywords were better than above, but it neatly handled the > fact that the while-test must often be preceeded by some setup that > would be replicated at the loop bottom in Python and many other languages: > > setup-invariant-state > while test-invariant > do stuff > setup-invariant-state Good Python programmers do not repeat the setup code like this. See the proper say-it-once way above. 
This discussion belongs back on Python ideas, where it began and should have stayed. tjr From benjamin at python.org Thu Jan 22 04:34:24 2009 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 21 Jan 2009 21:34:24 -0600 Subject: [Python-Dev] PEP 8 and constants In-Reply-To: References: <20090121180014.GA16447@panix.com> <1afaf6160901211842x57963b26q796bca35975c69e1@mail.gmail.com> Message-ID: <1afaf6160901211934q4538e6f5t2819459bd1bdc4da@mail.gmail.com> >> On Wed, Jan 21, 2009 at 12:48 PM, Guido van Rossum wrote: >>> On Wed, Jan 21, 2009 at 10:00 AM, Aahz wrote: >>>> In comp.lang.python, there has been some discussion of the fact that >>>> there are no guidelines in PEP 8 for constants: >>>> >>>> http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 >>>> >>>> Is there any sentiment that PEP 8 should be updated to reflect the common >>>> usage of ALL_CAPS for constants? >>> >>> It makes sense to codify this usage in PEP 8. I think it's by far the >>> most common convention adopted by projects that set their own style >>> guide based on PEP 8 with local additions. Ok. I added a note about constants in r68849. -- Regards, Benjamin From benjamin at python.org Thu Jan 22 03:42:28 2009 From: benjamin at python.org (Benjamin Peterson) Date: Wed, 21 Jan 2009 20:42:28 -0600 Subject: [Python-Dev] PEP 8 and constants In-Reply-To: References: <20090121180014.GA16447@panix.com> Message-ID: <1afaf6160901211842x57963b26q796bca35975c69e1@mail.gmail.com> On Wed, Jan 21, 2009 at 12:48 PM, Guido van Rossum wrote: > On Wed, Jan 21, 2009 at 10:00 AM, Aahz wrote: >> In comp.lang.python, there has been some discussion of the fact that >> there are no guidelines in PEP 8 for constants: >> >> http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 >> >> Is there any sentiment that PEP 8 should be updated to reflect the common >> usage of ALL_CAPS for constants? > > It makes sense to codify this usage in PEP 8. 
I think it's by far the > most common convention adopted by projects that set their own style > guide based on PEP 8 with local additions. Do you suggest underscores between words in constants as with other names? -- Regards, Benjamin From aahz at pythoncraft.com Thu Jan 22 07:14:13 2009 From: aahz at pythoncraft.com (Aahz) Date: Wed, 21 Jan 2009 22:14:13 -0800 Subject: [Python-Dev] PEP 8 and constants In-Reply-To: <1afaf6160901211934q4538e6f5t2819459bd1bdc4da@mail.gmail.com> References: <20090121180014.GA16447@panix.com> <1afaf6160901211842x57963b26q796bca35975c69e1@mail.gmail.com> <1afaf6160901211934q4538e6f5t2819459bd1bdc4da@mail.gmail.com> Message-ID: <20090122061413.GA29901@panix.com> On Wed, Jan 21, 2009, Benjamin Peterson wrote: >>> On Wed, Jan 21, 2009 at 12:48 PM, Guido van Rossum wrote: >>>> On Wed, Jan 21, 2009 at 10:00 AM, Aahz wrote: >>>>> >>>>> In comp.lang.python, there has been some discussion of the fact that >>>>> there are no guidelines in PEP 8 for constants: >>>>> >>>>> http://groups.google.com/group/comp.lang.python/browse_thread/thread/ed964fe8ad6da7b7 >>>>> >>>>> Is there any sentiment that PEP 8 should be updated to reflect the common >>>>> usage of ALL_CAPS for constants? >>>> >>>> It makes sense to codify this usage in PEP 8. I think it's by far the >>>> most common convention adopted by projects that set their own style >>>> guide based on PEP 8 with local additions. > > Ok. I added a note about constants in r68849. Thanks! -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. 
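[For readers following along: the convention the thread settled on - module-level constants in ALL_CAPS with underscores between words - is the one now recorded in PEP 8. A minimal sketch; MAX_OVERFLOW and TOTAL are the illustrative names PEP 8 itself uses, SECONDS_PER_DAY is added here for the underscore question.]

```python
# Module-level constants: ALL_CAPS, underscores separating words.
MAX_OVERFLOW = 100
SECONDS_PER_DAY = 24 * 60 * 60
TOTAL = MAX_OVERFLOW + SECONDS_PER_DAY

print(TOTAL)  # 86500
```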
From ncoghlan at gmail.com Thu Jan 22 07:33:56 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Jan 2009 16:33:56 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> <5d1a32000901210638y1fc361d0s893a58988b7e9a7a@mail.gmail.com> <549056.3449.qm@web54409.mail.yahoo.com> <5d1a32000901210751l2ef9e7fdy378293d937b91138@mail.gmail.com> Message-ID: <49781354.2090807@gmail.com> Gerald Britton wrote: > OK then, what is the feeling out there about extending the "for" > syntax in general (and by extension list comprehensions and generator > expressions) by adding an optional while clause like this: > > for <var> in <iterable> [while [<predicate> | not <predicate>]]: > <suite> > > The predicate would be tested after an <item> is taken from <iterable> > and before execution of the <suite>. If the predicate evaluates to > false, StopIteration would be raised. This construct would be > equivalent to: > > for <var> in <iterable>: > if [not <predicate> | <predicate>]: break > <suite> > > Note: this is beyond what I was thinking in the first place, but has > arisen from the ensuing discussion. As Aahz said, this needs to go back to python-ideas or c.l.p to see if it goes anywhere. However, be aware that you're going to need examples from *real code* that show improvements in correctness, readability or speed in order to convince a sufficiently large number of the core devs and/or Guido that such an additional wrinkle to the looping syntax is worth it. A change being clever or cute isn't enough to justify its inclusion - it needs to provide sufficient real world benefit to counter the cost of the feature's development and maintenance, as well as the additional overhead for all users of the language in learning about it.
An approach that has been used effectively in the past to argue for new syntax or builtins is to trawl through the standard library and its test suite looking for things that could be simplified by the proposed addition to the language, but appropriate examples could also be drawn from the code bases of other large Python projects (Twisted, Zope, Django, Bazaar, Mercurial... etc). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Jan 22 07:40:34 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Jan 2009 16:40:34 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <20090122004010.GA14090@cskk.homeip.net> References: <20090122004010.GA14090@cskk.homeip.net> Message-ID: <497814E2.2040400@gmail.com> Cameron Simpson wrote: > On 21Jan2009 14:02, Tres Seaver wrote: > | Vitor Bosshard wrote: > | > BTW, there is already an "until" keyword in python, it's called "while not" ;) > | > | 'until' is used at least in some languages (Pascal, Modula*, maybe Ada?) > | for a "terminate at bottom" loop (one guaranteed to run at least once): > | in such cases, the predicate has the negative sense. > > This is a particular flavour of "do ... while" which just happens > to read a little better in English. It does sometimes bother me that > Python doesn't have do...while when I find my self replicating the loop > bottom above the loop. Adding a do-while construct to Python has already been proposed: http://www.python.org/dev/peps/pep-0315/ It was merely deferred due to only garnering lukewarm support and lack of a reference implementation rather than actually being rejected: http://mail.python.org/pipermail/python-dev/2006-February/060718.html Cheers, Nick. 
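[As a concrete illustration of the status quo the thread keeps returning to: the effect of the proposed while clause is already available through itertools.takewhile. The trial-division sieve below is illustrative, not taken from the thread.]

```python
from itertools import count, takewhile

def sieve():
    """Toy infinite prime generator (trial division)."""
    primes = []
    for n in count(2):
        if all(n % p for p in primes):
            primes.append(n)
            yield n

# Proposed spelling:  prime = (p for p in sieve() while p < 20)
# Available today:
prime = takewhile(lambda p: p < 20, sieve())
print(list(prime))  # [2, 3, 5, 7, 11, 13, 17, 19]
```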
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Jan 22 07:42:29 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Jan 2009 16:42:29 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <200901210955.59699.steve@pearwood.info> <200901212146.46821.steve@pearwood.info> Message-ID: <49781555.1020303@gmail.com> Terry Reedy wrote: > Steven D'Aprano wrote: >> Is that Xah Lee? It sounds like the sort of thing he'd say. > > It was the thread he started, but not him. He contributed other idiocies. Xah Lee is still around? I would have expected him to get bored and go away years ago... Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From eric at trueblade.com Thu Jan 22 10:06:24 2009 From: eric at trueblade.com (Eric Smith) Date: Thu, 22 Jan 2009 04:06:24 -0500 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: References: <4977715F.8030203@palladion.com> <20090122004010.GA14090@cskk.homeip.net> Message-ID: <49783710.6070302@trueblade.com> Terry Reedy wrote: > Cameron Simpson wrote: >> >> Back at uni we had to implement a small language in our compilers class >> and the lecturer had specified a proper generic while loop, thus: >> >> loop: >> suite >> while invariant >> suite >> endloop > > In Python, that is spelled > > while True: > suite > if not invariant: break > suite Indeed. This is the well known "loop and a half" problem. Eric. 
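[For readers unfamiliar with the term: in the "loop and a half", the exit test sits in the middle of the body, so the setup is written once inside the loop instead of being duplicated before the loop and at its bottom. A minimal sketch - the sentinel and the function name are illustrative, not from the thread:]

```python
def collect_until_stop(items, sentinel="STOP"):
    """Gather items until a sentinel, testing in mid-loop."""
    out = []
    it = iter(items)
    while True:
        item = next(it, None)    # the setup, written only once
        if item is None or item == sentinel:
            break                # the mid-loop exit test
        out.append(item)         # the rest of the body
    return out

print(collect_until_stop(["a", "b", "STOP", "c"]))  # ['a', 'b']
```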
From amauryfa at gmail.com Thu Jan 22 10:18:45 2009 From: amauryfa at gmail.com (Amaury Forgeot d'Arc) Date: Thu, 22 Jan 2009 10:18:45 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> Message-ID: Hello, On Wed, Jan 21, 2009 at 22:07, Luke Kenneth Casson Leighton wrote: > On Wed, Jan 21, 2009 at 7:42 PM, "Martin v. Löwis" wrote: >>> sorry, martin - i thought the win32 builds generated python25.lib, >>> python25.dll >> >> Correct. >> >>> and python25.def >> >> No. >> >>> so as to fit into the 8.3 filename convention. >> >> No. It generates python25.lib because that's the import library >> for python25.dll. It calls it python25.dll because the lib prefix >> is atypical for the platform, and also redundant (DLL means >> "dynamic link library"). >> >> The Python binary installer also includes libpython25.a, for use >> with mingw32. > > ok, so - different from what's being generated by ./configure under > msys under wine or native win32 - what's being generated (libpython2.5.a > and libpython2.5.dll.a) is more akin to the cygwin > environment. > > therefore, there's absolutely no doubt that the two are completely different. > > and on that basis, would i be correct in thinking that you _can't_ go > linking or building modules or any python win32 code for one and have > a hope in hell of using it on the other, and that you would _have_ to > rebuild e.g. numpy for use with a mingw32-msys-built version of > python? > > or, is the .pyd loading a bit cleverer (or perhaps a bit less > cleverer) than i'm expecting it to be? On Windows, you must turn on the --enable-shared option if you want to build extension modules. You could take the cygwin build as an example, see what's done in ./configure.in.
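[One way to check from Python itself whether an existing interpreter was built shared is to query its recorded build configuration. In the 2.x of this thread the lookup lived in distutils.sysconfig; the sketch below uses the later stand-alone sysconfig module. The variable name is the one posix-style builds record in their Makefile; Windows builds may simply not define it, in which case the lookup returns None.]

```python
import sysconfig

# 1 on --enable-shared builds, 0 on static posix builds, None if unrecorded.
flag = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("built with --enable-shared:", bool(flag))
```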
-- Amaury Forgeot d'Arc From lkcl at lkcl.net Thu Jan 22 11:44:31 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 10:44:31 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49778FF6.8070608@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> Message-ID: On Wed, Jan 21, 2009 at 9:13 PM, "Martin v. Löwis" wrote: >> ok, so - different from what's being generated by ./configure under >> msys under wine or native win32 - what's being generated (libpython2.5.a >> and libpython2.5.dll.a) is more akin to the cygwin >> environment. >> >> therefore, there's absolutely no doubt that the two are completely different. >> >> and on that basis, would i be correct in thinking that you _can't_ go >> linking or building modules or any python win32 code for one and have >> a hope in hell of using it on the other, and that you would _have_ to >> rebuild e.g. numpy for use with a mingw32-msys-built version of >> python? > I can't comment on that, because I don't know what your port does. > Does it not produce a .dll containing the majority of Python? no, it contains the minimal necessary amount of python modules, exactly like when python is built using cygwin. actually, there's a few modules that _have_ to be included.
roumen discovered that you have to have these: _functools _functoolsmodule.c # Tools for working with functions and callable objects operator operator.c # operator.add() and similar goodies _locale _localemodule.c # -lintl _struct _struct.c _subprocess ../PC/_subprocess.c _winreg ../PC/_winreg.c and i've discovered that when running under wine you have to also have these: _weakref _weakref.c and also when running under wine with msvcr80, so far, you have to also have these: collections collectionsmodule.c thread threadmodule.c all the rest can be done as .pyd > And is that not called python25.dll? no, it's called libpython2.5.dll.a, just like when python is built using cygwin. the configure scripts, thanks to the cygwin build, already end up copying that to libpython2.5.dll. _not_ python25.dll l. p.s. there's nothing to stop you adding every single module and then renaming the resultant blob to libpython25.dll - i just haven't been given, or found, a good reason to do so yet. From lkcl at lkcl.net Thu Jan 22 11:57:07 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 10:57:07 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 9:18 AM, Amaury Forgeot d'Arc wrote: >> or, is the .pyd loading a bit cleverer (or perhaps a bit less >> cleverer) than i'm expecting it to be? > On Windows, you must turn on the --enable-shared option if you want to > build extension modules. > You could take the cygwin build as an example, see what's done in > ./configure.in. amaury, thank you for mentioning that - yes, as it turns out, all of the mingw ports (dan, roumen etc) do pretty much exactly this. also it turns out that on mingw, if you _don't_ disable shared (i.e.
if you try to build a static library) mingw32 gcc runtime utils .16, .17 _and_ .19 all segfault or have runtime assertions when creating the archives!! either ar.exe or ranlib.exe choke themselves to death. which is greaaat. so, i've had to set the variable which specifies the libpython2.5.a static library to "" in order to stop it from being built. it would be helpful if there was a --enable-static=yes/no configure option, but there isn't one. leaving that aside, you understand therefore that dan, roumen and i have all managed to achieve building of .pyd extension modules. so, the question i am asking is: would it be reasonable to expect mingw-compiled .pyd modules to work with a proprietary-compiled msvc python.exe, and vice-versa? l. From tino at wildenhain.de Thu Jan 22 11:48:08 2009 From: tino at wildenhain.de (Tino Wildenhain) Date: Thu, 22 Jan 2009 11:48:08 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> Message-ID: <49784EE8.1050704@wildenhain.de> Hi, Gerald Britton wrote: > The sieve is just one example. The basic idea is that for some > infinite generator (even a very simple one) you want to cut it off > after some point. As for the number of characters, I spelled lambda > incorrectly (left out a b) and there should be a space after the colon > to conform to design guides. So, actually the takewhile version is > two characters longer, not counting "import itertools" of course! the only useful approach I could see is to enable slice syntax on generators which would make it possible to describe the exact or maximum length of results you want out of it.
something like:

>>> g=(i for i in xrange(1000))[2:5]
>>> g.next() # wrapper would now step 2 times w/o yield and 1 with yield
2
>>> g.next()
3
>>> g.next()
4
>>> g.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

as expected - this could be included into itertools for now. Regards Tino > > On Mon, Jan 19, 2009 at 11:44 AM, Daniel Stutzbach > wrote: >> On Mon, Jan 19, 2009 at 10:37 AM, Gerald Britton >> wrote: >>> prime = (p for p in sieve() while p < 1000) >>> prime = takewhile(lamda p:p<1000, sieve()) >> I'm pretty sure the extra cost of evaluating the lambda at each step is tiny >> compared to the cost of the sieve, so I don't think you can make a convincing >> argument on performance. >> >> Also, you know the latter is actually fewer characters, right? :-) >> >> -- >> Daniel Stutzbach, Ph.D. >> President, Stutzbach Enterprises, LLC > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/tino%40wildenhain.de -------------- next part -------------- A non-text attachment was scrubbed...
Name: smime.p7s Type: application/x-pkcs7-signature Size: 3241 bytes Desc: S/MIME Cryptographic Signature URL: From ncoghlan at gmail.com Thu Jan 22 12:42:02 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 22 Jan 2009 21:42:02 +1000 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <49784EE8.1050704@wildenhain.de> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> <49784EE8.1050704@wildenhain.de> Message-ID: <49785B8A.7090509@gmail.com> Tino Wildenhain wrote: >>> g=(i for i in xrange(1000))[2:5] >>> g.next() # wrapper would now step 2 times w/o yield and 1 with yield > 2 >>> g.next() > 3 >>> g.next() > 4 >>> g.next() > Traceback (most recent call last): > File "", line 1, in > StopIteration > > as expected - this could be included into itertools for now. Slicing of arbitrary iterators has been supported by itertools ever since the module was first added to the standard library. >>> from itertools import islice >>> g = islice((i for i in xrange(1000)), 2, 5) >>> list(g) [2, 3, 4] Regards, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From tino at wildenhain.de Thu Jan 22 13:37:22 2009 From: tino at wildenhain.de (Tino Wildenhain) Date: Thu, 22 Jan 2009 13:37:22 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <49785B8A.7090509@gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> <49784EE8.1050704@wildenhain.de> <49785B8A.7090509@gmail.com> Message-ID: <49786882.6090802@wildenhain.de> Nick Coghlan wrote: > Tino Wildenhain wrote: >>>> g=(i for i in xrange(1000))[2:5] >>>> g.next() # wrapper would now step 2 times w/o yield and 1 with yield >> 2 >>>> g.next() >> 3 >>>> g.next() >> 4 >>>> g.next() >> Traceback (most recent call last): >> File "", line 1, in >> StopIteration >> >> as expected - this could be included into itertools for now. > > Slicing of arbitrary iterators has been supported by itertools ever > since the module was first added to the standard library. > >>>> from itertools import islice >>>> g = islice((i for i in xrange(1000)), 2, 5) >>>> list(g) > [2, 3, 4] > Yeah right, I actually believed it is already there but didn't bother to check ;-) Thx Tino From catch-all at masklinn.net Thu Jan 22 14:44:41 2009 From: catch-all at masklinn.net (Xavier Morel) Date: Thu, 22 Jan 2009 14:44:41 +0100 Subject: [Python-Dev] PEP 3142: Add a "while" clause to generator expressions In-Reply-To: <49785B8A.7090509@gmail.com> References: <5d1a32000901190710i288bf19ahbbf1c12385f793b0@mail.gmail.com> <5d1a32000901190837v70242228l381f3801ea1866bb@mail.gmail.com> <5d1a32000901190859h6720205o585fee8d19607f2@mail.gmail.com> <49784EE8.1050704@wildenhain.de> <49785B8A.7090509@gmail.com> Message-ID: <154174DF-DA11-4F6C-8A1C-9FDD1C56F260@masklinn.net> On 22 Jan 2009, at 12:42 , Nick Coghlan wrote: > Tino Wildenhain wrote: >>>> g=(i for i in xrange(1000))[2:5] >>>> g.next() # wrapper would now step 2 times w/o yield and 1 with >>>> yield >> 2 >>>> g.next() >> 3 >>>> g.next() >> 4 >>>> g.next() >> Traceback (most recent call last): >> File "", line 1, in >> StopIteration >> >> as expected - this could be included into itertools for now. > > Slicing of arbitrary iterators has been supported by itertools ever > since the module was first added to the standard library. islice is pretty annoying in some aspects, though. Mainly because it doesn't accept kwargs and defaults to requiring the stop argument. So drop(iterable, n) has to be written islice(iterable, n, None) (and of course the naming isn't ideal), and you can't really use functools.partial since the iterator is the first argument (unless there's a way to partially apply only the tail args without kwargs).
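[Editor's note: Xavier's point about islice's positional-only signature is easy to demonstrate; a minimal sketch of the drop() helper he mentions — drop is the hypothetical name from his message, not a stdlib function:]

```python
from itertools import islice

def drop(iterable, n):
    # islice() takes only positional arguments and requires the stop
    # value, so "skip the first n items" needs an explicit None stop
    return islice(iterable, n, None)

print(list(drop(range(10), 7)))      # -> [7, 8, 9]
print(list(drop(iter("abcde"), 2)))  # -> ['c', 'd', 'e']
```

Because the iterable is islice's *first* argument, functools.partial cannot pre-bind just the (n, None) tail without keyword support, which is why a tiny wrapper like this is the usual workaround.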
From techtonik at gmail.com Thu Jan 22 15:11:31 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 22 Jan 2009 16:11:31 +0200 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: <49741F83.9020804@v.loewis.de> References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> <87tz7w85do.fsf@benfinney.id.au> <49741F83.9020804@v.loewis.de> Message-ID: I do have some old patches for roundup that I was unable to test, because of blocking issues with openidenabled python-openid library and my Blogger server. See the top issue with the patch at openidenabled tracker: http://trac.openidenabled.com/trac/query?status=new&status=assigned&status=reopened&project=python-openid&order=priority Judging from complaints in development mailing list during the last three months I may conclude that the library isn't supported anymore. I do not know alternative OpenID implementation for Python, so the only way I see to continue development is to fork the lib. However, it is not really clear to me how to do this in case of Apache license. On Mon, Jan 19, 2009 at 8:36 AM, "Martin v. L?wis" wrote: >> I've also had fruitless discussions about adding OpenID authentication >> to Roundup. > > Did you offer patches to roundup during these discussions? > > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/techtonik%40gmail.com > -- --anatoly t. From techtonik at gmail.com Thu Jan 22 16:00:26 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 22 Jan 2009 17:00:26 +0200 Subject: [Python-Dev] About SCons Re: progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 Message-ID: On Thu, Jan 22, 2009 at 12:51 AM, Roumen Petrov wrote: >> >> Against 2.3, rejected due to dependence on SCons. 
>> Also appears to have been incomplete, needing more work. > > No it was complete but use SCons. Most of changes changes in code you will > see again in 3871. > I would better use SCons for both unix and windows builds. In case of windows for both compilers - mingw and microsoft ones. To port curses extension to windows I need to know what gcc options mean, what are the rules to write Makefiles and how to repeat these rules as well as find options in visual studio interface. Not mentioning various platform-specific defines and warning fixes. -- --anatoly t. From g.brandl at gmx.net Thu Jan 22 19:12:06 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 22 Jan 2009 19:12:06 +0100 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: Brett Cannon schrieb: >>> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >>> c=None]])``) really necessary when default argument values are >>> present? And do we really need to nest the brackets when it is obvious >>> that having on optional argument means the rest are optional as well? >> >> We've discussed that once on the doc-SIG, and I agreed that the bracketing >> is not really pretty, especially if it's heavily nested. Python functions >> where it makes sense should use the default-value syntax, while C functions >> without kwargs support need to keep the brackets. >> > > That was my thinking. > >> Making this consistent throughout the docs is no small task, of course. >> > > Nope, but perhaps all new docs should stop their use. OK. Perhaps we can sprint a bit on automatic replacement at PyCon. >>> 4. The var directive is not working even though the docs list it as a >>> valid directive; so is it still valid and something is broken, or the >>> docs need to be updated? >> >> (First, you're confusing "directive" and "role" which led to some confusion >> on Benjamin's part.) >> >> Where is a "var" role documented? If it is, it is a bug. 
> > http://docs.python.org/dev/3.0/documenting/markup.html#inline-markup. I assume you're referring to "Variable names are an exception, they should be marked simply with *var*."? Do you have suggestions how to improve clarity? Georg From brett at python.org Thu Jan 22 19:20:38 2009 From: brett at python.org (Brett Cannon) Date: Thu, 22 Jan 2009 10:20:38 -0800 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: On Thu, Jan 22, 2009 at 10:12, Georg Brandl wrote: > Brett Cannon schrieb: > >>>> 3. Are brackets for optional arguments (e.g. ``def fxn(a [, b=None [, >>>> c=None]])``) really necessary when default argument values are >>>> present? And do we really need to nest the brackets when it is obvious >>>> that having on optional argument means the rest are optional as well? >>> >>> We've discussed that once on the doc-SIG, and I agreed that the bracketing >>> is not really pretty, especially if it's heavily nested. Python functions >>> where it makes sense should use the default-value syntax, while C functions >>> without kwargs support need to keep the brackets. >>> >> >> That was my thinking. >> >>> Making this consistent throughout the docs is no small task, of course. >>> >> >> Nope, but perhaps all new docs should stop their use. > > OK. Perhaps we can sprint a bit on automatic replacement at PyCon. > That's a possibility. >>>> 4. The var directive is not working even though the docs list it as a >>>> valid directive; so is it still valid and something is broken, or the >>>> docs need to be updated? >>> >>> (First, you're confusing "directive" and "role" which led to some confusion >>> on Benjamin's part.) >>> >>> Where is a "var" role documented? If it is, it is a bug. >> >> http://docs.python.org/dev/3.0/documenting/markup.html#inline-markup. > > I assume you're referring to "Variable names are an exception, they should > be marked simply with *var*."? Do you have suggestions how to improve > clarity? 
> "... variables, including function/method arguments, ...". -Brett From g.brandl at gmx.net Thu Jan 22 19:29:48 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Thu, 22 Jan 2009 19:29:48 +0100 Subject: [Python-Dev] Questions/comments on documentation formatting In-Reply-To: References: Message-ID: Brett Cannon schrieb: >>>>> 4. The var directive is not working even though the docs list it as a >>>>> valid directive; so is it still valid and something is broken, or the >>>>> docs need to be updated? >>>> >>>> (First, you're confusing "directive" and "role" which led to some confusion >>>> on Benjamin's part.) >>>> >>>> Where is a "var" role documented? If it is, it is a bug. >>> >>> http://docs.python.org/dev/3.0/documenting/markup.html#inline-markup. >> >> I assume you're referring to "Variable names are an exception, they should >> be marked simply with *var*."? Do you have suggestions how to improve >> clarity? >> > > "... variables, including function/method arguments, ...". Thanks, I've changed it a bit along these lines in r68859. Georg From martin at v.loewis.de Thu Jan 22 19:43:57 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 22 Jan 2009 19:43:57 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> Message-ID: <4978BE6D.6060000@v.loewis.de> >> I can't comment on that, because I don't know what your port does. >> Does it not produce a .dll containing the majority of Python? > > no, it contains the minimal necessary amount of python modules, > exactly like when python is built using cygwin. actualy, there's a > few modules that _have_ to be included. That's actually not my question. Do you have a DLL that contains all of Python/*.o and Objects/*.o? >> And is that not called python25.dll? 
> > no, it's called libpython2.5.dll.a, just like when python is built > using cygwin. the configure scripts, thanks to the cygwin build, > already end up copying that to libpython2.5.dll. > > _not_ python25.dll I'm giving up for now. I don't quite understand why the file gets first called libpython2.5.dll.a, and then renamed to libpython2.5.dll. Could it be perhaps that the .a file is an import library, and actually different from the .dll file? > p.s. there's nothing to stop you adding every single module and then > renaming the resultant blob to libpython25.dll - i just haven't been > given, or found, a good reason to do so yet. That doesn't really matter, I guess. An extension module build by your port will either fail to load into the regular Python (if libpython2.5.dll is not found), or load and then crash (because it uses a different copy of the Python runtime). Likewise vice versa. If this port ever takes off, we get another binary-incompatible Python version. I hope that users will understand that it is disjoint from the python.org version (as they seem to understand fine for the Cygwin build, which already picks up its extension modules also from a disjoint location, which helps to keep the two separate). Regards, Martin From martin at v.loewis.de Thu Jan 22 19:50:32 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 22 Jan 2009 19:50:32 +0100 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> <87tz7w85do.fsf@benfinney.id.au> <49741F83.9020804@v.loewis.de> Message-ID: <4978BFF8.2020403@v.loewis.de> > I do not know alternative OpenID implementation for Python, so the > only way I see to continue development is to fork the lib. PyPI reports 15 packages when you search for OpenID. Not sure whether any of these are any good. 
Regards, Martin From jnoller at gmail.com Thu Jan 22 20:06:33 2009 From: jnoller at gmail.com (Jesse Noller) Date: Thu, 22 Jan 2009 14:06:33 -0500 Subject: [Python-Dev] Unable to resolve svn.python.org: OS/X Message-ID: <4222a8490901221106k44dacca2se5454b4885305b7c@mail.gmail.com> The other day, Martin pointed out that my buildslave had gone off the reservation: on restarting it via the "buildbot start ~/buildarea" command - Martin noticed the slave had started throwing the DNS resolution error:

closing stdin
using PTY: True
svn: PROPFIND request failed on '/projects/python/branches/py3k'
svn: PROPFIND of '/projects/python/branches/py3k': Could not resolve hostname `svn.python.org': Temporary failure in name resolution (http://svn.python.org)
program finished with exit code 1

Apparently, this has bothered a few buildbots. Some quick googling popped up the fix: http://buildbot.net/trac/wiki/UsingLaunchd After dropping the attached plist file in /Library/LaunchDaemons and setting the permissions right (and then chown -R'ing the existing buildarea for the buildbot user to buildbot:daemon) - running "sudo launchctl load org.python.buildbot.slave.plist" brought the buildbot back up in working order. Hopefully this helps out. -jesse -------------- next part -------------- A non-text attachment was scrubbed... Name: org.python.buildbot.slave.plist Type: application/octet-stream Size: 1146 bytes Desc: not available URL: From lkcl at lkcl.net Thu Jan 22 20:17:36 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 19:17:36 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978BE6D.6060000@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 6:43 PM, "Martin v.
Löwis" wrote: >>> I can't comment on that, because I don't know what your port does. >>> Does it not produce a .dll containing the majority of Python? >> >> no, it contains the minimal necessary amount of python modules, >> exactly like when python is built using cygwin. actually, there's a >> few modules that _have_ to be included. > > That's actually not my question. ah right - sorry to not quite fully understand. > Do you have a DLL that contains > all of Python/*.o and Objects/*.o? yes. >>> And is that not called python25.dll? >> >> no, it's called libpython2.5.dll.a, just like when python is built >> using cygwin. the configure scripts, thanks to the cygwin build, >> already end up copying that to libpython2.5.dll. >> >> _not_ python25.dll > > I'm giving up for now. I don't quite understand why the file gets > first called libpython2.5.dll.a, and then renamed to libpython2.5.dll. > Could it be perhaps that the .a file is an import library, and actually > different from the .dll file? ... *thinks*... sorry, you're right. it's the way that dlltool is used on cygwin. dlltool on cygwin and gcc on cygwin create files with the following equivalence:

python25.lib on msvc  <--->  libpython2.5.dll.a on cygwin and mingw32
python2.5.dll on msvc <--->  libpython2.5.dll on cygwin and mingw32

>> p.s. there's nothing to stop you adding every single module and then >> renaming the resultant blob to libpython25.dll - i just haven't been >> given, or found, a good reason to do so yet. > > That doesn't really matter, I guess. An extension module build by your > port will either fail to load into the regular Python (if > libpython2.5.dll is not found), or load and then crash (because it uses > a different copy of the Python runtime). Likewise vice versa. > > If this port ever takes off, we get another binary-incompatible Python > version. there are at least three [mingw] already.
> I hope that users will understand that it is disjoint from > the python.org version (as they seem to understand fine for the > Cygwin build, which already picks up its extension modules also from > a disjoint location, which helps to keep the two separate). there are already no less than _four_ mingw ports of python, of varying degrees.

* http://jove.prohosting.com/iwave/ipython/pyMinGW.html
* http://sebsauvage.net/python/mingw.html
* http://python-mingw.donbennett.org/
* roumen's cross-compile+native port
* the port i'm working on - extending roumen's native mingw compile

one dates back to... python 2.2 i didn't include that one. another is python2.4. don's work is a cygwin cross-compile (note NOT a "compile of python for cygwin such that you need CYGWIN.DLL to run python"), so, using cygwin under win32 to cross-compile a native python.exe. smart, that. roumen then worked on that further, to make it compile under mingw / msys, not cygwin. and i'm working on taking windows _completely_ out of the loop, by getting python.exe to both compile _and_ run under wine, with the added benefit that if you _did_ happen to want to compile (or run) under either native windows or both, you can. l. From skip at pobox.com Thu Jan 22 20:23:40 2009 From: skip at pobox.com (skip at pobox.com) Date: Thu, 22 Jan 2009 13:23:40 -0600 Subject: [Python-Dev] Unable to resolve svn.python.org: OS/X In-Reply-To: <4222a8490901221106k44dacca2se5454b4885305b7c@mail.gmail.com> References: <4222a8490901221106k44dacca2se5454b4885305b7c@mail.gmail.com> Message-ID: <18808.51132.416898.304436@montanaro.dyndns.org> Jesse> The other day, Martin pointed out that my buildslave had gone off Jesse> the reservation: on restarting it via the "buildbot start Jesse> ~/buildarea" command - Martin noticed the slave had started Jesse> throwing the DNS resolution error: ... Thanks for this. This appears to be exactly what's gone wrong with the OS/X community buildbot I run.
I'll try to plop your solution in place when I get home and see how it works after that. Skip From lkcl at lkcl.net Thu Jan 22 20:39:40 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 19:39:40 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978BE6D.6060000@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: > version. I hope that users will understand that it is disjoint from > the python.org version (as they seem to understand fine for the > Cygwin build, which already picks up its extension modules also from > a disjoint location, which helps to keep the two separate). yes i made the default installation location (--prefix=) c:/python2.5 _not_ c:/python25 but obviously it _has_ been necessary to make the installation of modules into the exact same _style_ of location as the msvc build, because it has obviously also been necessary to use PC/getpathp.c not getpath.c so, .pyd modules will get installed in c:/python2.5/lib/site-packages/ and most importantly they'll get _looked_ for in there! for a while, they were being installed in c:/python2.5/lib/python2.5/site-packages which was a bit of a mess - that's the "unix" style of module locations. getpathp.c looks for "Lib/os.py" whilst getpath.c looks for "os.py" there's a whole _stack_ of knock-on effect little things like that. l. 
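[Editor's note: Luke's description of the two path-computation styles — PC/getpathp.c looking for the Lib/os.py landmark versus getpath.c looking for os.py under the prefix — can be observed from any running interpreter. A rough sketch using only standard introspection, nothing specific to Luke's port:]

```python
import sys

# sys.prefix is what the landmark search (Modules/getpath.c on Unix,
# PC/getpathp.c on Windows) ultimately computes
print("prefix:", sys.prefix)

# site-packages entries reveal which layout this build uses:
# <prefix>/lib/site-packages on Windows-style builds,
# <prefix>/lib/pythonX.Y/site-packages on Unix-style builds
for entry in sys.path:
    if "site-packages" in entry:
        print("module search path:", entry)
```

This is exactly the difference Luke hit: with the wrong getpath variant compiled in, .pyd modules were installed under the unix-style lib/python2.5/site-packages but looked up in the windows-style lib/site-packages.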
From martin at v.loewis.de Thu Jan 22 20:40:19 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 22 Jan 2009 20:40:19 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: <4978CBA3.8040905@v.loewis.de> > there are already no less than _four_ mingw ports of python, of varying degrees. > > * http://jove.prohosting.com/iwave/ipython/pyMinGW.html Ok, this one builds pythonXY, so it tries to be compatible with the official distribution (although it seems to link against MSVCRT.dll) > * http://sebsauvage.net/python/mingw.html That's *not* a port of Python to MingW. Instead, it is a set of instructions on how to build Python extension modules, using the official Python binaries, with mingw. I think this is obsolete now, as this now ships with Python itself. > * http://python-mingw.donbennett.org/ This doesn't seem to be distributing binaries. Regards, Martin From lkcl at lkcl.net Thu Jan 22 21:01:16 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 20:01:16 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978BE6D.6060000@v.loewis.de> References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: > That doesn't really matter, I guess. An extension module build by your > port will either fail to load into the regular Python (if > libpython2.5.dll is not found), or load and then crash (because it uses > a different copy of the Python runtime). Likewise vice versa. 
excellent, excellent that's _really_ good - and here's why: if it is _guaranteed_ to crash, regardless of what i do (because the copy of the python runtime is different), then it _doesn't_ matter what version of msvcrt i link the mingw-built python runtime with, does it? am i right? and if _that's_ the case, i can stop fricking about with msvcr80 :) which would be an absolute godsend because msvcr80 under wine is a frickin nightmare. the python regression tests pretty much hammer the daylights out of wine and it's squeaking in all sorts of weird places. adding in msvcr80, an undocumented transparent "blob" into the mix just makes getting this port fully operational all the more difficult. i'd like to avoid that :) l. From lkcl at lkcl.net Thu Jan 22 21:09:15 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 20:09:15 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978CBA3.8040905@v.loewis.de> References: <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978CBA3.8040905@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 7:40 PM, "Martin v. L?wis" wrote: >> there are already no less than _four_ mingw ports of python, of varying degrees. >> >> * http://jove.prohosting.com/iwave/ipython/pyMinGW.html > > Ok, this one builds pythonXY, so it tries to be compatible with the > official distribution (although it seems to link against MSVCRT.dll) > >> * http://sebsauvage.net/python/mingw.html > > That's *not* a port of Python to MingW. Instead, it is a set of > instructions on how to build Python extension modules, using the > official Python binaries, with mingw. oh? ah, sorry, i didn't check . >> * http://python-mingw.donbennett.org/ > > This doesn't seem to be distributing binaries. sourceforge page. 
i checked the statistics, there don't seem to be very many hits (sorry to hear that don, if you're reading this!) ok. there _is_ a sourceforge page,... yep, downloads here: http://sourceforge.net/project/showfiles.php?group_id=182839 ok , so that makes... 3? From martin at v.loewis.de Thu Jan 22 21:17:48 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 22 Jan 2009 21:17:48 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: <4978D46C.3030200@v.loewis.de> > am i right? You should test that. I'm not sure whether it will crash (in particular, it might not on load), but it *might* crash, or fail in strange ways (e.g. when it checks whether something is a string, and decides it is not, because it is looking at the other PyString_Type) > and if _that's_ the case, i can stop fricking about with msvcr80 :) If so, I think there is little point in submitting patches to the Python bug tracker. I'm -1 on supporting two different-but-similar builds on Windows. I could accept a different build *process*, but the outcome must be the same either way. 
(of course, msvcr80 is irrelevant, because Python had never been using that officially) Regards, Martin From martin at v.loewis.de Thu Jan 22 21:22:33 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 22 Jan 2009 21:22:33 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978CBA3.8040905@v.loewis.de> Message-ID: <4978D589.1040003@v.loewis.de> >> This doesn't seem to be distributing binaries. > > sourceforge page. i checked the statistics, there don't seem to be > very many hits (sorry to hear that don, if you're reading this!) ok. > there _is_ a sourceforge page,... yep, downloads here: > > http://sourceforge.net/project/showfiles.php?group_id=182839 Where *exactly* do you get binaries there? All I can find is patches-2.5.1v2.gz Regards, Martin From LambertDW at Corning.com Thu Jan 22 21:29:09 2009 From: LambertDW at Corning.com (Lambert, David W (S&T)) Date: Thu, 22 Jan 2009 15:29:09 -0500 Subject: [Python-Dev] [issue5029] Odd slicing behaviour In-Reply-To: References: Message-ID: <84B204FFB016BA4984227335D8257FBA8490FB@CVCV0XI05.na.corning.com> I cannot find that the documentation states "with negative step swap left with right". This is perhaps non-obvious. It is the words of the tutorial that caused issue 5029 author's confusion. 'a'[0::-1] != [] (is True, author expected False). The tutorial says: "One way to remember how slices work is to think of the indices as pointing between characters, with the left edge of the first character numbered 0. 
Then the right edge of the last character of a string of n characters has index n, for example:" From lkcl at lkcl.net Thu Jan 22 21:55:33 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 20:55:33 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978D46C.3030200@v.loewis.de> References: <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 8:17 PM, "Martin v. L?wis" wrote: >> am i right? > > You should test that. I'm not sure whether it will crash (in particular, > it might not on load), but it *might* crash, or fail in strange ways > (e.g. when it checks whether something is a string, and decides it is > not, because it is looking at the other PyString_Type) mrmmm... how? apps won't end up loading _both_ libpython2.5.dll _and_ python25.dll (or libpython2.N.dll and python2N.dll) will they? >> and if _that's_ the case, i can stop fricking about with msvcr80 :) > > If so, I think there is little point in submitting patches to the Python > bug tracker. I'm -1 on supporting two different-but-similar builds on > Windows. I could accept a different build *process*, but the outcome > must be the same either way. > > (of course, msvcr80 is irrelevant, because Python had never been using > that officially) oh? i saw the PCbuild8 and thought it was. oh that's even better - if python2.5 only officially support msvcrt whew. ok , i see - python2.6 uses msvcr90. i'll cross that bridge when i come to it. l. 
> Regards, > Martin > From lkcl at lkcl.net Thu Jan 22 21:56:30 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Thu, 22 Jan 2009 20:56:30 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978D589.1040003@v.loewis.de> References: <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978CBA3.8040905@v.loewis.de> <4978D589.1040003@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 8:22 PM, "Martin v. L?wis" wrote: >>> This doesn't seem to be distributing binaries. >> >> sourceforge page. i checked the statistics, there don't seem to be >> very many hits (sorry to hear that don, if you're reading this!) ok. >> there _is_ a sourceforge page,... yep, downloads here: >> >> http://sourceforge.net/project/showfiles.php?group_id=182839 > > Where *exactly* do you get binaries there? > > All I can find is patches-2.5.1v2.gz doh! From martin at v.loewis.de Thu Jan 22 22:09:51 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Thu, 22 Jan 2009 22:09:51 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> Message-ID: <4978E09F.4080507@v.loewis.de> > mrmmm... how? apps won't end up loading _both_ libpython2.5.dll _and_ > python25.dll (or libpython2.N.dll and python2N.dll) will they? Of course they will! python.exe (say, the official one) loads python25.dll. Then, an import is made of a ming-wine extension, say foo.pyd, which is linked with libpython2.5.dll, which then gets loaded. Voila, you have two interpreters in memory, with different type objects, memory heaps, and so on. 
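A Python-level analogy for the two-interpreters-in-one-process situation just described (purely illustrative — the real failure involves two C copies of PyString_Type, not Python classes): two identically named types created in two separate "runtimes" are still distinct objects, so a type check against the wrong copy fails.

```python
import types

# Simulate two separately loaded "runtimes", each defining its own
# copy of the same type (an illustrative stand-in for python25.dll
# and libpython2.5.dll each carrying their own PyString_Type).
runtime_a = types.ModuleType("python25")
runtime_b = types.ModuleType("libpython2_5")
exec("class FileLike: pass", runtime_a.__dict__)
exec("class FileLike: pass", runtime_b.__dict__)

obj = runtime_a.FileLike()

# The object is an instance of runtime A's type only; runtime B's
# identically named type is a different object, so the check fails --
# the Python-level analogue of "is it a string?" going wrong.
assert isinstance(obj, runtime_a.FileLike)
assert not isinstance(obj, runtime_b.FileLike)
```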
This was always the problem when an old extension module (say, from 2.4) was loaded into a new interpreter (say, 2.5), then you load both python25.dll and python24.dll, causing crashes. To prevent this issue, Python now checks whether the module is linked with an incorrect pythonxy.dll, but won't catch that libpython2.5.dll is also a VM. >> (of course, msvcr80 is irrelevant, because Python had never been using >> that officially) > > oh? i saw the PCbuild8 and thought it was. oh that's even better - > if python2.5 only officially support msvcrt whew. No, Python 2.5 is linked with msvcr71.dll. PCbuild8 was never officially used. > ok , i see - python2.6 uses msvcr90. Correct. Regards, Martin From techtonik at gmail.com Thu Jan 22 22:36:09 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Thu, 22 Jan 2009 23:36:09 +0200 Subject: [Python-Dev] Single Sign-On for *.python.org In-Reply-To: <4978BFF8.2020403@v.loewis.de> References: <76fd5acf0901181147h4773eb43l2746e2a15daf2dd0@mail.gmail.com> <87tz7w85do.fsf@benfinney.id.au> <49741F83.9020804@v.loewis.de> <4978BFF8.2020403@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 8:50 PM, "Martin v. L?wis" wrote: >> I do not know alternative OpenID implementation for Python, so the >> only way I see to continue development is to fork the lib. > > PyPI reports 15 packages when you search for OpenID. Not sure whether > any of these are any good. django-authopenid 0.9.6 7 Openid authentification application for Django - uses openidenabled http://code.google.com/p/django-authopenid/wiki/README plone.app.openid 1.1 7 Plone OpenID authentication support - the same openidenabled requirement http://pypi.python.org/pypi/plone.app.openid/1.1 plone.openid 1.2 7 OpenID authentication support for PAS - GPLd, openidenabled - http://svn.plone.org/svn/plone/plone.openid/trunk/setup.py python-openid 2.2.1 7 OpenID support for servers and consumers. 
- well, it is the openidenabled lib itself - http://openidenabled.com/python-openid/ silva.pas.openid 1.1 7 OpenID suport for Silva - openidenabled - https://svn.infrae.com/silva.pas.openid/trunk/setup.py TracOpenIDDelegate 1.0 7 Add OpenID delegation links to a Trac site. - merely delegates the auth to other site authopenid_middleware 0.1 6 OpenID authentication middleware for WSGI applications - yep, another openenabled wrapper TGOpenIDLogin 0.1 6 OpenID login controller for TurboGears - guess what? http://nxsy.org/code/tgopenidlogin/ TracAuthOpenId 0.1.9 6 OpenID plugin for Trac - the same middleware wrapper, openidenabled TestOpenID 0.2.2 5 A test consumer and server for Open ID. - demonstration of how to use the python-openid library, enabled gracie 0.2.8 3 Gracie - OpenID provider for local accounts - GPLed and enabled AuthKit 0.4.3 1 An authentication and authorization toolkit for WSGI applications and frameworks - you know the answer - http://authkit.org/trac/browser/AuthKit/trunk/setup.py plone.app.layout 1.1.7 1 Layout mechanisms for Plone - irrelevant Products.SilvaForum 0.3.1 1 Forum for Silva - irrelevant wsgiauth 0.1 - uses openidenabled No option. -- --anatoly t. From skippy.hammond at gmail.com Thu Jan 22 23:16:25 2009 From: skippy.hammond at gmail.com (Mark Hammond) Date: Fri, 23 Jan 2009 09:16:25 +1100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: <4978F039.2060008@gmail.com> On 23/01/2009 7:01 AM, Luke Kenneth Casson Leighton wrote: >> That doesn't really matter, I guess. 
An extension module build by your >> port will either fail to load into the regular Python (if >> libpython2.5.dll is not found), or load and then crash (because it uses >> a different copy of the Python runtime). Likewise vice versa. > > > excellent, excellent that's _really_ good - and here's why: > > if it is _guaranteed_ to crash, regardless of what i do (because the > copy of the python runtime is different), then it _doesn't_ matter > what version of msvcrt i link the mingw-built python runtime with, > does it? I'm very confused about this: It seems you started this work precisely so you can be compatible between MSVC built Python's and mingw builds - ie, this thread starts with you saying: > this is a fairly important issue for python development > interoperability - but now you seem to be saying it is actually a *feature* if they don't work together? > and if _that's_ the case, i can stop fricking about with msvcr80 :) If all you are doing is trying to get a version of Python working under Wine that isn't compatible with MSVC built binaries, I can't work out why you are fricking around with msvcr80 either! Cheers, Mark From bugtrack at roumenpetrov.info Fri Jan 23 00:13:40 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Fri, 23 Jan 2009 01:13:40 +0200 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: <50019.151.53.150.247.1232570935.squirrel@webmail1.pair.com> References: <50019.151.53.150.247.1232570935.squirrel@webmail1.pair.com> Message-ID: <4978FDA4.8050101@roumenpetrov.info> Cesare Di Mauro wrote: > Have you made some benchmarks like pystone? 
There is result from pystone test test run an old PC (NT 5.1): - 2.6(official build): 42194,6; 42302,4; 41990,8; 42658,0; 42660,6; 42770,1 average=42429,4 deviation=311,6 - 2.6.1(official build): 35612,1; 35778,8; 35666,7; 35697,9; 35514,9; 35654,0 average=35654,1 deviation=88,1 - trunk(my mingw based build): 35256,7; 35272,5; 35247,2; 35270,7; 35225,6; 35233,5 average=35251,0 deviation=19,2 There is problem with python performance between 2.6 and 2.6.1 ~ 19% :(. Also the test for GCC-mingw is not with same source base. Roumen From bugtrack at roumenpetrov.info Fri Jan 23 00:22:28 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Fri, 23 Jan 2009 01:22:28 +0200 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> Message-ID: <4978FFB4.1050802@roumenpetrov.info> Luke Kenneth Casson Leighton wrote: > On Wed, Jan 21, 2009 at 9:13 PM, "Martin v. L?wis" wrote: >>> ok, so - different from what's being generated by ./configure under >>> msys under wine or native win32 - what's being generated (libpython 2 >>> . 5 . a and libpython 2 . 5 . dll . a) is more akin to the cygwin >>> environment. >>> >>> therefore, there's absolutely no doubt that the two are completely different. >>> >>> and on that basis, would i be correct in thinking that you _can't_ go >>> linking or building modules or any python win32 code for one and have >>> a hope in hell of using it on the other, and that you would _have_ to >>> rebuild e.g. numpy for use with a mingw32-msys-built version of >>> python? >> I can't comment on that, because I don't know what your port does. >> Does it not produce a .dll containing the majority of Python? > > no, it contains the minimal necessary amount of python modules, > exactly like when python is built using cygwin. 
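As a side note, the "average" and "deviation" figures in the pystone results quoted above are the sample mean and sample standard deviation; they can be reproduced with the `statistics` module (shown in modern Python purely to document how the numbers were derived):

```python
import statistics

# Six pystone runs for the official 2.6 build, as posted above.
runs = [42194.6, 42302.4, 41990.8, 42658.0, 42660.6, 42770.1]

mean = statistics.mean(runs)
deviation = statistics.stdev(runs)   # sample standard deviation (n - 1)

assert round(mean, 1) == 42429.4     # matches the reported average
assert round(deviation, 1) == 311.6  # matches the reported deviation
```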
actualy, there's a > few modules that _have_ to be included. > > roumen discovered that you have to have these: > > _functools _functoolsmodule.c # Tools for working with functions > and callable objects > operator operator.c # operator.add() and similar goodies > _locale _localemodule.c # -lintl > _struct _struct.c > _subprocess ../PC/_subprocess.c > _winreg ../PC/_winreg.c Yes and this is issue in native build - setup.py fail to load :(. In cross-build where I use python from build system I could produce those as modules. > and i've discovered that when running under wine you have to also have these: > _weakref _weakref.c > > and also when running unde wine with msvcr80, so far, you have to also > have these: > collections collectionsmodule.c > thread threadmodule.c > > all the rest can be done as .pyd [SNIP] Actually I didn't spend time to find why MSVC build include so many modules as build-ins. Roumen From bugtrack at roumenpetrov.info Fri Jan 23 00:46:26 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Fri, 23 Jan 2009 01:46:26 +0200 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <9613db600901200511tebe4f6bya1049ee42acac0fc@mail.gmail.com> <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> Message-ID: <49790552.8050504@roumenpetrov.info> Luke Kenneth Casson Leighton wrote: >> version. I hope that users will understand that it is disjoint from >> the python.org version (as they seem to understand fine for the >> Cygwin build, which already picks up its extension modules also from >> a disjoint location, which helps to keep the two separate). 
> yes i made the default installation location (--prefix=) c:/python2.5 > _not_ c:/python25 but obviously it _has_ been necessary to make the installation of modules into the exact same _style_ of location as the msvc build, because it has obviously also been necessary to use PC/getpathp.c not getpath.c I'm thinking about possibility to avoid compatible paths, i.e. to drop windows specific PC/getpathp.c and to return back to posix getpath.c. The problem is that MSVC based build is not installed in tree compatible to the posix build. Now I think that GCC(mingw) build has to use same tree as other posix builds. Mixing posix build and install (makefile) with paths used by from MSVC build require additional changes either in makefile or in PC/getpathp.c. In the both case change is more the 100 lines and now I prefer mingw to use posix tree. This open another issue. The posix build install in fixed location. I think that with a small change in distutils (no more then 10-20 lines) we may overcome this. > so, .pyd modules will get installed in c:/python2.5/lib/site-packages/ and most importantly they'll get _looked_ for in there! for a while, they were being installed in c:/python2.5/lib/python2.5/site-packages which was a bit of a mess - that's the "unix" style of module locations. getpathp.c looks for "Lib/os.py" whilst getpath.c looks for "os.py" > there's a whole _stack_ of knock-on effect little things like that. :) The installation is the last issue. Roumen From rasky at develer.com Fri Jan 23 02:22:15 2009 From: rasky at develer.com (Giovanni Bajo) Date: Fri, 23 Jan 2009 01:22:15 +0000 (UTC) Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Mon, 19 Jan 2009 01:38:18 +0000, Gregory P.
Smith wrote: > I regularly point out in code reviews that the very convenient and > common idiom of open(name, 'w').write(data) doesn't guarantee when the > file will be closed; its up to the GC implementation details. Which, to me, sounds like "please, don't assume that bytes are 8-bits wide; this depends on implementation details of your CPU". CPython will always use reference counting and thus have a simple and clear GC criteria that can be exploited to simplify the code. I personally don't like defensive programming, nor coding for situations that will never arise . When I write CPython applications (thus, for instance, using C extensions), I don't see *any* point in trying to achieve any cross-python-implementation compatibility. I simply don't need it. Probably, library programmers have a different point of view. But I always object when I'm told that I should make my code longer and harder to read only because CPython might stop using reference counting (... when hell freezes over). Back to the topic, please let's keep things as they are now: the file descriptor is automatically closed as soon as the file object is destroyed. If you then feel "safer" always using with or try/finally, nobody is going to complain. And everybody will be happy :) -- Giovanni Bajo Develer S.r.l. http://www.develer.com From curt at hagenlocher.org Fri Jan 23 02:42:29 2009 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Thu, 22 Jan 2009 17:42:29 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Thu, Jan 22, 2009 at 5:22 PM, Giovanni Bajo wrote: > On Mon, 19 Jan 2009 01:38:18 +0000, Gregory P. Smith wrote: > >> I regularly point out in code reviews that the very convenient and >> common idiom of open(name, 'w').write(data) doesn't guarantee when the >> file will be closed; its up to the GC implementation details. 
> > Which, to me, sounds like "please, don't assume that bytes are 8-bits > wide; this depends on implementation details of your CPU". I think it's a lot more like "please, don't assume that there's a Global Interpreter Lock" -- something that the implementation shouldn't change without good reason and sufficient warning, but which isn't actually part of the language specification. And of course, such advice always carries more weight for code that's intended to be reusable than it does for code that has little chance of escaping the application it's in. -- Curt Hagenlocher curt at hagenlocher.org From steve at holdenweb.com Fri Jan 23 02:48:11 2009 From: steve at holdenweb.com (Steve Holden) Date: Thu, 22 Jan 2009 20:48:11 -0500 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: Giovanni Bajo wrote: > On Mon, 19 Jan 2009 01:38:18 +0000, Gregory P. Smith wrote: > >> I regularly point out in code reviews that the very convenient and >> common idiom of open(name, 'w').write(data) doesn't guarantee when the >> file will be closed; its up to the GC implementation details. > > Which, to me, sounds like "please, don't assume that bytes are 8-bits > wide; this depends on implementation details of your CPU". > Which it does, assuming you are using (for example) an ancient DECSystem-10. But you really can't assume in your writings about Python that all readers will be using CPython, so it seems like a reasonable point to make. > CPython will always use reference counting and thus have a simple and > clear GC criteria that can be exploited to simplify the code. I > personally don't like defensive programming, nor coding for situations > that will never arise . When I write CPython applications (thus, for > instance, using C extensions), I don't see *any* point in trying to > achieve any cross-python-implementation compatibility. I simply don't > need it. 
> Who gave you this guarantee of CPython's future behavior? Who knows which implementation of Python will be used to support your code and mine in five years? > Probably, library programmers have a different point of view. As they properly should. > But I > always object when I'm told that I should make my code longer and harder > to read only because CPython might stop using reference counting (... > when hell freezes over). > Ah, religious beliefs ... ;-) > Back to the topic, please let's keep things as they are now: the file > descriptor is automatically closed as soon as the file object is > destroyed. If you then feel "safer" always using with or try/finally, > nobody is going to complain. And everybody will be happy :) And what are the IronPython team, and the Jython team, supposed to do when they get around to implementing Python 3? Clearly (since both teams are already committed to implementing it) the more we can do to accommodate them the better it will be for cross-implementation compatibility. Or did I miss something? You are, of course, free to make whatever assumptions you like about the environment in which your code executes. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From guido at python.org Fri Jan 23 03:42:25 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 22 Jan 2009 18:42:25 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> Message-ID: On Thu, Jan 22, 2009 at 5:22 PM, Giovanni Bajo wrote: > CPython will always use reference counting and thus have a simple and > clear GC criteria that can be exploited to simplify the code. Believe this at your own peril. Once, CPython didn't have GC at all (apart from refcounting). Now it does. There are GC techniques that delay DECREF operations until it's more convenient. 
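The defensive idiom being recommended in this thread — making the close explicit rather than relying on refcounting — can be sketched generically (illustrative code, not from the stdlib):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")
data = "hello"

# Fragile idiom: the file is closed only whenever the file object
# happens to be reclaimed, which is a GC implementation detail.
open(path, "w").write(data)

# Robust idiom: "with" guarantees the flush and close happen at
# block exit, on every Python implementation.
with open(path, "w") as f:
    f.write(data)

with open(path) as f:
    assert f.read() == data
```

On CPython the first form usually works because refcounting reclaims the temporary immediately; the point of the thread is that only the second form is guaranteed by the language.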
If someone finds a way to exploit that technique to save 10% of execution time it would only be right to start using it. You *can* assume that objects that are no longer referenced will *eventually* be GC'ed, and that GC'ing a file means flushing its buffer and closing its file descriptor. You *cannot* assume that objects are *immediately* GC'ed. This is already not always true in CPython for many different reasons, like objects involved in cycles, weak references, or tracebacks saved with exceptions, or perhaps profiling/debugging hooks. If we found a good reason to introduce file objects into some kind of cycle or weak reference dict, I could see file objects getting delayed reclamation even without changes in GC implementation. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From billiejoex at gmail.com Fri Jan 23 03:13:27 2009 From: billiejoex at gmail.com (Giampaolo Rodola') Date: Fri, 23 Jan 2009 03:13:27 +0100 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? Message-ID: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> Hi, while attempting to port pyftpdlib [1] to Python 3 I have noticed that ftplib differs from the previous 2.x version in that it uses latin-1 to encode everything it's sent over the FTP command channel, but by reading RFC-2640 [2] it seems that UTF-8 should be preferred instead. I'm far from being an expert of encodings, plus the RFC is quite hard to understand, so sorry in advance if I have misunderstood the whole thing. Just wanted to put this up to people more qualified than me. 
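On the ftplib question above, the practical difference between the two encodings for a non-ASCII path name is easy to see directly (an illustrative sketch, independent of ftplib itself):

```python
# A file name with a non-ASCII character, as might travel over the
# FTP command channel.
name = "caf\u00e9.txt"

latin1_bytes = name.encode("latin-1")
utf8_bytes = name.encode("utf-8")

# latin-1 sends one byte per character; utf-8 uses a two-byte
# sequence for the accented character, as RFC 2640 expects.
assert latin1_bytes == b"caf\xe9.txt"
assert utf8_bytes == b"caf\xc3\xa9.txt"
assert utf8_bytes != latin1_bytes

# Only utf-8 can represent names outside the latin-1 repertoire:
try:
    "\u0442\u0435\u0441\u0442".encode("latin-1")   # Cyrillic
except UnicodeEncodeError:
    pass
else:
    raise AssertionError("expected latin-1 to fail")
"\u0442\u0435\u0441\u0442".encode("utf-8")          # fine
```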
[1] http://code.google.com/p/pyftpdlib [2] http://www.ietf.org/rfc/rfc2640.txt --- Giampaolo http://code.google.com/p/pyftpdlib From brett at python.org Fri Jan 23 06:15:55 2009 From: brett at python.org (Brett Cannon) Date: Thu, 22 Jan 2009 21:15:55 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST Message-ID: I have now converted PEP 374 (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST and checked it in. I am not going to paste it into an email as it is nearly 1500 lines in reST form. Because there are four authors handling corrections it is a little different than normal on who you contact to suggest changes. For each specific VCS there is a primary author as listed in the PEP in the intro to the Scenarios section. Email the proper author if you find an issue with a specific VCS. Otherwise email me for general PEP issues. Core developers can make changes on their own while taking into account they should let the author in charge of the PEP know if they make a big change. Since I will be the author making the final recommendation I am documenting my thought processes on my decision making for this whole thing as I go along in the PEP so as to be as transparent as possible. I am not even close to being done, so please don't email me about the section. And I would like to thank my co-authors for their time and effort thus far in filling in the PEP on behalf of their favorite DVCS. Everyone has put in a lot of time already with I am sure more time in the future. 
-Brett From lkcl at lkcl.net Fri Jan 23 07:36:02 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 23 Jan 2009 06:36:02 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978E09F.4080507@v.loewis.de> References: <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> Message-ID: On Thu, Jan 22, 2009 at 9:09 PM, "Martin v. L?wis" wrote: >> mrmmm... how? apps won't end up loading _both_ libpython2.5.dll _and_ >> python25.dll (or libpython2.N.dll and python2N.dll) will they? > > Of course they will! yeah, silly - i worked that out juust after i pressed "send". > python.exe (say, the official one) loads > python25.dll. Then, an import is made of a ming-wine extension, say > foo.pyd, which is linked with libpython2.5.dll, which then gets loaded. > Voila, you have two interpreters in memory, with different type objects, > memory heaps, and so on. ok, there's a solution for that - the gist of the solution is already implemented in things like Apache Runtime and Apache2 (modules), and is an extremely common standard technique implemented in OS kernels. the "old school" name for it is "vector tables". so you never let the .pyd (or so even) modules "link" to the libpythonN.N.dll, pythonNN.dll (or libpythonN.N.so even), you _pass in_ a pointer to everything it's ever going to need (in its init) function. > This was always the problem when an old extension module (say, from 2.4) > was loaded into a new interpreter (say, 2.5), then you load both > python25.dll and python24.dll, causing crashes. To prevent this issue, > Python now checks whether the module is linked with an incorrect > pythonxy.dll, but won't catch that libpython2.5.dll is also a VM. ok. i'll look at making libpython2.5.dll equal to python25.dll. 
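The "vector table" technique mentioned above can be illustrated at the Python level (names here are invented for illustration; the real mechanism would be a C struct of function pointers handed to the module's init function, as in Apache modules):

```python
# Illustrative sketch of the "vector table" idea: the extension never
# resolves runtime symbols itself; the host passes in a table of entry
# points at init time, so only one copy of the runtime is ever used.

def host_api():
    # Stand-ins for function pointers exported by the one true runtime.
    return {
        "version": lambda: "2.5",
        "intern": lambda s: s,
    }

class Extension:
    """A module that uses only the entry points it was given."""

    def init(self, api):
        self.api = api          # no import/link of the runtime itself

    def get_runtime_version(self):
        return self.api["version"]()

ext = Extension()
ext.init(host_api())
assert ext.get_runtime_version() == "2.5"
```

The design point is that the extension holds no link-time dependency on any particular runtime DLL, so the two-copies problem described earlier cannot arise.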
>>> (of course, msvcr80 is irrelevant, because Python had never been using >>> that officially) >> >> oh? i saw the PCbuild8 and thought it was. oh that's even better - >> if python2.5 only officially support msvcrt whew. > > No, Python 2.5 is linked with msvcr71.dll. ehn? i don't see that anywhere in any PC/* files - i do see that there's a dependency on .NET SDK 1.1 which uses msvcr71.dll still, 71 is good news - as long as it's not involving assemblies. 2.6 is a different matter, but, thinking about it, i have hopes that the better-tested-codepath of the 2.6 codebase would have better luck with 9.0 [than i had with 2.5 and 8.0] simply because... it's been tested already! [and 2.5 with 8.0 hadn't] l. From lkcl at lkcl.net Fri Jan 23 07:37:25 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 23 Jan 2009 06:37:25 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 6:36 AM, Luke Kenneth Casson Leighton wrote: > On Thu, Jan 22, 2009 at 9:09 PM, "Martin v. L?wis" wrote: >>> mrmmm... how? apps won't end up loading _both_ libpython2.5.dll _and_ >>> python25.dll (or libpython2.N.dll and python2N.dll) will they? >> >> Of course they will! > > yeah, silly - i worked that out juust after i pressed "send". ironically, i started out with the intent of going for python2N.dll interoperability. then i noticed that all the other mingw ports dropped the total-inclusion-of-all-modules .... because you _can_. 
i should have stuck with my original plan :) From lkcl at lkcl.net Fri Jan 23 08:22:16 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 23 Jan 2009 07:22:16 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <4978F039.2060008@gmail.com> References: <49763FC6.9090303@v.loewis.de> <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978F039.2060008@gmail.com> Message-ID: On Thu, Jan 22, 2009 at 10:16 PM, Mark Hammond wrote: > On 23/01/2009 7:01 AM, Luke Kenneth Casson Leighton wrote: >>> >>> That doesn't really matter, I guess. An extension module build by your >>> port will either fail to load into the regular Python (if >>> libpython2.5.dll is not found), or load and then crash (because it uses >>> a different copy of the Python runtime). Likewise vice versa. >> >> >> excellent, excellent that's _really_ good - and here's why: >> >> if it is _guaranteed_ to crash, regardless of what i do (because the >> copy of the python runtime is different), then it _doesn't_ matter >> what version of msvcrt i link the mingw-built python runtime with, >> does it? > > I'm very confused about this: It seems you started this work precisely so > you can be compatible between MSVC built Python's and mingw builds yeah that's where i _started_ - and after being on this for what nearly eight days straight i was hoping to get away with as little extra work as possible. > - ie, > this thread starts with you saying: > >> this is a fairly important issue for python development >> interoperability > > - but now you seem to be saying it is actually a *feature* if they don't > work together? 
*sigh* no, that was me getting confused >> and if _that's_ the case, i can stop fricking about with msvcr80 :) > > If all you are doing is trying to get a version of Python working under Wine > that isn't compatible with MSVC built binaries, I can't work out why you are > fricking around with msvcr80 either! ha ha :) existence of PCbuild8 is the main reason :) that and getting the wrong end of the stick. i'll get there eventually. From tjreedy at udel.edu Fri Jan 23 08:45:23 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 23 Jan 2009 02:45:23 -0500 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> Message-ID: Giampaolo Rodola' wrote: > Hi, > while attempting to port pyftpdlib [1] to Python 3 I have noticed that > ftplib differs from the previous 2.x version in that it uses latin-1 > to encode everything it's sent over the FTP command channel, but by > reading RFC-2640 [2] it seems that UTF-8 should be preferred instead. > I'm far from being an expert of encodings, plus the RFC is quite hard > to understand, so sorry in advance if I have misunderstood the whole > thing. I read it the same way. The whole point of the RFC is that UTF-8 rather than the very limited latin-1 is needed for true internationalization. > Just wanted to put this up to people more qualified than me. 
> > > [1] http://code.google.com/p/pyftpdlib > [2] http://www.ietf.org/rfc/rfc2640.txt > > > --- Giampaolo > http://code.google.com/p/pyftpdlib > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/python-python-dev%40m.gmane.org > From martin at v.loewis.de Fri Jan 23 09:06:45 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 23 Jan 2009 09:06:45 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> Message-ID: <49797A95.6050402@v.loewis.de> > ok, there's a solution for that - the gist of the solution is already > implemented in things like Apache Runtime and Apache2 (modules), and > is an extremely common standard technique implemented in OS kernels. > the "old school" name for it is "vector tables". We might be able to do that in Python 4; it would certainly require a PEP. >> No, Python 2.5 is linked with msvcr71.dll. > > ehn? i don't see that anywhere in any PC/* files - i do see that > there's a dependency on .NET SDK 1.1 which uses msvcr71.dll Take a look at PCbuild/pythoncore.vcproj. It says Version="7.10" This is how you know VS 2003 was used to build Python 2.5, which in turn links in msvcr71.dll. 
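Martin's rule of thumb — read the `Version` attribute out of the .vcproj to learn the toolchain, and hence the CRT it links — can be sketched as follows (the mapping below is general MSVC knowledge, not something read from the Python tree; the XML snippet is a trimmed stand-in for pythoncore.vcproj):

```python
import re

# .vcproj Version attribute -> (Visual Studio release, CRT DLL).
# This mapping is standard MSVC lore, stated here as an assumption.
TOOLCHAINS = {
    "7.10": ("Visual Studio 2003", "msvcr71.dll"),
    "8.00": ("Visual Studio 2005", "msvcr80.dll"),
    "9.00": ("Visual Studio 2008", "msvcr90.dll"),
}

# A trimmed-down stand-in for the first lines of pythoncore.vcproj.
vcproj = '<VisualStudioProject ProjectType="Visual C++" Version="7.10">'

version = re.search(r'Version="([\d.]+)"', vcproj).group(1)
studio, crt = TOOLCHAINS[version]

assert studio == "Visual Studio 2003"
assert crt == "msvcr71.dll"
```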
Regards, Martin From martin at v.loewis.de Fri Jan 23 09:08:10 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 23 Jan 2009 09:08:10 +0100 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> Message-ID: <49797AEA.8000307@v.loewis.de> > ironically, i started out with the intent of going for python2N.dll > interoperability. then i noticed that all the other mingw ports > dropped the total-inclusion-of-all-modules .... because you _can_. What modules are built in and what modules are external doesn't affect interoperability wrt. (third-party) extension modules. Regards, Martin From lkcl at lkcl.net Fri Jan 23 09:27:16 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Fri, 23 Jan 2009 08:27:16 +0000 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49797AEA.8000307@v.loewis.de> References: <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> <49797AEA.8000307@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 8:08 AM, "Martin v. L?wis" wrote: >> ironically, i started out with the intent of going for python2N.dll >> interoperability. then i noticed that all the other mingw ports >> dropped the total-inclusion-of-all-modules .... because you _can_. > > What modules are built in and what modules are external doesn't affect > interoperability wrt. (third-party) extension modules. ahhh .... sooo.... [said in a japanese kind of way that indicates understanding...] From martin at v.loewis.de Fri Jan 23 09:31:23 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 23 Jan 2009 09:31:23 +0100 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? 
In-Reply-To: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com>
References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com>
Message-ID: <4979805B.4030107@v.loewis.de>

Giampaolo Rodola' wrote:
> Hi,
> while attempting to port pyftpdlib [1] to Python 3 I have noticed that
> ftplib differs from the previous 2.x version in that it uses latin-1
> to encode everything it's sent over the FTP command channel, but by
> reading RFC-2640 [2] it seems that UTF-8 should be preferred instead.
> I'm far from being an expert of encodings, plus the RFC is quite hard
> to understand, so sorry in advance if I have misunderstood the whole
> thing.

I read it that a conforming client MUST issue a FEAT command, to
determine whether the server supports UTF8. One would have to go back
to the original FTP RFC, but it seems that, in the absence of server
UTF8 support, all path names must be 7-bit clean (which means that
ASCII should be the default encoding).

In any case, Brett changed the encoding to latin1 in r58378, maybe he
can comment.

Regards, Martin

From martin at v.loewis.de Fri Jan 23 09:39:17 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Fri, 23 Jan 2009 09:39:17 +0100
Subject: [Python-Dev] PEP 374 (DVCS) now in reST
In-Reply-To: 
References: 
Message-ID: <49798235.50509@v.loewis.de>

> And I would like to thank my co-authors for their time and effort thus
> far in filling in the PEP on behalf of their favorite DVCS. Everyone
> has put in a lot of time already with I am sure more time in the
> future.

So what will happen next? ISTM that the PEP is not complete, since it
doesn't specify a specific DVCS to migrate to (i.e. it wouldn't be
possible to implement the PEP as it stands).

Somebody will have to make a decision. Ultimately, Guido will have to
approve the PEP, but it might be that he refuses to make a choice of
specific DVCS.
Traditionally, it is the PEP author who makes all choices (considering
suggestions from the community, of course). So what DVCS do the PEP
authors recommend?

Regards, Martin

From eckhardt at satorlaser.com Fri Jan 23 10:19:39 2009
From: eckhardt at satorlaser.com (Ulrich Eckhardt)
Date: Fri, 23 Jan 2009 10:19:39 +0100
Subject: [Python-Dev] Update on MS Windows CE port
Message-ID: <200901231019.39111.eckhardt@satorlaser.com>

Hi!

Just a short update on my porting issues. I tried various things in the
holidays/vacation, filed several bug reports, got a few patches applied and
I think the thing is taking shape now. However, I couldn't work on it for
the past two weeks since I'm just too swamped at work here. I haven't given
up though!

What still needs work?

The main component that requires porting is the pythoncore project, but
that port isn't finished yet. I'm using the VS8.0 project files as a base
and adapted them in the following steps (note that this is preliminary):

0. Check out trunk (i.e. 2.x, not 3.x).
1. Create a new project configuration for your CE target and use the win32
   one as a base.
2. In the preprocessor settings, add these entries:
   UNICODE,_UNICODE,UNDER_CE,_WIN32_WCE=$(CEVER)
3. Try compiling. ;)

What issues are left?

There are two classes of errors you will encounter: those that are related
to CE itself (like missing errno and other parts of standard C) and those
that are more general, like assuming TCHAR=char. Those that assume that
TCHAR=char can also be found with the plain win32 variant by simply adding
UNICODE and _UNICODE to the preprocessor defines.

What are the future plans?

I'm trying to fix errors in pythoncore one by one and provide separate bug
reports with patches. Note that there are already several patches in the
BTS which would merit reviews if you have time, even if it's only a "patch
applies and doesn't cause regressions".
When I get pythoncore to compile (and be it by disabling a few builtin
modules), I will try to get the command-line interface to work. Note that
IIRC there are two command lines available under CE: one is a common
cmd.exe, the other is called AYGShell or some such. Both will/might require
different approaches.

Another problem is the fact that there is not just one target platform but
a potentially infinite number of them, since every CE device can have a
separate SDK which further might provide different features. This probably
requires autogenerated build files or some other autoconf-like procedure
that allows enabling or disabling certain features for builds.

so much for now

Uli
-- 
Sator Laser GmbH
From dirkjan at ochtman.nl Fri Jan 23 10:56:54 2009
From: dirkjan at ochtman.nl (Dirkjan Ochtman)
Date: Fri, 23 Jan 2009 09:56:54 +0000 (UTC)
Subject: [Python-Dev] PEP 374 (DVCS) now in reST
References: <49798235.50509@v.loewis.de>
Message-ID: 

Martin v. Löwis <martin at v.loewis.de> writes:
> Somebody will have to make a decision. Ultimately, Guido will have to
> approve the PEP, but it might be that he refuses to make a choice of
> specific DVCS. Traditionally, it is the PEP author who makes all
> choices (considering suggestions from the community, of course). So
> what DVCS do the PEP authors recommend?

Brett mentioned in his email that he wasn't ready to make a decision yet,
I think? I also think that the PEP could still use some modifications from
people who have more experience with the DVCSs. I'll probably send in some
suggestions later today.

Cheers,

Dirkjan

From rasky at develer.com Fri Jan 23 11:57:00 2009
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 23 Jan 2009 11:57:00 +0100
Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib
In-Reply-To: 
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com>
Message-ID: <1232708220.8900.23.camel@ozzu>

On Thu, 2009-01-22 at 18:42 -0800, Guido van Rossum wrote:
> On Thu, Jan 22, 2009 at 5:22 PM, Giovanni Bajo wrote:
> > CPython will always use reference counting and thus have a simple and
> > clear GC criteria that can be exploited to simplify the code.
>
> Believe this at your own peril.
>
> Once, CPython didn't have GC at all (apart from refcounting). Now it
> does. There are GC techniques that delay DECREF operations until it's
> more convenient. If someone finds a way to exploit that technique to
> save 10% of execution time it would only be right to start using it.
> You *can* assume that objects that are no longer referenced will
> *eventually* be GC'ed, and that GC'ing a file means flushing its
> buffer and closing its file descriptor. You *cannot* assume that
> objects are *immediately* GC'ed. This is already not always true in
> CPython for many different reasons, like objects involved in cycles,
> weak references,

I don't understand what you mean by weak references delaying object
deallocation. I'm probably missing something here...

> or tracebacks saved with exceptions, or perhaps
> profiling/debugging hooks. If we found a good reason to introduce file
> objects into some kind of cycle or weak reference dict, I could see
> file objects getting delayed reclamation even without changes in GC
> implementation.

That would break so much code that I doubt that, in practice, you can
slip it through within a release. Besides, being able to write simpler
code like "for L in open("foo.txt")" is per-se a good reason *not to*
put file objects in cycles; so you will probably need more than one good
reason to change this. OK, not *you* because of your BDFL powers ;), but
anyone else would surely have to face great opposition.

The fact that file objects are collected and closed immediately in all
reasonable use cases (and even in case of exceptions, that you mention,
things get even better with the new semantics of the except clause) is a
*good* property of Python. I regularly see people *happy* about it.

I miss to understand why many Python developers are so fierce in trying
to push the idea of cross-python compatibility (which is something that
does simply *not* exist in real world for applications) or to warn about
rainy days in the future when this would stop working in CPython. I
would strongly prefer that CPython would settle on (= document) using
reference counting and immediate destruction so that people can stop
making their everyday code more complex with no benefit.
You will be losing no more than an open door that nobody has entered in
20 years, and people would only benefit from it.
-- 
Giovanni Bajo
Develer S.r.l.
http://www.develer.com

From solipsis at pitrou.net Fri Jan 23 12:05:48 2009
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 23 Jan 2009 11:05:48 +0000 (UTC)
Subject: [Python-Dev] PEP 374 (DVCS) now in reST
References: 
Message-ID: 

Brett Cannon <brett at python.org> writes:
> 
> I have now converted PEP 374
> (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST
> and checked it in. I am not going to paste it into an email as it is
> nearly 1500 lines in reST form.

It seems the ">>" token is mangled into a French closing quote ("»")
inside code snippets.

Antoine.

From steve at holdenweb.com Fri Jan 23 13:05:59 2009
From: steve at holdenweb.com (Steve Holden)
Date: Fri, 23 Jan 2009 07:05:59 -0500
Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib
In-Reply-To: <1232708220.8900.23.camel@ozzu>
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu>
Message-ID: <4979B2A7.705@holdenweb.com>

Giovanni Bajo wrote:
> On Thu, 2009-01-22 at 18:42 -0800, Guido van Rossum wrote:
[...]
> I miss to understand why many Python developers are so fierce in trying
> to push the idea of cross-python compatibility (which is something that
> does simply *not* exist in real world for applications) or to warn about
> rainy days in the future when this would stop working in CPython. I
> would strongly prefer that CPython would settle on (= document) using
> reference counting and immediate destruction so that people can stop
> making their everyday code more complex with no benefit. You will be
> losing no more than an open door that nobody has entered in 20 years,
> and people would only benefit from it.

Probably because it's good practice to write for compatibility where
possible.
Cross-OS compatibility isn't possible in the general case either, but
it's still a good goal in the cases where it *is* possible.

Given that your sample code will generally work even for implementations
where garbage collection is used rather than reference counting I fail
to understand why you insist so hard that a more restrictive rule should
be implemented.

regards
 Steve
-- 
Steve Holden        +1 571 484 6266   +1 800 494 3119
Holden Web LLC              http://www.holdenweb.com/

From solipsis at pitrou.net Fri Jan 23 13:33:14 2009
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Fri, 23 Jan 2009 12:33:14 +0000 (UTC)
Subject: [Python-Dev] =?utf-8?q?=5F=5Fdel=5F=5F_and_tp=5Fdealloc_in_the_IO?= =?utf-8?q?_lib?=
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu>
Message-ID: 

Giovanni Bajo <rasky at develer.com> writes:
> 
> The fact that file objects are collected and closed immediately in all
> reasonable use cases (and even in case of exceptions, that you mention,
> things get even better with the new semantics of the except clause)

The new except clause removes any external references to the exception,
but there's still, AFAIR, the reference cycle through the traceback
object, which means the whole thing will still have to wait for a pass
of the cyclic garbage collector.

Regards

Antoine.

From fijall at gmail.com Fri Jan 23 14:28:43 2009
From: fijall at gmail.com (Maciej Fijalkowski)
Date: Fri, 23 Jan 2009 14:28:43 +0100
Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib
In-Reply-To: <1232708220.8900.23.camel@ozzu>
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu>
Message-ID: <693bc9ab0901230528h1165adag354310f03e065892@mail.gmail.com>

> That would break so much code that I doubt that, in practice, you can
> slip it through within a release. Besides, being able to write simpler
> code like "for L in open("foo.txt")" is per-se a good reason *not to*
> put file objects in cycles; so you will probably need more than one good
> reason to change this.
> OK, not *you* because of your BDFL powers ;), but
> anyone else would surely have to face great opposition.

note that this is about *writing* files, not reading. You would be
surprised how much software has already taken care to be cross-platform
(i.e. twisted, django, pylons, ...), not to rely on that and be able to
run on any other python implementation.

>
> The fact that file objects are collected and closed immediately in all
> reasonable use cases (and even in case of exceptions, that you mention,
> things get even better with the new semantics of the except clause) is a
> *good* property of Python. I regularly see people *happy* about it.
>
> I miss to understand why many Python developers are so fierce in trying
> to push the idea of cross-python compatibility (which is something that
> does simply *not* exist in real world for applications) or to warn about
> rainy days in the future when this would stop working in CPython. I
> would strongly prefer that CPython would settle on (= document) using
> reference counting and immediate destruction so that people can stop
> making their everyday code more complex with no benefit. You will be
> losing no more than an open door that nobody has entered in 20 years,
> and people would only benefit from it.

someone said at some point "no one will ever need more than 640K of RAM".
see what has happened.
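A short sketch of the cross-implementation point (file names invented for the example): relying on garbage collection to flush a written file is refcounting-specific behaviour, while an explicit close behaves the same on CPython, Jython, IronPython and PyPy:

```python
# Contrast between GC-dependent and deterministic file closing when writing.
import os
import tempfile

def write_relying_on_gc(path):
    f = open(path, "w")
    f.write("data")        # may sit in the buffer until f is collected

def write_deterministically(path):
    with open(path, "w") as f:
        f.write("data")    # flushed and closed when the block exits

path = os.path.join(tempfile.mkdtemp(), "out.txt")
write_deterministically(path)
with open(path) as f:
    print(f.read())        # prints: data
```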
Cheers, fijal From rdmurray at bitdance.com Fri Jan 23 15:57:02 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Fri, 23 Jan 2009 09:57:02 -0500 (EST) Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: <1232708220.8900.23.camel@ozzu> References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> Message-ID: On Fri, 23 Jan 2009 at 11:57, Giovanni Bajo wrote: > The fact that file objects are collected and closed immediately in all > reasonable use cases (and even in case of exceptions, that you mention, > things get even better with the new semantic of the except clause) is a > *good* property of Python. I regularly see people *happy* about it. I have never assumed that python closed my files before the end of the program unless I told it to do so, and have always coded accordingly. To do otherwise strikes me as bad coding. I don't believe I ever considered that such an assumption was even thinkable: closing open files when I'm done with them is part of my set of "good programming" habits developed over years of coding, habits that I apply in _any_ language in which I write code. (In fact, it took me a while before I was willing to let python take care of closing the files at program end...and even now I sometimes close files explicitly even in short programs.) Closing file objects is a specific instance of a more general programming rule that goes something like "clean up when you are done". I do in general trust python to clean up python data structures because it knows better than I do when "done" arrives; but when I know when "done" is, I do the cleanup. 
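The habit described above amounts to pairing every open with an explicit close; a minimal illustration (file name invented), releasing in a finally clause so the cleanup runs on success and on error alike:

```python
# Explicit cleanup: close in 'finally' so it happens even if a write raises,
# without depending on the interpreter's collection strategy.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "notes.txt")

f = open(path, "w")
try:
    f.write("first line\n")
finally:
    f.close()          # runs on the success and the error path alike

print(f.closed)        # -> True
```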
I love the 'with' statement :) --RDM From guido at python.org Fri Jan 23 16:27:32 2009 From: guido at python.org (Guido van Rossum) Date: Fri, 23 Jan 2009 07:27:32 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: <1232708220.8900.23.camel@ozzu> References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> Message-ID: On Fri, Jan 23, 2009 at 2:57 AM, Giovanni Bajo wrote: > I miss to understand why many Python developers are so fierce in trying > to push the idea of cross-python compatibility (which is something that > does simply *not* exist in real world for applications) or to warn about > rainy days in the future when this would stop working in CPython. I > would strongly prefer that CPython would settle on (= document) using > reference counting and immediate destruction so that people can stop > making their everyday code more complex with no benefit. You will be > losing no more than an open door that nobody has entered in 20 years, > and people would only benefit from it. You are so very wrong, my son. CPython's implementation strategy *will* evolve. Several groups are hard at work trying to make a faster Python interpreter, and when they succeed, everyone, including you, will want to use their version (or their version may simply *be* the new CPython). Plus, you'd be surprised how many people might want to port existing code (and that may include code that uses C extensions, many of which are also ported) to Jython or IronPython. Your mistake sounds more like "nobody will ever want to run this on Windows, so I won't have to use the os.path module" and other short-sighted ideas. While you may be right in the short run, it may also be the death penalty for a piece of genius code that is found to be unportable. And, by the way, "for line in open(filename): ..." will continue to work. It may just not close the file right away. 
This is a forgivable sin in a small program that opens a few files only.
It only becomes a problem when this is itself inside a loop that loops
over many filenames -- you could run out of file descriptors.

-- 
--Guido van Rossum (home page: http://www.python.org/~guido/)

From barry at python.org Fri Jan 23 16:30:59 2009
From: barry at python.org (Barry Warsaw)
Date: Fri, 23 Jan 2009 10:30:59 -0500
Subject: [Python-Dev] PEP 374 (DVCS) now in reST
In-Reply-To: <49798235.50509@v.loewis.de>
References: <49798235.50509@v.loewis.de>
Message-ID: 

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Brett, thanks for putting this PEP together!

On Jan 23, 2009, at 3:39 AM, Martin v. Löwis wrote:
> Somebody will have to make a decision. Ultimately, Guido will have to
> approve the PEP, but it might be that he refuses to make a choice of
> specific DVCS. Traditionally, it is the PEP author who makes all
> choices (considering suggestions from the community, of course). So
> what DVCS do the PEP authors recommend?

Brett, perhaps you should publish a tentative schedule. Milestones I'd
like to see include

* Initial impressions section completed
* Call for rebuttals
* Second draft of impressions
* (perhaps multiple) Recommendations to Guido and python-dev
* Experimental live branches deployed for testing
* Final recommendation
* Final decision

My understanding is that a final decision will /not/ be made by PyCon.
Barry

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Darwin)

iQCVAwUBSXnis3EjvBPtnXfVAQJTqwQAimSA/JDzYN132npgazIag3fwOk36DAJl
vvYMXOfWqvfl9DO/8cPF9YFOwF7YdHM8k4wUTmfLYhE8JfefODjrdHkL5pdclwDg
Pbb2tjMfl0vBOPeaaPnJ5NKIJMwyRWkVhFMyNU5KmBmVRPJXpAQM23IOORX2dAaI
tLONmrvx8K4=
=CF18
-----END PGP SIGNATURE-----

From rasky at develer.com Fri Jan 23 16:48:21 2009
From: rasky at develer.com (Giovanni Bajo)
Date: Fri, 23 Jan 2009 16:48:21 +0100
Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib
In-Reply-To: 
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu>
Message-ID: <4979E6C5.3000502@develer.com>

On 1/23/2009 4:27 PM, Guido van Rossum wrote:
> On Fri, Jan 23, 2009 at 2:57 AM, Giovanni Bajo wrote:
>> I miss to understand why many Python developers are so fierce in trying
>> to push the idea of cross-python compatibility (which is something that
>> does simply *not* exist in real world for applications) or to warn about
>> rainy days in the future when this would stop working in CPython. I
>> would strongly prefer that CPython would settle on (= document) using
>> reference counting and immediate destruction so that people can stop
>> making their everyday code more complex with no benefit. You will be
>> losing no more than an open door that nobody has entered in 20 years,
>> and people would only benefit from it.
>
> You are so very wrong, my son. CPython's implementation strategy
> *will* evolve. Several groups are hard at work trying to make a faster
> Python interpreter, and when they succeed, everyone, including you,
> will want to use their version (or their version may simply *be* the
> new CPython).
And everybody (including you, IIRC) has always agreed that it would be very very hard to eradicate reference counting from CPython and all the existing extensions; so hard that it is probably more convenient to start a different interpreter implementation. > Plus, you'd be surprised how many people might want to port existing > code (and that may include code that uses C extensions, many of which > are also ported) to Jython or IronPython. I would love to be surprised, in fact! Since I fail to see any business strategy behind such a porting, I don't see this happening very often in the business industry (and even less in the open source community, where there are also political issues between those versions of Python, I would say). I also never met someone that wanted to make a cross-interpreter Python application, nor read about someone that has a reasonable use case for wanting to do that, besides geek fun; which is why I came to this conclusion, though I obviously have access only to a little information compared to other people in here. On the other hand, I see people using IronPython so that they can access to the .NET framework (which can't be ported to other Python versions), or using Java so that they can blend to existing Java programs. And those are perfectly good use cases for the existence of such interpreters, but not for the merits of writing cross-interpreter portable code. I would be pleased if you (or others) could point me to real-world use cases of this cross-interpreter portability. > Your mistake sounds more like "nobody will ever want to run this on > Windows, so I won't have to use the os.path module" and other > short-sighted ideas. While you may be right in the short run, it may > also be the death penalty for a piece of genius code that is found to > be unportable. And in fact, I don't defensively code cross-OS portable code. 
Python is great in that *most* of what you naturally write is portable; which means that, the day that you need it, it's a breeze to port your code (assuming that you have also picked up the correct extensions, which I always try to do). But that does not mean that I have today to waste time on something that I don't need. > And, by the way, "for line in open(filename): ..." will continue to > work. It may just not close the file right away. This is a forgivable > sin in a small program that opens a few files only. It only becomes a > program when this is itself inside a loop that loops over many > filenames -- you could run out of file descriptors. I do understand this, but I'm sure you realize that there other similars example where the side effects are far worse. Maybe you don't care since you simply decided to declare that code *wrong*. But I'm unsure the community will kindly accept such a deep change in behaviour. Especially within the existing 2.x or 3.x release lines. -- Giovanni Bajo Develer S.r.l. http://www.develer.com From solipsis at pitrou.net Fri Jan 23 16:58:03 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 23 Jan 2009 15:58:03 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?=5F=5Fdel=5F=5F_and_tp=5Fdealloc_in_the_IO?= =?utf-8?q?_lib?= References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> Message-ID: Guido van Rossum python.org> writes: > > And, by the way, "for line in open(filename): ..." will continue to > work. It may just not close the file right away. This is a forgivable > sin in a small program that opens a few files only. It only becomes a > program when this is itself inside a loop that loops over many > filenames -- you could run out of file descriptors. It can also be a problem under Windows where, IIRC, you can't delete a file which is still opened somewhere (even for reading). 
From steve at holdenweb.com Fri Jan 23 17:36:16 2009
From: steve at holdenweb.com (Steve Holden)
Date: Fri, 23 Jan 2009 11:36:16 -0500
Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib
In-Reply-To: <4979E6C5.3000502@develer.com>
References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> <4979E6C5.3000502@develer.com>
Message-ID: 

Giovanni Bajo wrote:
> On 1/23/2009 4:27 PM, Guido van Rossum wrote:
>> On Fri, Jan 23, 2009 at 2:57 AM, Giovanni Bajo wrote:
>>> I miss to understand why many Python developers are so fierce in trying
>>> to push the idea of cross-python compatibility (which is something that
>>> does simply *not* exist in real world for applications) or to warn about
>>> rainy days in the future when this would stop working in CPython. I
>>> would strongly prefer that CPython would settle on (= document) using
>>> reference counting and immediate destruction so that people can stop
>>> making their everyday code more complex with no benefit. You will be
>>> losing no more than an open door that nobody has entered in 20 years,
>>> and people would only benefit from it.
>>
>> You are so very wrong, my son. CPython's implementation strategy
>> *will* evolve. Several groups are hard at work trying to make a faster
>> Python interpreter, and when they succeed, everyone, including you,
>> will want to use their version (or their version may simply *be* the
>> new CPython).
>
> I'm basing my assumption on 19 years of history of CPython. Please,
> correct me if I'm wrong, but the only thing that changed is that the
> cyclic-GC was added so that loops are now collected, but nothing changed
> with respect to reference counting. And everybody (including you, IIRC)
> has always agreed that it would be very very hard to eradicate reference
> counting from CPython and all the existing extensions; so hard that it
> is probably more convenient to start a different interpreter
> implementation.
> And everybody knows that the past is an infallible guide to the future ... not. >> Plus, you'd be surprised how many people might want to port existing >> code (and that may include code that uses C extensions, many of which >> are also ported) to Jython or IronPython. > > I would love to be surprised, in fact! > Well both Django and Twisted are on the list of "interested in porting". > Since I fail to see any business strategy behind such a porting, I don't > see this happening very often in the business industry (and even less in > the open source community, where there are also political issues between > those versions of Python, I would say). I also never met someone that > wanted to make a cross-interpreter Python application, nor read about > someone that has a reasonable use case for wanting to do that, besides > geek fun; which is why I came to this conclusion, though I obviously > have access only to a little information compared to other people in here. > Exactly. > On the other hand, I see people using IronPython so that they can access > to the .NET framework (which can't be ported to other Python versions), > or using Java so that they can blend to existing Java programs. And > those are perfectly good use cases for the existence of such > interpreters, but not for the merits of writing cross-interpreter > portable code. > Well, they use IronPython so they can access the .NET framework FROM PYTHON. Would you willfully isolate those users from access to other Python code? > I would be pleased if you (or others) could point me to real-world use > cases of this cross-interpreter portability. > Any code that *can* be run on multiple interpreters is a candidate for use in multiple environments. I don't understand this wish to cripple your code. >> Your mistake sounds more like "nobody will ever want to run this on >> Windows, so I won't have to use the os.path module" and other >> short-sighted ideas. 
>> While you may be right in the short run, it may
>> also be the death penalty for a piece of genius code that is found to
>> be unportable.
> And in fact, I don't defensively code cross-OS portable code. Python is
> great in that *most* of what you naturally write is portable; which
> means that, the day that you need it, it's a breeze to port your code
> (assuming that you have also picked up the correct extensions, which I
> always try to do). But that does not mean that I have today to waste
> time on something that I don't need.

Perhaps not, but just because you assess it to be a YAGNI in your own
case that doesn't mean the same arguments should be applied to (for
example) standard library modules.

>> And, by the way, "for line in open(filename): ..." will continue to
>> work. It may just not close the file right away. This is a forgivable
>> sin in a small program that opens a few files only. It only becomes a
>> problem when this is itself inside a loop that loops over many
>> filenames -- you could run out of file descriptors.
> I do understand this, but I'm sure you realize that there are other
> similar examples where the side effects are far worse. Maybe you don't
> care since you simply decided to declare that code *wrong*. But I'm
> unsure the community will kindly accept such a deep change in behaviour.
> Especially within the existing 2.x or 3.x release lines.

The community is, I suspect, more in line with Guido's approach than
with yours.
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From status at bugs.python.org Fri Jan 23 18:06:42 2009 From: status at bugs.python.org (Python tracker) Date: Fri, 23 Jan 2009 18:06:42 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20090123170642.0A04578552@psf.upfronthosting.co.za> ACTIVITY SUMMARY (01/16/09 - 01/23/09) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 2322 open (+42) / 14538 closed (+32) / 16860 total (+74) Open issues with patches: 790 Average duration of open issues: 701 days. Median duration of open issues: 6 days. Open Issues Breakdown open 2299 (+42) pending 23 ( +0) Issues Created Or Reopened (79) _______________________________ Unpickling is really slow 01/18/09 http://bugs.python.org/issue3873 reopened pitrou zipfile and winzip 01/17/09 http://bugs.python.org/issue3997 reopened pitrou patch, needs review Thread Safe Py_AddPendingCall 01/16/09 CLOSED http://bugs.python.org/issue4293 reopened marketdickinson patch, patch python3.0 -u: unbuffered stdout 01/19/09 http://bugs.python.org/issue4705 reopened pitrou patch decoding functions in _codecs module accept str arguments 01/22/09 CLOSED http://bugs.python.org/issue4874 reopened pitrou patch UTF-16 stream codec barfs on valid input 01/16/09 CLOSED http://bugs.python.org/issue4964 created gvanrossum Can doc index of html version be separately scrollable? 
01/16/09 http://bugs.python.org/issue4965 created tjreedy Improving Lib Doc Sequence Types Section 01/16/09 http://bugs.python.org/issue4966 created tjreedy Bugs in _ssl object read() when a buffer is specified 01/17/09 http://bugs.python.org/issue4967 created pitrou patch Clarify inspect.is method docs 01/17/09 http://bugs.python.org/issue4968 created tjreedy mimetypes on Windows should read MIME database from registry (w/ 01/17/09 http://bugs.python.org/issue4969 created gagenellina patch test_os causes delayed failure on x86 gentoo buildbot: Unknown s 01/17/09 http://bugs.python.org/issue4970 created marketdickinson Incorrect title case 01/17/09 http://bugs.python.org/issue4971 created mrabarnett let's equip ftplib.FTP with __enter__ and __exit__ 01/17/09 http://bugs.python.org/issue4972 created tarek patch calendar formatyearpage returns bytes, not str 01/17/09 http://bugs.python.org/issue4973 created mnewman Redundant mention of lists and tuples at start of Sequence Types 01/17/09 CLOSED http://bugs.python.org/issue4974 created MLModel 3.0 base64 doc examples lack bytes 'b' indicator 01/17/09 CLOSED http://bugs.python.org/issue4975 created tjreedy Documentation of the set intersection and difference operators i 01/17/09 CLOSED http://bugs.python.org/issue4976 created MLModel test_maxint64 fails on 32-bit systems due to assumption that 64- 01/20/09 http://bugs.python.org/issue4977 reopened pitrou allow unicode keyword args 01/17/09 CLOSED http://bugs.python.org/issue4978 created benjamin.peterson patch random.uniform can return its upper limit 01/18/09 CLOSED http://bugs.python.org/issue4979 created hailperin patch Documentation for "y#" does not mention PY_SSIZE_T_CLEAN 01/18/09 CLOSED http://bugs.python.org/issue4980 created pitrou Incorrect statement regarding mutable sequences in datamodel Ref 01/18/09 CLOSED http://bugs.python.org/issue4981 created MLModel Running python 3 as Non-admin User requests the Runtime to termi 01/18/09 
http://bugs.python.org/issue4982 created yhvh Spurious reference to "byte sequences" in Library stdtypes seque 01/18/09 CLOSED http://bugs.python.org/issue4983 created MLModel Inconsistent count of sequence types in Library stdtypes documen 01/18/09 CLOSED http://bugs.python.org/issue4984 created MLModel Idle hangs when given a nonexistent filename. 01/18/09 http://bugs.python.org/issue4985 created della1rv patch Augmented Assignment / Operations Confusion in Documentation 01/18/09 CLOSED http://bugs.python.org/issue4986 created acooke update distutils.README 01/18/09 http://bugs.python.org/issue4987 created tarek A link to "What???s New in Python 2.0" on "The Python Tutorial" 01/18/09 CLOSED http://bugs.python.org/issue4988 created akitada 'calendar' module is crummy and should be removed 01/18/09 CLOSED http://bugs.python.org/issue4989 created dotz test_codeccallbacks.CodecCallbackTest.test_badhandlerresult is n 01/18/09 CLOSED http://bugs.python.org/issue4990 created fijal patch os.fdopen doesn't raise on invalid file descriptors 01/18/09 CLOSED http://bugs.python.org/issue4991 created benjamin.peterson patch yield's documentation not updated 01/18/09 http://bugs.python.org/issue4992 created En-Cu-Kou Typo in importlib 01/19/09 CLOSED http://bugs.python.org/issue4993 created pitrou subprocess (Popen) doesn't works properly 01/19/09 CLOSED http://bugs.python.org/issue4994 created simonbcn sqlite3 module gives SQL logic error only in transactions 01/19/09 CLOSED http://bugs.python.org/issue4995 created alsadi io.TextIOWrapper calls buffer.read1() 01/19/09 http://bugs.python.org/issue4996 created kawai xml.sax.saxutils.XMLGenerator should write to io.RawIOBase. 
01/19/09 http://bugs.python.org/issue4997 created kawai __slots__ on Fraction is useless 01/19/09 http://bugs.python.org/issue4998 created Somelauw patch multiprocessing.Queue does not order objects 01/19/09 http://bugs.python.org/issue4999 created ndfred multiprocessing - Pool.map() slower about 5 times than map() on 01/19/09 CLOSED http://bugs.python.org/issue5000 created 0x666 Remove assertion-based checking in multiprocessing 01/19/09 http://bugs.python.org/issue5001 created jnoller multiprocessing/pipe_connection.c compiler warning (conn_poll) 01/19/09 CLOSED http://bugs.python.org/issue5002 created ocean-city patch Error while installing Python-3 01/19/09 CLOSED http://bugs.python.org/issue5003 created vani socket.getfqdn() doesn't cope properly with purely DNS-based set 01/19/09 http://bugs.python.org/issue5004 created dfranke 3.0 sqlite doc: most examples refer to pysqlite2, use 2.x syntax 01/19/09 http://bugs.python.org/issue5005 created tjreedy Duplicate UTF-16 BOM if a file is open in append mode 01/19/09 http://bugs.python.org/issue5006 created haypo urllib2 HTTPS connection failure (BadStatusLine Exception) 01/19/09 http://bugs.python.org/issue5007 created ak Wrong tell() result for a file opened in append mode 01/20/09 CLOSED http://bugs.python.org/issue5008 created haypo patch multiprocessing: failure in manager._debug_info() 01/20/09 CLOSED http://bugs.python.org/issue5009 created amaury.forgeotdarc patch, needs review repoened "test_maxint64 fails on 32-bit systems due to assumptio 01/20/09 CLOSED http://bugs.python.org/issue5010 created lkcl issue4428 - make io.BufferedWriter observe max_buffer_size limit 01/20/09 CLOSED http://bugs.python.org/issue5011 created pitrou How to test python 3 was installed properly in my directory? 
01/20/09 CLOSED http://bugs.python.org/issue5012 created vani Problems with delay parm of logging.handlers.RotatingFileHandler 01/20/09 CLOSED http://bugs.python.org/issue5013 created pycurry Kernel Protection Failure 01/20/09 CLOSED http://bugs.python.org/issue5014 reopened pitrou The Py_SetPythonHome C API function is undocumented 01/20/09 http://bugs.python.org/issue5015 created Kylotan FileIO.seekable() can return False 01/20/09 http://bugs.python.org/issue5016 created pitrou patch import suds help( suds ) fails 01/21/09 http://bugs.python.org/issue5017 created pjb Overly general claim about sequence unpacking in tutorial 01/21/09 http://bugs.python.org/issue5018 created MLModel Specifying common controls DLL in manifest 01/21/09 http://bugs.python.org/issue5019 created robind Regex Expression Error 01/21/09 CLOSED http://bugs.python.org/issue5020 created sleepyfish doctest.testfile should set __name__, can't use namedtuple 01/21/09 http://bugs.python.org/issue5021 created tlynn doctest should allow running tests with "python -m doctest" 01/21/09 CLOSED http://bugs.python.org/issue5022 created tlynn Segfault in datetime.time.strftime("%z") 01/21/09 http://bugs.python.org/issue5023 created eswald sndhdr.whathdr returns -1 for WAV file frame count 01/21/09 http://bugs.python.org/issue5024 created rpyle test_kqueue failure on OS X 01/21/09 http://bugs.python.org/issue5025 created marketdickinson patch [reopening] native build of python win32 using msys under both w 01/21/09 http://bugs.python.org/issue5026 created lkcl xml namespace not understood by xml.sax.saxutils.XMLGenerator 01/21/09 http://bugs.python.org/issue5027 created roug tokenize.generate_tokens doesn't always return logical line 01/22/09 http://bugs.python.org/issue5028 created duncf Odd slicing behaviour 01/22/09 CLOSED http://bugs.python.org/issue5029 created dgaletic Typo in class tkinter.filedialog.Directory prevents compilation 01/22/09 CLOSED http://bugs.python.org/issue5030 created ringhome 
Thread.daemon docs 01/22/09 http://bugs.python.org/issue5031 created steve21 patch, needs review itertools.count step 01/22/09 http://bugs.python.org/issue5032 created steve21 setup.py crashes if sqlite version contains 'beta' 01/22/09 http://bugs.python.org/issue5033 created blahblahwhat itertools.fixlen 01/22/09 http://bugs.python.org/issue5034 created lehmannro patch Compilation --without-threads fails 01/22/09 http://bugs.python.org/issue5035 created pitrou patch xml.parsers.expat make a dictionary which keys are broken if buf 01/23/09 http://bugs.python.org/issue5036 created tksmashiw unexpected unicode behavior for proxy objects 01/23/09 http://bugs.python.org/issue5037 created Taldor Issues Now Closed (74) ______________________ Adds the .compact() method to bsddb db.DB objects 444 days http://bugs.python.org/issue1391 jcea patch complex constructor doesn't accept string with nan and inf 341 days http://bugs.python.org/issue2121 marketdickinson patch embed manifest in windows extensions 288 days http://bugs.python.org/issue2563 mhammond patch, patch Console UnicodeDecodeError s once more 282 days http://bugs.python.org/issue2614 benjamin.peterson Multiprocessing hangs when multiprocessing.Pool methods are call 203 days http://bugs.python.org/issue3272 jnoller multiprocessing and meaningful errors 203 days http://bugs.python.org/issue3273 jnoller multiprocessing.connection doesn't import AuthenticationError, w 202 days http://bugs.python.org/issue3283 jnoller patch _multiprocessing.Connection() doesn't check handle 195 days http://bugs.python.org/issue3321 jnoller patch, needs review use of cPickle in multiprocessing 193 days http://bugs.python.org/issue3325 jnoller patch thread_nt.c update 7 days http://bugs.python.org/issue3582 krisvale patch, patch importing from UNC roots doesn't work 151 days http://bugs.python.org/issue3677 ocean-city patch _multiprocessing build fails when configure --without-threads 137 days http://bugs.python.org/issue3807 haypo 
patch Problem with SocketIO for closing the socket 132 days http://bugs.python.org/issue3826 gregory.p.smith patch IDLE: checksyntax() doesn't support Unicode? 109 days http://bugs.python.org/issue4008 loewis patch, needs review _multiprocessing doesn't build on macosx 10.3 104 days http://bugs.python.org/issue4065 jnoller Py_FatalError cleanup patch 101 days http://bugs.python.org/issue4077 amaury.forgeotdarc patch, easy Thread Safe Py_AddPendingCall 4 days http://bugs.python.org/issue4293 marketdickinson patch, patch incorrect and inconsistent logging in multiprocessing 68 days http://bugs.python.org/issue4301 jnoller patch, needs review Fix performance issues in xmlrpclib 63 days http://bugs.python.org/issue4336 krisvale patch, patch, easy AssertionError in Doc/includes/mp_benchmarks.py 51 days http://bugs.python.org/issue4449 jnoller patch, needs review Optimize new io library 43 days http://bugs.python.org/issue4561 pitrou patch Documentation for multiprocessing - Pool.apply() 45 days http://bugs.python.org/issue4593 jnoller easy Issue with RotatingFileHandler logging handler on Windows 25 days http://bugs.python.org/issue4749 vsajip threding, bsddb and double free or corruption (fasttop) 23 days http://bugs.python.org/issue4774 jcea idle 3.1a1 utf8 16 days http://bugs.python.org/issue4815 loewis patch, needs review md_state is not released 13 days http://bugs.python.org/issue4838 pitrou patch int('3L') still valid in Python 3.0 15 days http://bugs.python.org/issue4842 marketdickinson patch syntax: no unpacking in augassign 12 days http://bugs.python.org/issue4857 georg.brandl decoding functions in _codecs module accept str arguments 0 days http://bugs.python.org/issue4874 pitrou patch Allow buffering for HTTPResponse 12 days http://bugs.python.org/issue4879 krisvale patch, patch ast.literal_eval does not properly handled complex numbers 7 days http://bugs.python.org/issue4907 tjreedy patch, patch trunc(x) erroneously documented as built-in 7 days 
http://bugs.python.org/issue4914 georg.brandl set.add and set.discard are not conformant to collections.Mutabl 7 days http://bugs.python.org/issue4922 georg.brandl 26backport time.strftime documentation needs update 6 days http://bugs.python.org/issue4923 georg.brandl Inconsistent unicode repr for fileobject 3 days http://bugs.python.org/issue4927 loewis patch, patch, needs review smptlib.py can raise socket.error 6 days http://bugs.python.org/issue4929 krisvale patch, patch, needs review Small optimization in type construction 4 days http://bugs.python.org/issue4930 amaury.forgeotdarc patch native build of python win32 using msys under wine. 6 days http://bugs.python.org/issue4954 lkcl os.ftruncate raises IOError instead of OSError 4 days http://bugs.python.org/issue4957 krisvale patch, patch inspect.formatargspec fails for keyword args without defaults, a 1 days http://bugs.python.org/issue4959 benjamin.peterson patch UTF-16 stream codec barfs on valid input 0 days http://bugs.python.org/issue4964 gvanrossum Redundant mention of lists and tuples at start of Sequence Types 1 days http://bugs.python.org/issue4974 georg.brandl 3.0 base64 doc examples lack bytes 'b' indicator 1 days http://bugs.python.org/issue4975 georg.brandl Documentation of the set intersection and difference operators i 1 days http://bugs.python.org/issue4976 georg.brandl allow unicode keyword args 3 days http://bugs.python.org/issue4978 benjamin.peterson patch random.uniform can return its upper limit 1 days http://bugs.python.org/issue4979 georg.brandl patch Documentation for "y#" does not mention PY_SSIZE_T_CLEAN 1 days http://bugs.python.org/issue4980 georg.brandl Incorrect statement regarding mutable sequences in datamodel Ref 0 days http://bugs.python.org/issue4981 benjamin.peterson Spurious reference to "byte sequences" in Library stdtypes seque 0 days http://bugs.python.org/issue4983 georg.brandl Inconsistent count of sequence types in Library stdtypes documen 0 days 
http://bugs.python.org/issue4984 georg.brandl Augmented Assignment / Operations Confusion in Documentation 0 days http://bugs.python.org/issue4986 acooke A link to "What???s New in Python 2.0" on "The Python Tutorial" 0 days http://bugs.python.org/issue4988 georg.brandl 'calendar' module is crummy and should be removed 0 days http://bugs.python.org/issue4989 skip.montanaro test_codeccallbacks.CodecCallbackTest.test_badhandlerresult is n 0 days http://bugs.python.org/issue4990 benjamin.peterson patch os.fdopen doesn't raise on invalid file descriptors 1 days http://bugs.python.org/issue4991 benjamin.peterson patch Typo in importlib 0 days http://bugs.python.org/issue4993 brett.cannon subprocess (Popen) doesn't works properly 0 days http://bugs.python.org/issue4994 LambertDW sqlite3 module gives SQL logic error only in transactions 2 days http://bugs.python.org/issue4995 alsadi multiprocessing - Pool.map() slower about 5 times than map() on 0 days http://bugs.python.org/issue5000 0x666 multiprocessing/pipe_connection.c compiler warning (conn_poll) 0 days http://bugs.python.org/issue5002 jnoller patch Error while installing Python-3 0 days http://bugs.python.org/issue5003 pitrou Wrong tell() result for a file opened in append mode 1 days http://bugs.python.org/issue5008 pitrou patch multiprocessing: failure in manager._debug_info() 1 days http://bugs.python.org/issue5009 jnoller patch, needs review repoened "test_maxint64 fails on 32-bit systems due to assumptio 0 days http://bugs.python.org/issue5010 pitrou issue4428 - make io.BufferedWriter observe max_buffer_size limit 0 days http://bugs.python.org/issue5011 gregory.p.smith How to test python 3 was installed properly in my directory? 
0 days http://bugs.python.org/issue5012 benjamin.peterson Problems with delay parm of logging.handlers.RotatingFileHandler 0 days http://bugs.python.org/issue5013 vsajip Kernel Protection Failure 0 days http://bugs.python.org/issue5014 loewis Regex Expression Error 0 days http://bugs.python.org/issue5020 amaury.forgeotdarc doctest should allow running tests with "python -m doctest" 1 days http://bugs.python.org/issue5022 benjamin.peterson Odd slicing behaviour 0 days http://bugs.python.org/issue5029 georg.brandl Typo in class tkinter.filedialog.Directory prevents compilation 0 days http://bugs.python.org/issue5030 benjamin.peterson Only "Overwrite" mode possible with curses.textpad.Textbox 1557 days http://bugs.python.org/issue1048820 dashing patch inspect.isclass() fails with custom __getattr__ 1306 days http://bugs.python.org/issue1225107 benjamin.peterson Top Issues Most Discussed (10) ______________________________ 22 IDLE won't start in custom directory. 129 days open http://bugs.python.org/issue3881 15 __slots__ on Fraction is useless 4 days open http://bugs.python.org/issue4998 14 native build of python win32 using msys under wine. 
6 days closed http://bugs.python.org/issue4954 13 PyUnicode_FromWideChar incorrect for characters outside the BMP 54 days open http://bugs.python.org/issue4474 11 let's equip ftplib.FTP with __enter__ and __exit__ 6 days open http://bugs.python.org/issue4972 11 _multiprocessing.Connection() doesn't check handle 195 days closed http://bugs.python.org/issue3321 10 Compilation --without-threads fails 1 days open http://bugs.python.org/issue5035 9 test_maxint64 fails on 32-bit systems due to assumption that 64 3 days open http://bugs.python.org/issue4977 8 Wrong tell() result for a file opened in append mode 1 days closed http://bugs.python.org/issue5008 8 multiprocessing.Queue does not order objects 4 days open http://bugs.python.org/issue4999 From bcannon at gmail.com Fri Jan 23 19:15:18 2009 From: bcannon at gmail.com (Brett Cannon) Date: Fri, 23 Jan 2009 10:15:18 -0800 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <4979805B.4030107@v.loewis.de> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 00:31, "Martin v. Löwis" wrote: > Giampaolo Rodola' wrote: >> Hi, >> while attempting to port pyftpdlib [1] to Python 3 I have noticed that >> ftplib differs from the previous 2.x version in that it uses latin-1 >> to encode everything that's sent over the FTP command channel, but by >> reading RFC-2640 [2] it seems that UTF-8 should be preferred instead. >> I'm far from being an expert of encodings, plus the RFC is quite hard >> to understand, so sorry in advance if I have misunderstood the whole >> thing. > > I read it that a conforming client MUST issue a FEAT command, to > determine whether the server supports UTF8. One would have to go > back to the original FTP RFC, but it seems that, in the absence > of server UTF8 support, all path names must be 7-bit clean (which > means that ASCII should be the default encoding).
> > In any case, Brett changed the encoding to latin1 in r58378, maybe > he can comment. > If I remember correctly something along Martin's comment about 7-bit clean is needed, but some servers don't follow the standard, so I swapped it to Latin-1. But that was so long ago I don't remember where I gleaned the details from in the RFC. If I misread the RFC and it is UTF-8 then all the better to make more of the world move over to Unicode. -Brett From brett at python.org Fri Jan 23 19:24:26 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 10:24:26 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <49798235.50509@v.loewis.de> References: <49798235.50509@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 00:39, "Martin v. Löwis" wrote: >> And I would like to thank my co-authors for their time and effort thus >> far in filling in the PEP on behalf of their favorite DVCS. Everyone >> has put in a lot of time already with I am sure more time in the >> future. > > So what will happen next? ISTM that the PEP is not complete, since it > doesn't specify a specific DVCS to migrate to (i.e. it wouldn't be > possible to implement the PEP as it stands). > Right, it isn't done. But I know from experience people care A LOT about their favorite DVCS, so I wanted to get the PEP up now so people can start sending in feedback to the proper authors instead of having it all flood in after I have done my exploratory work on all of them and be accused of having based my decision on faulty information. > Somebody will have to make a decision. That falls on my shoulders. > Ultimately, Guido will have to > approve the PEP, but it might be that he refuses to make a choice of > specific DVCS. Guido is staying out of this one. > Traditionally, it is the PEP author who makes all > choices (considering suggestions from the community, of course). So > what DVCS do the PEP authors recommend?
Umm:: import random print(random.choice('svn', 'bzr', 'hg', 'git')) -Brett From brett at python.org Fri Jan 23 19:25:38 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 10:25:38 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: Message-ID: On Fri, Jan 23, 2009 at 03:05, Antoine Pitrou wrote: > Brett Cannon python.org> writes: >> >> I have now converted PEP 374 >> (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST >> and checked it in. I am not going to paste it into an email as it is >> nearly 1500 lines in reST form. > > It seems the ">>" token is mangled into a French closing quote ("»") inside code > snippets. Yeah, the Google Docs export didn't come out well in pure text. I tried to catch as many of those characters as I could but apparently I missed a couple. -Brett From brett at python.org Fri Jan 23 19:40:02 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 10:40:02 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 07:30, Barry Warsaw wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > Brett, thanks for putting this PEP together! > Yep. Just make sure I don't do something like this for a LONG time. Apparently I didn't learn my lesson after the issue tracker migration. > On Jan 23, 2009, at 3:39 AM, Martin v. Löwis wrote: >> Somebody will have to make a decision. Ultimately, Guido will have to >> approve the PEP, but it might be that he refuses to make a choice of >> specific DVCS. Traditionally, it is the PEP author who makes all >> choices (considering suggestions from the community, of course). So >> what DVCS do the PEP authors recommend? > > Brett, perhaps you should publish a tentative schedule. Milestones I'd like > to see include
Milestones I'd like > to see include > > * Initial impressions section completed > * Call for rebuttals > * Second draft of impressions > * (perhaps multiple) Recommendations to Guido and python-dev > * Experimental live branches deployed for testing > * Final recommendation > * Final decision > I think you just published the schedule, Barry. =) Seriously, though, I have several other commitments that take precedent over this PEP so I don't feel comfortable locking down any dates. > My understanding is that a final decision will /not/ be made by Pycon. It's doubtful, but would be nice by then. What I will do is eliminate a contender by/at PyCon. -Brett From phd at phd.pp.ru Fri Jan 23 19:55:01 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Fri, 23 Jan 2009 21:55:01 +0300 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> Message-ID: <20090123185501.GA24843@phd.pp.ru> On Fri, Jan 23, 2009 at 10:15:18AM -0800, Brett Cannon wrote: > If I remember correctly something along Martin's comment about 7-bit > clean is needed, but some servers don't follow the standard, so I > swapped it to Latin-1. But that was so long ago I don't remember where > I gleaned the details from in the RFC. If I misread the RFC and it is > UTF-8 then all the better to make more of the world move over to > Unicode. I don't know any server that encode file names in any way. All servers I know just pass filenames as is, 8-bit; some that implement stricter RFC-959 mangle chr(255), but that's all. One can encounter a server that stores files in a number of different encodings. Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. 
From tony at pagedna.com Fri Jan 23 20:22:55 2009 From: tony at pagedna.com (Tony Lownds) Date: Fri, 23 Jan 2009 11:22:55 -0800 Subject: [Python-Dev] Fwd: [ALERT] cbank: OldHashChecker cannot check password, uid is None References: <20090123191626.9F69BA7EB8@siteserver3.localdomain> Message-ID: Rob and/or Tim, Can you track this down? Thanks -Tony Begin forwarded message: > From: support+dev at pagedna.com > Date: January 23, 2009 11:16:26 AM PST > To: problems at pagedna.com > Subject: [ALERT] cbank: OldHashChecker cannot check password, uid is > None > > OldHashChecker cannot check password, uid is None > Script: /inet/www/clients/cbank/index.cgi > Machine: siteserver3 > Directory: /mnt/sitenfs2_clients/cbank > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Jan 23 20:35:01 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Fri, 23 Jan 2009 14:35:01 -0500 (EST) Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <20090123185501.GA24843@phd.pp.ru> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> Message-ID: On Fri, 23 Jan 2009 at 21:55, Oleg Broytmann wrote: > On Fri, Jan 23, 2009 at 10:15:18AM -0800, Brett Cannon wrote: >> If I remember correctly something along Martin's comment about 7-bit >> clean is needed, but some servers don't follow the standard, so I >> swapped it to Latin-1. But that was so long ago I don't remember where >> I gleaned the details from in the RFC. If I misread the RFC and it is >> UTF-8 then all the better to make more of the world move over to >> Unicode. > > I don't know any server that encode file names in any way. All servers > I know just pass filenames as is, 8-bit; some that implement stricter > RFC-959 mangle chr(255), but that's all. One can encounter a server that > stores files in a number of different encodings. 
Given that a Unix OS can't know what encoding a filename is in (*), I can't see that one could practically implement a Unix FTP server in any other way. --RDM (*) remember the earlier extensive discussion of this when the issue of listdir() ignoring non-encodable filenames came up? From brett at python.org Fri Jan 23 20:39:21 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 11:39:21 -0800 Subject: [Python-Dev] Fwd: [ALERT] cbank: OldHashChecker cannot check password, uid is None In-Reply-To: References: <20090123191626.9F69BA7EB8@siteserver3.localdomain> Message-ID: Uh, Tony, I think you sent this to the wrong email address. =) On Fri, Jan 23, 2009 at 11:22, Tony Lownds wrote: > Rob and/or Tim, > Can you track this down? > Thanks > -Tony > > Begin forwarded message: > > From: support+dev at pagedna.com > Date: January 23, 2009 11:16:26 AM PST > To: problems at pagedna.com > Subject: [ALERT] cbank: OldHashChecker cannot check password, uid is None > OldHashChecker cannot check password, uid is None > Script: /inet/www/clients/cbank/index.cgi > Machine: siteserver3 > Directory: /mnt/sitenfs2_clients/cbank > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/brett%40python.org > > From tony at pagedna.com Fri Jan 23 20:09:08 2009 From: tony at pagedna.com (Tony Lownds) Date: Fri, 23 Jan 2009 11:09:08 -0800 Subject: [Python-Dev] Fwd: [ALERT] cityoftoronto: problem saving to products table References: <20090123190001.9462DA7EB8@siteserver3.localdomain> Message-ID: Hi Paulus, Have you fixed these alerts before? We need a script to fix these alerts.
Thanks -Tony Begin forwarded message: > From: support+dev at pagedna.com > Date: January 23, 2009 11:00:01 AM PST > To: problems at pagedna.com > Subject: [ALERT] cityoftoronto: problem saving to products table > > problem saving to products table > > Traceback (most recent call last): > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line > 325, in save_to_products_table > self.save_to_products_table2(kinds) > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line > 344, in save_to_products_table2 > if dbkind.update(site, kind): > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line > 490, in update > v = fn(kind) > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line > 474, in _buy_price > return coerce_qtyspec(kind.buy_qtyspec).price_for_qty(1) > File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line > 778, in price_for_qty > q, p, l = _qtypricesplit(q) > File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line > 671, in _qtypricesplit > raise ValueError, "bad tuple to qtyonly: %s" % t > ValueError: bad tuple to qtyonly: [(1000, '1000 - $67.42'), (6000, > '6000 - $356.52'), (10000, '10000 - $486.2')] > > Menu user: rolando > Script: /inet/www/clients/cityoftoronto/index.cgi > Machine: siteserver3 > Directory: /mnt/sitenfs2_clients/cityoftoronto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phd at phd.pp.ru Fri Jan 23 20:45:10 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Fri, 23 Jan 2009 22:45:10 +0300 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? 
In-Reply-To: References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> Message-ID: <20090123194510.GB24843@phd.pp.ru> On Fri, Jan 23, 2009 at 02:35:01PM -0500, rdmurray at bitdance.com wrote: > Given that a Unix OS can't know what encoding a filename is in (*), > I can't see that one could practically implement a Unix FTP server > in any other way. Can you believe there is a well-known program that solved the issue?! It is Apache web server! One can configure different directories and different file types to have different encodings. I often do that. One (sysadmin) can even allow users to do the configuration themselves via .htaccess local files. I am pretty sure FTP servers could borrow some ideas from Apache in this area. But they don't. Pity. :( Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From guido at python.org Fri Jan 23 20:54:54 2009 From: guido at python.org (Guido van Rossum) Date: Fri, 23 Jan 2009 11:54:54 -0800 Subject: [Python-Dev] Fwd: [ALERT] cityoftoronto: problem saving to products table In-Reply-To: References: <20090123190001.9462DA7EB8@siteserver3.localdomain> Message-ID: Tony, you are posting internal communications to python-dev!!! On Fri, Jan 23, 2009 at 11:09 AM, Tony Lownds wrote: > Hi Paulus, > Have you fixed these aerts before? We need a script to fix these alerts. 
> Thanks > -Tony > > Begin forwarded message: > > From: support+dev at pagedna.com > Date: January 23, 2009 11:00:01 AM PST > To: problems at pagedna.com > Subject: [ALERT] cityoftoronto: problem saving to products table > problem saving to products table > > Traceback (most recent call last): > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line 325, in > save_to_products_table > self.save_to_products_table2(kinds) > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line 344, in > save_to_products_table2 > if dbkind.update(site, kind): > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line 490, in > update > v = fn(kind) > File "/opt/printra/lib/python/printra/sossite/KindManager.py", line 474, in > _buy_price > return coerce_qtyspec(kind.buy_qtyspec).price_for_qty(1) > File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line 778, in > price_for_qty > q, p, l = _qtypricesplit(q) > File "/opt/printra/lib/python/printra/sossite/Sos2qtyspec.py", line 671, in > _qtypricesplit > raise ValueError, "bad tuple to qtyonly: %s" % t > ValueError: bad tuple to qtyonly: [(1000, '1000 - $67.42'), (6000, '6000 - > $356.52'), (10000, '10000 - $486.2')] > > Menu user: rolando > Script: /inet/www/clients/cityoftoronto/index.cgi > Machine: siteserver3 > Directory: /mnt/sitenfs2_clients/cityoftoronto > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From a.badger at gmail.com Fri Jan 23 20:54:38 2009 From: a.badger at gmail.com (Toshio Kuratomi) Date: Fri, 23 Jan 2009 11:54:38 -0800 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? 
In-Reply-To: <20090123194510.GB24843@phd.pp.ru> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <20090123194510.GB24843@phd.pp.ru> Message-ID: <497A207E.1040708@gmail.com> Oleg Broytmann wrote: > On Fri, Jan 23, 2009 at 02:35:01PM -0500, rdmurray at bitdance.com wrote: >> Given that a Unix OS can't know what encoding a filename is in (*), >> I can't see that one could practically implement a Unix FTP server >> in any other way. > > Can you believe there is a well-known program that solved the issue?! It > is Apache web server! One can configure different directories and different > file types to have different encodings. I often do that. One (sysadmin) can > even allow users to do the configuration themselves via .htaccess local files. > I am pretty sure FTP servers could borrow some ideas from Apache in this > area. But they don't. Pity. :( > AFAIK, Apache is in the same boat as ftp servers. You're thinking of the encoding inside of the files. The problem is with the file names themselves. -Toshio -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 197 bytes Desc: OpenPGP digital signature URL: From martin at v.loewis.de Fri Jan 23 21:06:47 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 23 Jan 2009 21:06:47 +0100 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> Message-ID: <497A2357.7060205@v.loewis.de> > import random > print(random.choice('svn', 'bzr', 'hg', 'git')) Nice! So it's bzr, as my machine just told me (after adding the square brackets). 
Regards, Martin From steven.bethard at gmail.com Fri Jan 23 21:11:21 2009 From: steven.bethard at gmail.com (Steven Bethard) Date: Fri, 23 Jan 2009 12:11:21 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497A2357.7060205@v.loewis.de> References: <49798235.50509@v.loewis.de> <497A2357.7060205@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 12:06 PM, "Martin v. L?wis" wrote: >> import random >> print(random.choice('svn', 'bzr', 'hg', 'git')) > > Nice! So it's bzr, as my machine just told me (after adding > the square brackets). Wow, that decision was a lot easier than I thought it would be. ;-) Steve -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity. --- Bucky Katt, Get Fuzzy From brett at python.org Fri Jan 23 21:13:52 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 12:13:52 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> <497A2357.7060205@v.loewis.de> Message-ID: On Fri, Jan 23, 2009 at 12:11, Steven Bethard wrote: > On Fri, Jan 23, 2009 at 12:06 PM, "Martin v. L?wis" wrote: >>> import random >>> print(random.choice('svn', 'bzr', 'hg', 'git')) >> >> Nice! So it's bzr, as my machine just told me (after adding >> the square brackets). > > Wow, that decision was a lot easier than I thought it would be. ;-) But my machine just told me svn, which is even easier as that means we don't need to change anything. =) -Brett From martin at v.loewis.de Fri Jan 23 21:16:49 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 23 Jan 2009 21:16:49 +0100 Subject: [Python-Dev] Update on MS Windows CE port In-Reply-To: <200901231019.39111.eckhardt@satorlaser.com> References: <200901231019.39111.eckhardt@satorlaser.com> Message-ID: <497A25B1.6070906@v.loewis.de> > Just a short update on my porting issues. 
I tried various things in the > holidays/vacation, filed several bug reports, got a few patches applied and I > think the thing is taking shape now. However, I couldn't work on it for the > past two weeks since I'm just too swamped at work here. I haven't given up > though! Thanks for the update. I can only say: Ready when you are. I'm doubtful that anybody will continue to work on this (at least not in the same direction that you took). So whenever you choose to return, it's likely still where you left off (with some of the code having rotted). Regards, Martin From martin at v.loewis.de Fri Jan 23 21:18:38 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 23 Jan 2009 21:18:38 +0100 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> Message-ID: <497A261E.5040004@v.loewis.de> > Brett mentioned in his email that he wasn't ready to make a decision yet, I > think? I also think that the PEP could still use some modifications from people > who have more experience with the DVCSs. My question really was whether it is already ready to be put up for discussion before the wider audience (and if so, what it is that should be discussed). It seems that it's not the case, so I just sit back and wait until it's ready. Regards, Martin From martin at v.loewis.de Fri Jan 23 21:23:06 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 23 Jan 2009 21:23:06 +0100 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> Message-ID: <497A272A.4020102@v.loewis.de> > Given that a Unix OS can't know what encoding a filename is in (*), > I can't see that one could practically implement a Unix FTP server > in any other way. However, an ftp server is different.
It might start up with an empty folder, and receive *all* of its files through upload. Then it can certainly know what encoding the file names have on disk. It *could* also support operation on pre-existing files, e.g. by providing a configuration directive telling the encoding of the file names, or by ignoring all file names that are not encoded in UTF-8. Regards, Martin From phd at phd.pp.ru Fri Jan 23 21:24:57 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Fri, 23 Jan 2009 23:24:57 +0300 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <497A207E.1040708@gmail.com> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <20090123194510.GB24843@phd.pp.ru> <497A207E.1040708@gmail.com> Message-ID: <20090123202457.GC24843@phd.pp.ru> On Fri, Jan 23, 2009 at 11:54:38AM -0800, Toshio Kuratomi wrote: > AFAIK, Apache is in the same boat as ftp servers. You're thinking of > the encoding inside of the files. The problem is with the file names > themselves. Mostly yes. But Apache is so powerful I can do (and really did) a lot of tricks - I can change LC_CTYPE with mod_env, I can map URLs to the filesystem using mod_rewrite/ScriptAlias... FTP servers don't need to be that smart, but I'd like them to be more configurable WRT filename encoding. But well, they are not, so the only thing to discuss is what to do with ftplib and pyftpd. My not so humble opinion is - either use bytes instead of strings or use latin-1 because it is the straightforward encoding that preserves all 8 bits. Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. 
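To make Oleg's byte-preservation point concrete, here is a minimal Python 3 sketch: latin-1 assigns a code point to every byte value, so arbitrary filename bytes round-trip unchanged, while UTF-8 rejects many byte sequences outright.

```python
# latin-1 maps each byte value 0x00-0xFF to exactly one code point,
# so raw filename bytes always survive a decode/encode round trip.
raw = bytes(range(256))
assert raw.decode("latin-1").encode("latin-1") == raw

# UTF-8 is stricter: many byte sequences are simply invalid.
try:
    b"\xff\xfe".decode("utf-8")
except UnicodeDecodeError:
    pass  # 0xff is not a valid UTF-8 start byte
```

This is why a bytes-or-latin-1 policy can pass any server response through unchanged, whereas a UTF-8 policy must be prepared to reject or replace malformed names.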
From tjreedy at udel.edu Fri Jan 23 21:52:56 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 23 Jan 2009 15:52:56 -0500 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: <4979E6C5.3000502@develer.com> References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> <4979E6C5.3000502@develer.com> Message-ID: Giovanni Bajo wrote: > >> You are so very wrong, my son. CPython's implementation strategy >> *will* evolve. Several groups are hard at work trying to make a faster >> Python interpreter, and when they succeed, everyone, including you, >> will want to use their version (or their version may simply *be* the >> new CPython). > > I'm basing my assumption on 19 years of history of CPython. Please, > correct me if I'm wrong, but the only thing that changed is that the > cyclic-GC was added so that loops are now collected, but nothing change > with respect to cyclic collection. And everybody (including you, IIRC) > has always agreed that it would be very very hard to eradicate reference > counting from CPython and all the existing extensions; so hard that it > is probably more convenient to start a different interpreter > implementation. Your history is true, but sometimes history changes faster than most expect. [As in the last 13 months of USA.] A year ago, I might have agreed with you, but in the last 6 months, there has been more visible ferment in the area of dynamic language implementations than I remember seeing in the past decade. When Guido says "CPython's implementation strategy *will* evolve" [emphasis his], I believe him. So this is just the wrong time to ask that it be frozen ;-). While a strong argument can be made that the remaining 2.x versions should not be changed, they do not apply to 3.x. New code and ported old code should use 'with' wherever quick closing needs to be guaranteed. 
The 3.0 manual clearly states "An implementation is allowed to postpone garbage collection or omit it altogether" OK, it also goes on to say "(Implementation note: the current implementation uses a reference-counting scheme with (optional) delayed detection of cyclically linked garbage,...)" I think the first part should at least be amended to 'the current CPython implementation' or 'the CPython implementation currently' or even better 'one current implementation (CPython)' and a warning added "But this may change" and "is not true of all implementations" if that is not made clear otherwise. Terry Jan Reedy From rdmurray at bitdance.com Fri Jan 23 22:10:06 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Fri, 23 Jan 2009 16:10:06 -0500 (EST) Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <497A272A.4020102@v.loewis.de> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <497A272A.4020102@v.loewis.de> Message-ID: On Fri, 23 Jan 2009 at 21:23, "Martin v. Löwis" wrote: >> Given that a Unix OS can't know what encoding a filename is in (*), >> I can't see that one could practically implement a Unix FTP server >> in any other way. > > However, an ftp server is different. It might start up with an empty > folder, and receive *all* of its files through upload. Then it can > certainly know what encoding the file names have on disk. It *could* > also support operation on pre-existing files, e.g. by providing a > configuration directive telling the encoding of the file names, or > by ignoring all file names that are not encoded in UTF-8. I don't see how starting with an empty directory helps. The filename comes from the client, and the FTP server can't know what the actual encoding of that filename is.
--RDM From p.f.moore at gmail.com Fri Jan 23 22:39:22 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 23 Jan 2009 21:39:22 +0000 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497A261E.5040004@v.loewis.de> References: <49798235.50509@v.loewis.de> <497A261E.5040004@v.loewis.de> Message-ID: <79990c6b0901231339rd7835b6r11e5892cb8523530@mail.gmail.com> 2009/1/23 "Martin v. L?wis" : >> Brett mentioned in his email that he wasn't ready to make a decision yet, I >> think? I also think that the PEP could still use some modifications from people >> who have more experience with the DVCSs. > > My question really was whether it is already ready for the wider > audience up for discussion (and if so, what it is that should be > discussed). It seems that it's not the case, so I just sit back and wait > until its ready. That's the impression I got - until Brett reaches the point of "Call for rebuttals", he's not looking for input (other than factual corrections of the capability and scenario sections). I'm not sure I'm comfortable with sitting back and waiting to quite that extent (I'm *already* biting my tongue over some of Brett's comments with which I strongly disagree), but I'd rather not have the PEP dissolve in a flamewar before Brett has a chance to study things better. I appreciate the need for a considered, logical evaluation, but I'm not sure keeping the lid on the impassioned arguments is going to make them any less likely to explode when they finally do happen. Paul. From brett at python.org Fri Jan 23 23:10:45 2009 From: brett at python.org (Brett Cannon) Date: Fri, 23 Jan 2009 14:10:45 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <79990c6b0901231339rd7835b6r11e5892cb8523530@mail.gmail.com> References: <49798235.50509@v.loewis.de> <497A261E.5040004@v.loewis.de> <79990c6b0901231339rd7835b6r11e5892cb8523530@mail.gmail.com> Message-ID: On Fri, Jan 23, 2009 at 13:39, Paul Moore wrote: > 2009/1/23 "Martin v. 
L?wis" : >>> Brett mentioned in his email that he wasn't ready to make a decision yet, I >>> think? I also think that the PEP could still use some modifications from people >>> who have more experience with the DVCSs. >> >> My question really was whether it is already ready for the wider >> audience up for discussion (and if so, what it is that should be >> discussed). It seems that it's not the case, so I just sit back and wait >> until its ready. > > That's the impression I got - until Brett reaches the point of "Call > for rebuttals", he's not looking for input (other than factual > corrections of the capability and scenario sections). That's right. > I'm not sure I'm > comfortable with sitting back and waiting to quite that extent (I'm > *already* biting my tongue over some of Brett's comments with which I > strongly disagree), but I'd rather not have the PEP dissolve in a > flamewar before Brett has a chance to study things better. > It's going to dissolve anyway. I am just trying to keep it to a single situation instead of having to go over it multiple times. > I appreciate the need for a considered, logical evaluation, but I'm > not sure keeping the lid on the impassioned arguments is going to make > them any less likely to explode when they finally do happen. I know it will explode. I am just trying to save myself the time of not having to reply to the flood of emails that I know will come before I have finished even thinking things through. It's like people asking questions during a presentation that happen to be strongly affected by the next slide; you can ask, but you might as well wait until all the info is presented to start talking. -Brett From martin at v.loewis.de Fri Jan 23 23:18:37 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 23 Jan 2009 23:18:37 +0100 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? 
In-Reply-To: References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <497A272A.4020102@v.loewis.de> Message-ID: <497A423D.9020302@v.loewis.de> > I don't see how starting with an empty directory helps. The filename > comes from the client, and the FTP server can't know what the actual > encoding of that filename is. Sure it can. If the client supports RFC 2640, it will send file names in UTF-8. If the client does not support RFC 2640, the client must restrict itself to 7-bit file names (i.e. ASCII). If the client violates the protocol, the server must respond with error 501. Regards, Martin From bugtrack at roumenpetrov.info Fri Jan 23 23:21:59 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 24 Jan 2009 00:21:59 +0200 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <497A272A.4020102@v.loewis.de> Message-ID: <497A4307.5000203@roumenpetrov.info> rdmurray at bitdance.com wrote: > On Fri, 23 Jan 2009 at 21:23, "Martin v. L?wis" wrote: >>> Given that a Unix OS can't know what encoding a filename is in (*), >>> I can't see that one could practically implement a Unix FTP server >>> in any other way. >> >> However, an ftp server is different. It might start up with an empty >> folder, and receive *all* of its files through upload. Then it can >> certainly know what encoding the file names have on disk. It *could* >> also support operation on pre-existing files, e.g. by providing a >> configuration directive telling the encoding of the file names, or >> by ignoring all file names that are not encoded in UTF-8. > > I don't see how starting with an empty directory helps. The filename > comes from the client, and the FTP server can't know what the actual > encoding of that filename is. 
Exactly, only the client can do filename conversion. Maybe ftplib could be extended to know the encoding of filenames on the local and remote systems, based on some user settings. Maybe ftplib could use UTF-8 or UCS-2/4 to store filenames internally, but direct conversion may be faster. In the latter case a filename is a byte sequence. Roumen From bugtrack at roumenpetrov.info Fri Jan 23 23:28:49 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 24 Jan 2009 00:28:49 +0200 Subject: [Python-Dev] About SCons Re: progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: References: Message-ID: <497A44A1.20907@roumenpetrov.info> anatoly techtonik wrote: > On Thu, Jan 22, 2009 at 12:51 AM, Roumen Petrov > wrote: >>> Against 2.3, rejected due to dependence on SCons. >>> Also appears to have been incomplete, needing more work. >> No it was complete but use SCons. Most of changes changes in code you will >> see again in 3871. >> > > I would better use SCons for both unix and windows builds. In case of > windows for both compilers - mingw and microsoft ones. To port curses > extension to windows I need to know what gcc options mean, what are > the rules to write Makefiles and how to repeat these rules as well as > find options in visual studio interface. Not mentioning various > platform-specific defines and warning fixes. Did you select one of existing curses library for windows ? Roumen From guido at python.org Fri Jan 23 23:28:55 2009 From: guido at python.org (Guido van Rossum) Date: Fri, 23 Jan 2009 14:28:55 -0800 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> <4979E6C5.3000502@develer.com> Message-ID: On Fri, Jan 23, 2009 at 12:52 PM, Terry Reedy wrote: > Giovanni Bajo wrote: >> >>> You are so very wrong, my son. CPython's implementation strategy >>> *will* evolve.
Several groups are hard at work trying to make a faster >>> Python interpreter, and when they succeed, everyone, including you, >>> will want to use their version (or their version may simply *be* the >>> new CPython). >> >> I'm basing my assumption on 19 years of history of CPython. Please, >> correct me if I'm wrong, but the only thing that changed is that the >> cyclic-GC was added so that loops are now collected, but nothing change with >> respect to cyclic collection. And everybody (including you, IIRC) has always >> agreed that it would be very very hard to eradicate reference counting from >> CPython and all the existing extensions; so hard that it is probably more >> convenient to start a different interpreter implementation. > > Your history is true, but sometimes history changes faster than most expect. > [As in the last 13 months of USA.] A year ago, I might have agreed with > you, but in the last 6 months, there has been more visible ferment in the > area of dynamic language implementations than I remember seeing in the past > decade. When Guido says "CPython's implementation strategy *will* evolve" > [emphasis his], I believe him. So this is just the wrong time to ask that > it be frozen ;-). > > While a strong argument can be made that the remaining 2.x versions should > not be changed, they do not apply to 3.x. New code and ported old code > should use 'with' wherever quick closing needs to be guaranteed. The 3.0 > manual clearly states "An implementation is allowed to postpone garbage > collection or omit it altogether " I would hope the 2.x manual says the same, since that same assumption has been around explicitly ever since JPython was first introduced. I'm not sure we should exempt 2.x from these changes (though if only 3.x could be made twice as fast it would of course encourage people to upgrade... :-). We've had many changes in the past affecting the lifetime of local variables, usually due to changes in the way tracebacks were managed. 
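A minimal sketch of the coding consequence under discussion: relying on refcounting to close a file promptly is a CPython implementation detail, while 'with' makes the closing explicit on any implementation.

```python
# CPython-specific: the file happens to be closed as soon as the
# object's refcount drops to zero; other implementations (or a future
# CPython) may postpone collection, leaving the file open.
def read_leaky(path):
    return open(path).read()

# Portable: 'with' closes the file at the end of the block on any
# implementation, even if read() raises.
def read_safe(path):
    with open(path) as f:
        return f.read()
```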
> OK, it also goes on to say "(Implementation note: the current implementation > uses a reference-counting scheme with (optional) delayed detection of > cyclically linked garbage,...)" I think the first part should at least be > amended to 'the current CPython implementation' or 'the CPython > implementation currently' or even better 'one current implementation > (CPython)' and a warning added "But this may change" and "is not true of all > implementaions" if that is not made clear otherwise. True. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ziade.tarek at gmail.com Fri Jan 23 23:36:31 2009 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 23 Jan 2009 23:36:31 +0100 Subject: [Python-Dev] distutils.mwerkscompiler and macpath deprecation Message-ID: <94bdd2610901231436y7222c130te4ec64f9da091018@mail.gmail.com> Hi, for http://bugs.python.org/issue4863 If no one objects, I am going to remove the mwerkscompiler module from distutils without any deprecation process, as Martin suggested (since it is linked into ccompiler with "mac" for the os.name) Now the question is, should I add a ticket to deprecate macpath.py as well ? Regards Tarek -- Tarek Ziad? | Association AfPy | www.afpy.org Blog FR | http://programmation-python.org Blog EN | http://tarekziade.wordpress.com/ From bugtrack at roumenpetrov.info Fri Jan 23 23:48:48 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 24 Jan 2009 00:48:48 +0200 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: References: <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> Message-ID: <497A4950.80909@roumenpetrov.info> Luke Kenneth Casson Leighton wrote: [SNIP] >> python.exe (say, the official one) loads >> python25.dll. 
Then, an import is made of a ming-wine extension, say >> foo.pyd, which is linked with libpython2.5.dll, which then gets loaded. >> Voila, you have two interpreters in memory, with different type objects, >> memory heaps, and so on. > > ok, there's a solution for that - the gist of the solution is already > implemented in things like Apache Runtime and Apache2 (modules), and > is an extremely common standard technique implemented in OS kernels. > the "old school" name for it is "vector tables". > [SNIP] Did you think that this will escape python MSVC from "Assembly hell" ? Roumen From bugtrack at roumenpetrov.info Fri Jan 23 23:48:56 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 24 Jan 2009 00:48:56 +0200 Subject: [Python-Dev] compiling python2.5 (msys+mingw+wine) - giving up using msvcr80 assemblies for now In-Reply-To: <49797A95.6050402@v.loewis.de> References: <49777A99.9010607@v.loewis.de> <49778FF6.8070608@v.loewis.de> <4978BE6D.6060000@v.loewis.de> <4978D46C.3030200@v.loewis.de> <4978E09F.4080507@v.loewis.de> <49797A95.6050402@v.loewis.de> Message-ID: <497A4958.8050109@roumenpetrov.info> [SNIP] >>> No, Python 2.5 is linked with msvcr71.dll. >> ehn? i don't see that anywhere in any PC/* files - i do see that >> there's a dependency on .NET SDK 1.1 which uses msvcr71.dll > > Take a look at PCbuild/pythoncore.vcproj. It says > > Version="7.10" > > This is how you know VS 2003 was used to build Python 2.5, which > in turn links in msvcr71.dll. Luke, the python MSVC build assumes an equivalence between MSVC compiler and runtime, i.e. if the compiler is version X the runtime is version Y. This is not true for the GCC (mingw) build. That compiler is more flexible and allows building against different runtimes. But you know this - I already commented on issue870382 in the patch.
> Regards, > Martin From tjreedy at udel.edu Sat Jan 24 00:26:48 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 23 Jan 2009 18:26:48 -0500 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> <497A261E.5040004@v.loewis.de> <79990c6b0901231339rd7835b6r11e5892cb8523530@mail.gmail.com> Message-ID: Brett Cannon wrote: > On Fri, Jan 23, 2009 at 13:39, Paul Moore wrote: > >> I'm not sure I'm >> comfortable with sitting back and waiting to quite that extent (I'm >> *already* biting my tongue over some of Brett's comments with which I >> strongly disagree), but I'd rather not have the PEP dissolve in a >> flamewar before Brett has a chance to study things better. >> > > It's going to dissolve anyway. I am just trying to keep it to a single > situation instead of having to go over it multiple times. I have two suggestions: 1. Conduct the discussion on python-ideas rather than python-dev so as to not overwhelm the day-to-day discussions and also to provide a bit of psychological distance. 2. Have several focused threads. Some examples: Given three experimental setups a. Experience with bzr b. Experience with hg c. Experience with git later d. Comparisons based on real experience Your different comments might best be separately discussed. "In the PEP, I said 'blah, blah'. Comments?" Terry From tjreedy at udel.edu Sat Jan 24 01:19:19 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Fri, 23 Jan 2009 19:19:19 -0500 Subject: [Python-Dev] __del__ and tp_dealloc in the IO lib In-Reply-To: References: <52dc1c820901181738h53e7ecf9if254a91de2025722@mail.gmail.com> <1232708220.8900.23.camel@ozzu> <4979E6C5.3000502@develer.com> Message-ID: Guido van Rossum wrote: > On Fri, Jan 23, 2009 at 12:52 PM, Terry Reedy wrote: >> While a strong argument can be made that the remaining 2.x versions should >> not be changed, they do not apply to 3.x. 
New code and ported old code >> should use 'with' wherever quick closing needs to be guaranteed. The 3.0 >> manual clearly states "An implementation is allowed to postpone garbage >> collection or omit it altogether " > > I would hope the 2.x manual says the same, since that same assumption > has been around explicitly ever since JPython was first introduced. Yes. It was changed a bit when gc was added. > I'm not sure we should exempt 2.x from these changes (though if only > 3.x could be made twice as fast it would of course encourage people to > upgrade... :-). ;-) If the issue became real, one could ask 2.x users which they prefer, compatability or speed. I have no opinion since my concern is with 3.x. >> OK, it also goes on to say "(Implementation note: the current implementation >> uses a reference-counting scheme with (optional) delayed detection of >> cyclically linked garbage,...)" I think the first part should at least be >> amended to 'the current CPython implementation' or 'the CPython >> implementation currently' or even better 'one current implementation >> (CPython)' and a warning added "But this may change" and "is not true of all >> implementaions" if that is not made clear otherwise. > > True. http://bugs.python.org/issue5039 tjr From ncoghlan at gmail.com Sat Jan 24 02:44:10 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 24 Jan 2009 11:44:10 +1000 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <49798235.50509@v.loewis.de> <497A2357.7060205@v.loewis.de> Message-ID: <497A726A.30204@gmail.com> Brett Cannon wrote: > On Fri, Jan 23, 2009 at 12:11, Steven Bethard wrote: >> On Fri, Jan 23, 2009 at 12:06 PM, "Martin v. L?wis" wrote: >>>> import random >>>> print(random.choice('svn', 'bzr', 'hg', 'git')) >>> Nice! So it's bzr, as my machine just told me (after adding >>> the square brackets). >> Wow, that decision was a lot easier than I thought it would be. 
;-) > > But my machine just told me svn, which is even easier as that means we > don't need to change anything. =) Mine briefly flirted with git, but quickly changed its mind. It *is* a Kubuntu machine though, so it's probably biased :) Cheers, Nick. >>> import random >>> random.choice(['svn', 'bzr', 'hg', 'git']) 'git' >>> random.choice(['svn', 'bzr', 'hg', 'git']) 'bzr' >>> random.choice(['svn', 'bzr', 'hg', 'git']) 'bzr' >>> random.choice(['svn', 'bzr', 'hg', 'git']) 'bzr' -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From lkcl at lkcl.net Sat Jan 24 12:15:45 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 24 Jan 2009 11:15:45 +0000 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability Message-ID: On Fri, Jan 23, 2009 at 10:48 PM, Roumen Petrov wrote: >>> python.exe (say, the official one) loads >>> python25.dll. Then, an import is made of a ming-wine extension, say >>> foo.pyd, which is linked with libpython2.5.dll, which then gets loaded. >>> Voila, you have two interpreters in memory, with different type objects, >>> memory heaps, and so on. >> >> ok, there's a solution for that - the gist of the solution is already >> implemented in things like Apache Runtime and Apache2 (modules), and >> is an extremely common standard technique implemented in OS kernels. >> the "old school" name for it is "vector tables". >> > [SNIP] Did you think that this will escape python MSVC from "Assembly hell" > ? let me think about that.... write some things down, i might have an answer at the end :) but it would certainly mean that there would be both a future-proof path for binary modules from either msvc-compiled _or_ mingw-compiled 2.5, 2.6, 2.7 etc. to work with 2.5, 2.6, 2.7, 2.8 etc. _without_ a recompile. [forwards-future-proof-compatibility _is_ possible, but... it's a bit more... complicated. backwards-compatibility is easy]. 
what you do is you make sure that the vector-table is always and only "extended" - added to - never "removed from" or altered. if one function turns out to be a screw-up (inadequate, not enough parameters), you do NOT change its function parameters, you add an "Ex" version - or an "Ex1" version. just like microsoft does. [.... now you know _why_ they do that "ridiculous" thing of adding FunctionEx1 FunctionEx2 and if you look at the MSHTML specification i think they go up to _six_ revisions of the same function in one case!] to detect revisions of the vector-table you use a "negotiation" tactic. you add a bit-field at the beginning of the struct, and each bit expresses a "new revision" indicating that the vector table has been extended (and so needs to be typecast to a different struct - exactly the same as is done with PyObject being typecast to different structs). the first _function_ in the vector-table is one which the module must call (in its initXXXX()) to pass in the "version number" of the module, to the python runtime. just in case someone needs to know. but for the most part, the initiation - of function call-out - is done _from_ modules, so each and every module will never try to call something beyond what it understands. but basically, not only is this technique nothing new - it's in use in Apache RunTime, FreeDCE, the NT Kernel, the Linux Kernel - but also it's actually _already_ in use in one form in the way that python objects are typecast from PyObject to other types of structs! the difference is that a bit-field would make detection of revisions a bit easier but to be honest you could just as easily make it an int and increase the revision number. .... ok, i've thought about your question, and i think it might [save us from assembly hell]. what you would likely have to do is compile _individual modules_ with assemblies, should they need them. for example, the msvcrt module would obviously have to be.... 
hey, that'd be interesting, how about having different linked versions of the msvcrt module? coool :) in the mingw builds, it's not necessary to link in PC/msvcrtmodule.o into the python dll - so (and this confused the hell out of me for a minute so i had to do find . -name "msvcrt*") you end up with a Modules/msvcrt.pyd. surely, that should be the _only_ dll which gets _specifically_ linked against msvcr71.dll (or 90, or... whatever) and it would be even _better_ if that then got _named_ msvcr71.pyd, msvcr90.pyd etc. i'll do an experiment, later, to confirm that this actually _does_ work - i.e. creating an msvcr80.pyd with "mingw gcc -specs=msvcr80". the neat thing is that if it works, you wouldn't need to _force_ people to link to the python dll or the python exe with msvcr90 or any other version. and the mingw built python.exe or python dll would be interchangeable, as it would be _specific modules_ that required specific versions of the msvc runtime. l. From techtonik at gmail.com Sat Jan 24 12:26:34 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Sat, 24 Jan 2009 13:26:34 +0200 Subject: [Python-Dev] About SCons Re: progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: <497A44A1.20907@roumenpetrov.info> References: <497A44A1.20907@roumenpetrov.info> Message-ID: On Sat, Jan 24, 2009 at 12:28 AM, Roumen Petrov >> >> I would better use SCons for both unix and windows builds. In case of >> windows for both compilers - mingw and microsoft ones. To port curses >> extension to windows I need to know what gcc options mean, what are >> the rules to write Makefiles and how to repeat these rules as well as >> find options in visual studio interface. Not mentioning various >> platform-specific defines and warning fixes. > > Did you select one of existing curses library for windows ? 
I've selected PDCurses and successfully compiled the module and run the demos manually - you may see the batch file and the patch at http://bugs.python.org/issue2889 However, I was asked for a VS2008 project file and this is where it all stopped, for 8 months already. First I couldn't get VS2008, then it refused to run on my W2K, and now I can't get enough time to learn it (including that I have 50%/40% experience in PHP/Python and only 5%/5% C/Java). -- --anatoly t.
From techtonik at gmail.com Sat Jan 24 14:07:51 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Sat, 24 Jan 2009 15:07:51 +0200 Subject: [Python-Dev] subprocess crossplatformness and async communication Message-ID: Greetings, This turned out to be a rather long post that in short can be summarized as: "please-please-please, include asynchronous process communication in the subprocess module and do not allow 'available only on ...' functionality, because it hurts the brain". Code to speak for itself: http://code.activestate.com/recipes/440554/ The subprocess module was a great step forward, unifying the various spawn and system and exec etc. calls in one module and, more importantly, in one uniform API. But this API is only partly crossplatform, and I believe I've seen recent commits to the docs with more unix-only differences in this module. The main point of this module is that it "allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes". The goal of PEP 324 is also to "make Python an even better replacement language for over-complicated shell scripts". Citing pre-subprocess PEP 324, "Currently, Python has a large number of different functions for process creation. This makes it hard for developers to choose." Now there is one class with many methods and many platform-specific comments and notices.
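For illustration, the sort of workaround people currently hand-roll on top of the existing API - a helper thread pumping the pipe into a queue so reads never block - looks like this (the asyncread() helper is invented for this sketch, it is not an existing subprocess API):

```python
import subprocess
import sys
import threading

try:
    from queue import Queue, Empty  # Python 3
except ImportError:
    from Queue import Queue, Empty  # Python 2

def asyncread(proc):
    # Pump proc.stdout into a queue from a helper thread,
    # so the caller can poll for output without ever blocking.
    q = Queue()

    def pump():
        for line in iter(proc.stdout.readline, b''):
            q.put(line)
        proc.stdout.close()

    t = threading.Thread(target=pump)
    t.daemon = True  # don't let the pump thread keep the interpreter alive
    t.start()
    return q

proc = subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                        stdout=subprocess.PIPE)
q = asyncread(proc)
proc.wait()

try:
    line = q.get(timeout=1.0)  # poll with a timeout instead of blocking
except Empty:
    line = None
```

The ActiveState recipe linked above achieves a similar effect with platform-specific fcntl and win32 calls instead of threads; the point is that nothing above is platform-specific.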
To make things worse, people on Unix use subprocess with fcntl and people on windows tend not to use it at all, because it looks complicated and doesn't solve the problem of asynchronous communication. What I suggest is to add either support for async crossplatform read/write/probing of the executed process, or a comment at the top of the module documentation warning that subprocess works in blocking mode. With async mode you can emulate blocking; the opposite is not possible. This will save python users a lot of time. Thanks for reading my rant. BTW, the proposed change is a top-10 python recipe on ActiveState http://code.activestate.com/recipes/langs/python/ -- --anatoly t.
From python-3000 at udmvt.ru Sat Jan 24 14:28:12 2009 From: python-3000 at udmvt.ru (Alexey G.) Date: Sat, 24 Jan 2009 17:28:12 +0400 Subject: [Python-Dev] Should ftplib use UTF-8 instead of latin-1 encoding? In-Reply-To: <497A423D.9020302@v.loewis.de> References: <729626cc0901221813m72a9896bj30092dbfbd5cb133@mail.gmail.com> <4979805B.4030107@v.loewis.de> <20090123185501.GA24843@phd.pp.ru> <497A272A.4020102@v.loewis.de> <497A423D.9020302@v.loewis.de> Message-ID: <20090124132812.GA15096@ruber.office.udmvt.ru> On Fri, Jan 23, 2009 at 11:18:37PM +0100, "Martin v. Löwis" wrote: > > I don't see how starting with an empty directory helps. The filename > > comes from the client, and the FTP server can't know what the actual > > encoding of that filename is. > > Sure it can. If the client supports RFC 2640, it will send file names > in UTF-8. If the client does not support RFC 2640, the client must > restrict itself to 7-bit file names (i.e. ASCII). If the client violates > the protocol, the server must respond with error 501. Perhaps that is true, but that is in the world of standards. In my life I remember a situation when users uploaded files from Windows, with names in the CP866 encoding, to a UNIX-based ftp server which itself had KOI8-R as the encoding for LC_CTYPE.
Since the administrator was unhappy about being unable to read the file names correctly, he found and installed a specialized ("russified") version of the ftp daemon, which had configuration settings saying what the network encoding is and what the filesystem encoding is. So both the ftp daemon and the ftp clients violated the RFC, but the users and the administrator were happy. I think we should preserve the ability of an ftp client to download all the files it sees in the listing from the server. What should be done with user-specified filenames when they cannot be encoded into ASCII and the server does not support UTF-8, but violates the RFC and allows 8-bit bytes in file names? The ideal ftp client will ask the user which encoding he thinks the filenames are stored in on the server side and then recode from the user's encoding. It will also allow the user to try several variants, if the first doesn't work. It will allow the user to download files with names in several different encodings from the same server using a single ftp session. A dumb client will send the filename from the user as bytes, and will succeed if the user was able to specify the filename verbatim. Anything in between will make the idea of using Unicode as the character encoding for filenames absurd, since it will only break the i18n capabilities of the library. If the Python library has the file name encoding hardwired to latin-1, but the arguments can only be unicode strings, well, a lot of people will not even notice, since they use only the ASCII part of UTF-8. But then there will again be numerous "russification"-like patches to Python and to modules, which are incompatible with everything but work well in some very specific situations. This is the evil that was supposed to be defeated by i18n and the total adoption of Unicode. Alexey G.
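The "try several variants" strategy described above can be sketched in a few lines of Python (decode_filename() is a hypothetical helper, not part of ftplib, and the candidate-encoding list is only illustrative):

```python
def decode_filename(raw, encodings=('utf-8', 'cp866', 'koi8-r')):
    # Try each candidate encoding in turn, the way an "ideal" client
    # would let the user try several variants; fall back to latin-1,
    # which maps every byte, so the name at least round-trips.
    for enc in encodings:
        try:
            return raw.decode(enc), enc
        except UnicodeDecodeError:
            pass
    return raw.decode('latin-1'), 'latin-1'

# A Russian filename as a Windows client from the anecdote above
# might have sent it: CP866 bytes, which are not valid UTF-8.
raw = u'\u0444\u0430\u0439\u043b.txt'.encode('cp866')
name, used = decode_filename(raw)
```

Because latin-1 never fails, hardwiring it silently "succeeds" on every input - which is exactly the mojibake trap described above.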
From aahz at pythoncraft.com Sat Jan 24 16:25:08 2009 From: aahz at pythoncraft.com (Aahz) Date: Sat, 24 Jan 2009 07:25:08 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: Message-ID: <20090124152507.GA23994@panix.com> On Thu, Jan 22, 2009, Brett Cannon wrote: > > I have now converted PEP 374 > (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST > and checked it in. First of all, thanks for providing PEP number, URL, and short title; that makes it much easier to keep track of the discussion on list. Second, I think it would be good to explicitly mention the option of deferring this PEP. Based on previous discussion, it sounds like there are a fair number of people who think that there is a DVCS in Python's future, but not now (where "now" means over the next couple of years). -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.
From guido at python.org Sat Jan 24 16:58:40 2009 From: guido at python.org (Guido van Rossum) Date: Sat, 24 Jan 2009 07:58:40 -0800 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: References: Message-ID: Anatoly, I'm confused. The subprocess module already allows reading/writing its stdin/stdout/stderr, and AFAIK it's a platform-neutral API. I'm sure there's something missing, but your post doesn't make it clear what exactly, and the recipe you reference is too large to digest easily. Can you explain what it is that the current subprocess doesn't have beyond saying "async communication" (which could mean many things to many people)?
--Guido On Sat, Jan 24, 2009 at 5:07 AM, anatoly techtonik wrote: > Greetings, > > This turned out to be a rather long post that in short can be summarized as: > "please-please-please, include asynchronous process communication in > subprocess module and do not allow "available only on ..." > functionality", because it hurts the brain". > > Code to speak for itself: http://code.activestate.com/recipes/440554/ > > > The subprocess module was a great step forward to unify various spawn > and system and exec and etc. calls in one module, and more importantly > - in one uniform API. But this API is partly crossplatform, and I > believe I've seen recent commits to docs with more unix-only > differences in this module. > > The main point of this module is to "allows you to spawn new > processes, connect to their input/output/error pipes, and obtain their > return codes". PEP 324 goal is also to make "make Python an even > better replacement language for over-complicated shell scripts". > > Citing pre-subrocess PEP 324, "Currently, Python has a large number of > different functions for process creation. This makes it hard for > developers to choose." Now there is one class with many methods and > many platform-specific comments and notices. To make thing worse > people on Unix use subprocess with fcntl and people on windows tend > not to use it at all, because it looks complicated and doesn't solve > the problem with asynchronous communication. > > That I suggest is to add either support for async crossplatfrom > read/write/probing of executed process or a comment at the top of > module documentation that will warn that subprocess works in blocking > mode. With async mode you can emulate blocking, the opposite is not > possible. This will save python users a lot of time. > > Thanks for reading my rant. > > > BTW, the proposed change is top 10 python recipe on ActiveState > http://code.activestate.com/recipes/langs/python/ > > -- > --anatoly t. 
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Sat Jan 24 16:58:37 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Jan 2009 01:58:37 +1000 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <20090124152507.GA23994@panix.com> References: <20090124152507.GA23994@panix.com> Message-ID: <497B3AAD.5020403@gmail.com> Aahz wrote: > On Thu, Jan 22, 2009, Brett Cannon wrote: >> I have now converted PEP 374 >> (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST >> and checked it in. > > First of all, thanks for providing PEP number, URL, and short title; > that makes it much easier to keep track of the discussion on list. > > Second, I think it would be good to explicitly mention the option of > deferring this PEP. Based on previous discussion, it sounds like there > are a fair number of people who think that there is a DVCS in Python's > future, but not now (where "now" means over the next couple of years). Put me in that category - the switch from CVS to SVN was simple and obvious because SVN set out to be a better CVS and achieved that goal admirably. The only major hurdle to adopting it was getting the history across, and Martin was able to handle that in the end. The benefits of atomic commits alone were well worth the migration cost. With the level of development still going on in the DVCS area, I think this is a time when dragging our feet on making a decision may actually work to our advantage. Although if Brett genuinely wants to narrow it down to a two-horse race at PyCon, then I think the one thing to keep in mind is how well the chosen tool embodies the Zen of Python (especially "Readability counts" and "One obvious way to do it"). 
Core devs *are* core devs at least in part because we largely like and agree with those design philosophies. I personally find the command lines for 2 of the presented options quite pleasant to read, while the examples of using the 3rd make me shudder the way I do when I'm forced to read or write a Perl script.* Performance problems can be fixed, but an antithetical design philosophy is unlikely to make for a good tool fit. Cheers, Nick. * In other words, the examples of using git in the PEP make me want to run screaming in the opposite direction. However, assuming bzr's performance issues and line feed handling limitations are addressed by the time the switch actually happens, I'm currently fairly neutral on the choice between bzr and hg. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From bugtrack at roumenpetrov.info Sat Jan 24 20:24:22 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sat, 24 Jan 2009 21:24:22 +0200 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: References: Message-ID: <497B6AE6.3050402@roumenpetrov.info> Luke Kenneth Casson Leighton wrote: > On Fri, Jan 23, 2009 at 10:48 PM, Roumen Petrov [SNIP] > but it would certainly mean that there would be both a future-proof > path for binary modules from either msvc-compiled _or_ mingw-compiled > 2.5, 2.6, 2.7 etc. to work with 2.5, 2.6, 2.7, 2.8 etc. _without_ a > recompile. [forwards-future-proof-compatibility _is_ possible, but... > it's a bit more... complicated. backwards-compatibility is easy]. > > what you do is you make sure that the vector-table is always and only > "extended" - added to - never "removed from" or altered. if one > function turns out to be a screw-up (inadequate, not enough > parameters), you do NOT change its function parameters, you add an > "Ex" version - or an "Ex1" version. 
[SNIP] > but basically, not only is this technique nothing new - it's in use in > Apache RunTime, FreeDCE, the NT Kernel, the Linux Kernel - but also > it's actually _already_ in use in one form in the way that python > objects are typecast from PyObject to other types of structs! the > difference is that a bit-field would make detection of revisions a bit > easier but to be honest you could just as easily make it an int and > increase the revision number. This looks like a simple RPC implementation. If I remember well, SUN-RPC assigns a number to program, function, and version. [SNIP] > surely, that should be the _only_ dll which gets _specifically_ linked > against msvcr71.dll (or 90, or... whatever) and it would be even > _better_ if that then got _named_ msvcr71.pyd, msvcr90.pyd etc. [SNIP] Yes, it is enough to encapsulate memory allocation and file functions into the python shared library. Python provides memory allocation functions, but not all modules use them. File functions are hidden by posixmodule and python modules can't use them. Roumen
From konryd at gmail.com Sat Jan 24 21:00:25 2009 From: konryd at gmail.com (Konrad Delong) Date: Sat, 24 Jan 2009 21:00:25 +0100 Subject: [Python-Dev] Additional behaviour for itertools.combinations Message-ID: <74401640901241200t7db8defbtb38d3f53b9ce544d@mail.gmail.com> I'm not sure if this is the right place to post it. If not - I'll be glad to learn where that is. Anyway: I think the function itertools.combinations would benefit from making the 'r' (length of the combinations) argument optionally a sequence. With that change one could call combinations(sequence, [2, 3]) in order to get all combinations of length 2 and 3. In particular, one could call combinations(sequence, range(len(sequence) + 1)) in order to get *all* combinations of the given sequence. The change would be backwards compatible as it would check for sequential arguments. Is it worth a shot? best regards Konrad PS.
Didn't want to spoil the beginning of the post, but I consider it to be a good practice to introduce oneself when posting the first time, so: Hello, my name is Konrad, I'm an IT student and I'm following python-dev for some time, but never posted before.
From python at rcn.com Sat Jan 24 21:23:51 2009 From: python at rcn.com (Raymond Hettinger) Date: Sat, 24 Jan 2009 12:23:51 -0800 Subject: [Python-Dev] Additional behaviour for itertools.combinations References: <74401640901241200t7db8defbtb38d3f53b9ce544d@mail.gmail.com> Message-ID: From: "Konrad Delong" > I'm not sure if it's the right place to post it. If so - I'll be glad > to learn where is one. Please post a feature request on the bug tracker and assign it to me. > Anyway: > I think the function itertools.combinations would benefit from making > the 'r' (length of the combinations) argument optionally a sequence. > > With that change one could call combinations(sequence, [2, 3]) in > order to get all combinations of length 2 and 3. > In particular, one could call combinations(sequence, > range(len(sequence)) in order to get *all* combinations of given > sequence. This design is similar to the API for similar functionality in Mathematica. The question is whether there are sufficient worthwhile use cases to warrant the added API complexity and algorithm complexity. The latter is a bit tricky if we want to maintain the lexicographic ordering and the notion of combinations being a subsequence of the permutations code. Since I expect students to be among the users for the comb/perm functions, there is some merit to keeping the API as simple as possible. Besides, it is not hard to use the existing tool as a primitive to get to the one you want:

    from itertools import chain, combinations, imap, repeat

    def mycombinations(iterable, r_seq):
        # mycombinations('abc', [1,2]) --> A B C AB AC BC
        iterable = list(iterable)
        return chain.from_iterable(imap(combinations, repeat(iterable), r_seq))

> PS.
Didn't want to spoil the beginning of the post, but I consider it > to be a good practice to introduce oneself when posting the first > time, so: Hello, my name is Konrad, I'm an IT student and I'm > following python-dev for some time, but never posted before. Hello Konrad. Welcome to python-dev. Raymond Hettinger
From skip at pobox.com Sat Jan 24 21:33:20 2009 From: skip at pobox.com (skip at pobox.com) Date: Sat, 24 Jan 2009 14:33:20 -0600 (CST) Subject: [Python-Dev] ac_sys_system == Monterey*? Message-ID: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> From configure.in:

    # The current (beta) Monterey compiler dies with optimizations
    # XXX what is Monterey? Does it still die w/ -O? Can we get rid of this?
    case $ac_sys_system in
    Monterey*)
        OPT=""
        ;;
    esac

What is Monterey? Can this check be removed from trunk/py3k branches? Skip
From eric at trueblade.com Sat Jan 24 21:47:14 2009 From: eric at trueblade.com (Eric Smith) Date: Sat, 24 Jan 2009 15:47:14 -0500 Subject: [Python-Dev] ac_sys_system == Monterey*? In-Reply-To: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> References: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> Message-ID: <497B7E52.9060903@trueblade.com> skip at pobox.com wrote: > From configure.in: > > # The current (beta) Monterey compiler dies with optimizations > # XXX what is Monterey? Does it still die w/ -O? Can we get rid of this? > case $ac_sys_system in > Monterey*) > OPT="" > ;; > esac > > What is Monterey? Can this check be removed from trunk/py3k branches? This post http://mail.python.org/pipermail/patches/2000-August/001708.html would have you believe it's a 64-bit AIX compiler.
From martin at v.loewis.de Sat Jan 24 21:54:57 2009 From: martin at v.loewis.de ("Martin v. Löwis") Date: Sat, 24 Jan 2009 21:54:57 +0100 Subject: [Python-Dev] ac_sys_system == Monterey*?
In-Reply-To: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> References: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> Message-ID: <497B8021.5000806@v.loewis.de> > What is Monterey? Monterey was the code name of a joint operating system project of SCO and IBM, porting AIX to 64-bit processors (apparently, IA-64 and POWER). See http://www.cs.umbc.edu/help/architecture/idfmontereylab.pdf http://en.wikipedia.org/wiki/Project_Monterey Monterey was cancelled in 2000, although parts of it were integrated into AIX 5L. Regards, Martin From skip at pobox.com Sat Jan 24 22:03:36 2009 From: skip at pobox.com (skip at pobox.com) Date: Sat, 24 Jan 2009 15:03:36 -0600 (CST) Subject: [Python-Dev] Change Makefile.pre.in based on configure? Message-ID: <20090124210336.5213AD55391@montanaro.dyndns.org> I'm working on issue 4111 which will add dtrace support to Python when requested by the builder and when supported by the platform (currently just Solaris and Mac OSX I believe). Sun and Apple have quite different ways to generate the code necessary to link into the executable. Sun's dtrace command supports a -G flag which generates a .o file from a .d file. Apple instead generates an include file using the -h flag to dtrace (-G and .o file generation are not supported). This puts a bit of a crimp in generating Makefile dependencies. In the Sun case you have a couple extra .o files to link into libpython. In the Apple case you have a couple extra .h files which define Dtrace macros. How do I work around this difference in Makefile.pre.in? I can detect Sun vs. Apple in the configure script, but I see no conditional logic in Makefile.pre.in to use as an example. It seems to only use variable expansion on the RHS of stuff. Can I do something like if @WITH_DTRACE_SUN@ = 1 then ... Sun-style dependencies here ... else ... Apple-style dependencies here ... fi where WITH_DTRACE_SUN is a macro defined in pyconfig.h by the configure script? 
Thanks, Skip
From lkcl at lkcl.net Sat Jan 24 22:10:41 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 24 Jan 2009 21:10:41 +0000 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: <497B6AE6.3050402@roumenpetrov.info> References: <497B6AE6.3050402@roumenpetrov.info> Message-ID: >> but basically, not only is this technique nothing new - it's in use in >> Apache RunTime, FreeDCE, the NT Kernel, the Linux Kernel - but also > This looks like a simple RPC implementation. yep. > If I remember well, SUN-RPC assigns a number to program, function, and version. yep - i forgot about that one: yes, that's another example. these are pretty basic, well-understood, well-documented techniques that virtually every large project requiring isolation between components (and an upgrade path) ends up using in one form or another. the only fly in the ointment is that by putting pointers to PyType_String etc. etc. into a vector table (struct), you end up with an extra dereference overhead, which is often the argument used to do away with vector tables. but - tough: since the decision involves getting away from "Hell" to something that makes everyone's lives that much easier, it's an easy decision to make. >> surely, that should be the _only_ dll which gets _specifically_ linked >> against msvcr71.dll (or 90, or... whatever) and it would be even >> _better_ if that then got _named_ msvcr71.pyd, msvcr90.pyd etc. > > [SNIP] > Yes, it is enough to encapsulate memory allocation and file functions into > the python shared library. Python provides memory allocation functions, but > not all modules use them. File functions are hidden by posixmodule and python > modules can't use them. except ... posixmodule gets renamed to ntmodule .... oh, i see what you mean: python modules aren't allowed _direct_ access to msvcrtNN's file functions, they have to go via posixmodule-renamed-to-ntmodule. so it's still ok. l.
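The extend-only vector table with version negotiation discussed in this thread can be modelled as a toy sketch in Python (all names here - VectorTable, Alloc, AllocEx - are invented for illustration; a real implementation would be a C struct of function pointers exported by the python runtime):

```python
class VectorTable(object):
    # Toy model of an extend-only API vector table: slots are only ever
    # appended, never removed or re-ordered, and a revision bit-field
    # tells callers how far the table extends.
    REV_1 = 0x01  # original slots
    REV_2 = 0x02  # table extended with AllocEx

    def __init__(self):
        self.revisions = self.REV_1 | self.REV_2
        self.slots = {
            'Alloc': lambda size: bytearray(size),
            # Alloc turned out to be inadequate, so an "Ex" variant was
            # *added* - the old slot's signature is never changed.
            'AllocEx': lambda size, fill: bytearray([fill]) * size,
        }

    def supports(self, rev_bit):
        return bool(self.revisions & rev_bit)

def module_init(table):
    # How a binary module would negotiate at initXXXX() time: it only
    # calls slots whose revision bit the runtime advertises.
    if table.supports(VectorTable.REV_2):
        return table.slots['AllocEx'](4, 0xFF)
    return table.slots['Alloc'](4)

buf = module_init(VectorTable())
```

The key property is that old modules keep working against a newer table: existing slot semantics never change, and extensions only ever append.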
From brett at python.org Sat Jan 24 22:23:41 2009 From: brett at python.org (Brett Cannon) Date: Sat, 24 Jan 2009 13:23:41 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <20090124152507.GA23994@panix.com> References: <20090124152507.GA23994@panix.com> Message-ID: On Sat, Jan 24, 2009 at 07:25, Aahz wrote: > On Thu, Jan 22, 2009, Brett Cannon wrote: >> >> I have now converted PEP 374 >> (http://www.python.org/dev/peps/pep-0374/) from Google Docs to reST >> and checked it in. > > First of all, thanks for providing PEP number, URL, and short title; > that makes it much easier to keep track of the discussion on list. > =) Welcome. > Second, I think it would be good to explicitly mention the option of > deferring this PEP. Based on previous discussion, it sounds like there > are a fair number of people who think that there is a DVCS in Python's > future, but not now (where "now" means over the next couple of years). Sure, I can add a note somewhere that says if a clear winner doesn't come about the PEP can be revisited to a later date. -Brett From lkcl at lkcl.net Sat Jan 24 22:51:13 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sat, 24 Jan 2009 21:51:13 +0000 Subject: [Python-Dev] mingw+msys port of python2.7a0 (svn #r68884) Message-ID: http://bugs.python.org/issue5046 mingw+msys port which was previously done against python-2.5.2 has been brought forward to latest subversion r68884. the primary reason for initially doing python 2.5.2 was to 1) "stay out of the way" of primary python development 2) provide some hope for those people still using win98 and nt. builds using msvcr90 are possible with the --enable-msvcr9build switch. install mingw, install msys, optionally install a boat-load of libraries such as sqlite3, zlib, bz2, dbm etc. (good luck, it's a pain - ask me if you'd like some prebuilt). 
make sure you patch the mingw runtime if you want to use --enable-msvcr9build (again, ask me if you'd like the patched headers and prebuilt libmsvcrNN.a files). also added is the msi module - download http://lkcl.net/msi.tgz and run make in the msi directory to install the import libraries and header files borrowed from Wine and beaten into submission. run ./configure --enable-win32build=yes --enable-shared=yes and go do something else for about 10 minutes, then run make and make install. no proprietary compilers or tools were used or harmed [*] in the making or development of this patch. l. [*] such a shame...
From python at rcn.com Sat Jan 24 23:46:52 2009 From: python at rcn.com (Raymond Hettinger) Date: Sat, 24 Jan 2009 14:46:52 -0800 Subject: [Python-Dev] Operator module deprecations Message-ID: I would like to deprecate some outdated functions in the operator module. The isSequenceType(), isMappingType(), and isNumberType() functions never worked reliably and now their intended purpose has been largely fulfilled by ABCs. The isCallable() function has long been deprecated and I think it's finally time to rip it out. The repeat() function never really corresponded to an operator. Instead, it reflected an underlying implementation detail (namely the naming of the sq_repeat slot and the abstract C API function PySequence_Repeat). That functionality is already exposed by operator.mul: operator.mul('abc', 3) --> 'abcabcabc' Raymond
From skip at pobox.com Sun Jan 25 00:22:20 2009 From: skip at pobox.com (skip at pobox.com) Date: Sat, 24 Jan 2009 17:22:20 -0600 Subject: [Python-Dev] ac_sys_system == Monterey*? In-Reply-To: <497B8021.5000806@v.loewis.de> References: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> <497B8021.5000806@v.loewis.de> Message-ID: <18811.41644.577224.31055@montanaro.dyndns.org> Martin> Monterey was cancelled in 2000, although parts of it were Martin> integrated into AIX 5L. Thanks...
http://bugs.python.org/issue4111 It doesn't appear it's mentioned anywhere other than in the configure script. Skip
From skip at pobox.com Sun Jan 25 00:23:37 2009 From: skip at pobox.com (skip at pobox.com) Date: Sat, 24 Jan 2009 17:23:37 -0600 Subject: [Python-Dev] ac_sys_system == Monterey*? In-Reply-To: <497B8021.5000806@v.loewis.de> References: <20090124203320.3DF09D52DF4@montanaro.dyndns.org> <497B8021.5000806@v.loewis.de> Message-ID: <18811.41721.380619.894328@montanaro.dyndns.org> http://bugs.python.org/issue4111 Jeez, I'm an idiot. Should be http://bugs.python.org/issue5047 Skip
From martin at v.loewis.de Sun Jan 25 00:31:19 2009 From: martin at v.loewis.de ("Martin v. Löwis") Date: Sun, 25 Jan 2009 00:31:19 +0100 Subject: [Python-Dev] Change Makefile.pre.in based on configure? In-Reply-To: <20090124210336.5213AD55391@montanaro.dyndns.org> References: <20090124210336.5213AD55391@montanaro.dyndns.org> Message-ID: <497BA4C7.9020807@v.loewis.de> > How do I work around this difference in Makefile.pre.in? To answer this question, I would have to see the exact fragment that you want to see in the Solaris case, and the exact fragment that you want to have in the OSX case. Can you provide these? Regards, Martin
From martin at v.loewis.de Sun Jan 25 00:34:40 2009 From: martin at v.loewis.de ("Martin v. Löwis") Date: Sun, 25 Jan 2009 00:34:40 +0100 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <20090124152507.GA23994@panix.com> Message-ID: <497BA590.7060406@v.loewis.de> >> Second, I think it would be good to explicitly mention the option of >> deferring this PEP. Based on previous discussion, it sounds like there >> are a fair number of people who think that there is a DVCS in Python's >> future, but not now (where "now" means over the next couple of years). > > Sure, I can add a note somewhere that says if a clear winner doesn't > come about the PEP can be revisited to a later date.
> I think the request is slightly different: consider that a potential outcome should be "svn for the next five years, then reconsider" - not because none of the DVCS is a clear winner, but because there is too much resistance to DVCSes in general, at the moment. Regards, Martin From skip at pobox.com Sun Jan 25 00:37:21 2009 From: skip at pobox.com (skip at pobox.com) Date: Sat, 24 Jan 2009 17:37:21 -0600 Subject: [Python-Dev] Change Makefile.pre.in based on configure? In-Reply-To: <497BA4C7.9020807@v.loewis.de> References: <20090124210336.5213AD55391@montanaro.dyndns.org> <497BA4C7.9020807@v.loewis.de> Message-ID: <18811.42545.569745.170812@montanaro.dyndns.org> >> How do I work around this difference in Makefile.pre.in? Martin> To answer this question, I would have to see the exact fragment Martin> that you want to see in the Solaris case, and the exact fragment Martin> that you want to have in the OSX case. Can you provide these? I'll work on it and get back to you if I get completely stuck. I think I've figured a way out of my current dilemma, but a little experimentation will be required to see if I'm correct. Thx, Skip From brett at python.org Sun Jan 25 00:40:38 2009 From: brett at python.org (Brett Cannon) Date: Sat, 24 Jan 2009 15:40:38 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497BA590.7060406@v.loewis.de> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> Message-ID: On Sat, Jan 24, 2009 at 15:34, "Martin v. Löwis" wrote: >>> Second, I think it would be good to explicitly mention the option of >>> deferring this PEP. Based on previous discussion, it sounds like there >>> are a fair number of people who think that there is a DVCS in Python's >>> future, but not now (where "now" means over the next couple of years). >> >> Sure, I can add a note somewhere that says if a clear winner doesn't >> come about the PEP can be revisited to a later date.
>> > I think the request is slightly different: consider that a potential > outcome should be "svn for the next five years, then reconsider" - not > because none of the DVCS is a clear winner, but because there is too > much resistance to DVCSes in general, at the moment. I already put a note in that no DVCS might be chosen once the PEP is finished. Whether it is because no DVCS is a clear improvement over svn or people just don't like a DVCS seems like a minor thing to worry about to spell out in the PEP. -Brett From ncoghlan at gmail.com Sun Jan 25 01:44:23 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Jan 2009 10:44:23 +1000 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> Message-ID: <497BB5E7.4080606@gmail.com> Brett Cannon wrote: > On Sat, Jan 24, 2009 at 15:34, "Martin v. Löwis" wrote: >>>> Second, I think it would be good to explicitly mention the option of >>>> deferring this PEP. Based on previous discussion, it sounds like there >>>> are a fair number of people who think that there is a DVCS in Python's >>>> future, but not now (where "now" means over the next couple of years). >>> Sure, I can add a note somewhere that says if a clear winner doesn't >>> come about the PEP can be revisited to a later date. >>> >> I think the request is slightly different: consider that a potential >> outcome should be "svn for the next five years, then reconsider" - not >> because none of the DVCS is a clear winner, but because there is too >> much resistance to DVCSes in general, at the moment. > > I already put a note in that no DVCS might be chosen once the PEP is > finished. Whether it is because no DVCS is a clear improvement over > svn or people just don't like a DVCS seems like a minor thing to worry > about to spell out in the PEP. I suspect the reactions will be more nuanced than that anyway - e.g.
my current position is that while I like the idea of a DVCS in principle and agree there are definite gains to be had in switching to one, I don't think the contenders have had enough time to shake out their competing feature sets and relative performance. We don't seem to lose a lot by sticking with SVN at least until after 2.7/3.1 are out the door and then revisiting the DVCS question (this is particularly so given that the current plan is to go for a fairly short turnaround on those two releases). As the zen says, now is better than never, but never is often better than *right* now :) Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From brett at python.org Sun Jan 25 01:48:29 2009 From: brett at python.org (Brett Cannon) Date: Sat, 24 Jan 2009 16:48:29 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497BB5E7.4080606@gmail.com> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> Message-ID: On Sat, Jan 24, 2009 at 16:44, Nick Coghlan wrote: > Brett Cannon wrote: >> On Sat, Jan 24, 2009 at 15:34, "Martin v. Löwis" wrote: >>>>> Second, I think it would be good to explicitly mention the option of >>>>> deferring this PEP. Based on previous discussion, it sounds like there >>>>> are a fair number of people who think that there is a DVCS in Python's >>>>> future, but not now (where "now" means over the next couple of years). >>>> Sure, I can add a note somewhere that says if a clear winner doesn't >>>> come about the PEP can be revisited to a later date. >>>> >>> I think the request is slightly different: consider that a potential >>> outcome should be "svn for the next five years, then reconsider" - not >>> because none of the DVCS is a clear winner, but because there is too >>> much resistance to DVCSes in general, at the moment.
>> >> I already put a note in that no DVCS might be chosen once the PEP is >> finished. Whether it is because no DVCS is a clear improvement over >> svn or people just don't like a DVCS seems like a minor thing to worry >> about to spell out in the PEP. > > I suspect the reactions will be more nuanced than that anyway - e.g. my > current position is that while I like the idea of a DVCS in principle > and agree there are definite gains to be had in switching to one, I > don't think the contenders have had enough time to shake out their > competing feature sets and relative performance. We don't seem to lose a > lot by sticking with SVN at least until after 2.7/3.1 are out the door > and then revisiting the DVCS question (this is particularly so given > that the current plan is go for a fairly short turnaround on those two > releases). > As part of my impressions I plan to also look at usage on top of svn as a viable alternative if no clear winner comes about. That way if they work well directly on top of svn we can write up very clear documentation on how to use any of them directly on top of svn and still gain the benefits of offline checkins and cheap branching. Maintenance then becomes simply keeping a read-only mirror going on code.python.org. > As the zen says, now is better than never, but never is often better > than *right* now :) Don't worry, I am not going to push something down anyone's throats if I don't feel secure that it is the best choice. From brett at python.org Sun Jan 25 01:49:51 2009 From: brett at python.org (Brett Cannon) Date: Sat, 24 Jan 2009 16:49:51 -0800 Subject: [Python-Dev] Operator module deprecations In-Reply-To: References: Message-ID: On Sat, Jan 24, 2009 at 14:46, Raymond Hettinger wrote: > I would like to deprecate some outdated functions in the operator module. > > The isSequenceType(), isMappingType(), and isNumberType() > functions never worked reliably and now their > intended purpose has been largely fulfilled by > ABCs. 
> > The isCallable() function has long been deprecated > and I think it's finally time to rip it out. > > The repeat() function never really corresponded to an > operator. Instead, it reflected an underlying implementation detail (namely > the naming of the sq_repeat slot and the abstract C API function > PySequence_Repeat). That functionality is already exposed by operator.mul: > > operator.mul('abc', 3) --> 'abcabcabc' +1 to all of it. From ncoghlan at gmail.com Sun Jan 25 02:18:23 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Jan 2009 11:18:23 +1000 Subject: [Python-Dev] Additional behaviour for itertools.combinations In-Reply-To: References: <74401640901241200t7db8defbtb38d3f53b9ce544d@mail.gmail.com> Message-ID: <497BBDDF.3060406@gmail.com> Raymond Hettinger wrote: > Since I expect students to be among the users for the comb/perm > functions, there is some merit to keeping the API as simple as possible. > Besides, it is not hard to use the existing tool as a primitive to get to > the one you want: > > def mycombinations(iterable, r_seq): > # mycombinations('abc', [1,2]) --> A B C AB AC BC > iterable = list(iterable) > return chain.from_iterable(imap(combinations, repeat(iterable), > r_seq)) Perhaps a reasonable starting point would be to include this as one of the example itertools recipes in the documentation? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Sun Jan 25 02:22:28 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sun, 25 Jan 2009 11:22:28 +1000 Subject: [Python-Dev] Operator module deprecations In-Reply-To: References: Message-ID: <497BBED4.1090005@gmail.com> Brett Cannon wrote: > On Sat, Jan 24, 2009 at 14:46, Raymond Hettinger wrote: >> I would like to deprecate some outdated functions in the operator module. 
>> >> The isSequenceType(), isMappingType(), and isNumberType() >> functions never worked reliably and now their >> intended purpose has been largely fulfilled by >> ABCs. >> >> The isCallable() function has long been deprecated >> and I think it's finally time to rip it out. >> >> The repeat() function never really corresponded to an >> operator. Instead, it reflected an underlying implementation detail (namely >> the naming of the sq_repeat slot and the abstract C API function >> PySequence_Repeat). That functionality is already exposed by operator.mul: >> >> operator.mul('abc', 3) --> 'abcabcabc' > > +1 to all of it. What Brett said. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From python at rcn.com Sun Jan 25 04:33:37 2009 From: python at rcn.com (Raymond Hettinger) Date: Sat, 24 Jan 2009 19:33:37 -0800 Subject: [Python-Dev] Additional behaviour for itertools.combinations References: <74401640901241200t7db8defbtb38d3f53b9ce544d@mail.gmail.com> <497BBDDF.3060406@gmail.com> Message-ID: <43209A9EB88F49E59730F5BBC63CAF05@RaymondLaptop1> > Raymond Hettinger wrote: >> Since I expect students to be among the users for the comb/perm >> functions, there is some merit to keeping the API as simple as possible. >> Besides, it is not hard to use the existing tool as a primitive to get to >> the one you want: >> >> def mycombinations(iterable, r_seq): >> # mycombinations('abc', [1,2]) --> A B C AB AC BC >> iterable = list(iterable) >> return chain.from_iterable(imap(combinations, repeat(iterable), >> r_seq)) [Nick Coghlan] > Perhaps a reasonable starting point would be to include this as one of > the example itertools recipes in the documentation? I would have suggested that but the recipe itself is use case challenged. The OP did not mention any compelling use cases or motivations. Essentially, he just pointed out that it is possible, not that it is desirable.
I can't think of a case where I've wanted to loop over variable length subsequences. Having for-loops with tuple unpacking won't work because the combos have more than one possible size. This seems like a hypergeneralization to me. Raymond From ronaldoussoren at mac.com Sun Jan 25 11:11:51 2009 From: ronaldoussoren at mac.com (Ronald Oussoren) Date: Sun, 25 Jan 2009 11:11:51 +0100 Subject: [Python-Dev] Change Makefile.pre.in based on configure? In-Reply-To: <20090124210336.5213AD55391@montanaro.dyndns.org> References: <20090124210336.5213AD55391@montanaro.dyndns.org> Message-ID: On 24 Jan, 2009, at 22:03, skip at pobox.com wrote: > I'm working on issue 4111 which will add dtrace support to Python when > requested by the builder and when supported by the platform > (currently just > Solaris and Mac OSX I believe). > > Sun and Apple have quite different ways to generate the code > necessary to > link into the executable. Sun's dtrace command supports a -G flag > which > generates a .o file from a .d file. Apple instead generates an > include file > using the -h flag to dtrace (-G and .o file generation are not > supported). > This puts a bit of a crimp in generating Makefile dependencies. In > the Sun > case you have a couple extra .o files to link into libpython. In > the Apple > case you have a couple extra .h files which define Dtrace macros. > > How do I work around this difference in Makefile.pre.in? I can > detect Sun > vs. Apple in the configure script, but I see no conditional logic in > Makefile.pre.in to use as an example. It seems to only use variable > expansion on the RHS of stuff. Can I do something like > > if @WITH_DTRACE_SUN@ = 1 > then > ... Sun-style dependencies here ... > else > ... Apple-style dependencies here ... > fi > > where WITH_DTRACE_SUN is a macro defined in pyconfig.h by the > configure > script? I use configure to paste bits into Makefile.pre.in for the OSX framework support.
In Makefile.pre.in: install: @FRAMEWORKINSTALLFIRST@ altinstall bininstall maninstall @FRAMEWORKINSTALLLAST@ FRAMEWORKINSTALLFIRST and FRAMEWORKINSTALLLAST are calculated in configure.in. This should work for dtrace as well. That is, in the configure script define DTRACE_HEADER_DEPS and DTRACE_OBJECT_DEPS and add @DTRACE_HEADER_DEPS@ and @DTRACE_OBJECT_DEPS@ to the proper targets in Makefile.pre.in. Ronald -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2224 bytes Desc: not available URL: From steve at pearwood.info Sun Jan 25 11:49:43 2009 From: steve at pearwood.info (Steven D'Aprano) Date: Sun, 25 Jan 2009 21:49:43 +1100 Subject: [Python-Dev] Additional behaviour for itertools.combinations In-Reply-To: <43209A9EB88F49E59730F5BBC63CAF05@RaymondLaptop1> References: <74401640901241200t7db8defbtb38d3f53b9ce544d@mail.gmail.com> <497BBDDF.3060406@gmail.com> <43209A9EB88F49E59730F5BBC63CAF05@RaymondLaptop1> Message-ID: <200901252149.43750.steve@pearwood.info> On Sun, 25 Jan 2009 02:33:37 pm Raymond Hettinger wrote: > > Raymond Hettinger wrote: > >> Since I expect students to be among the users for the comb/perm > >> functions, there is some merit to keeping the API as simple as > >> possible. Besides, it is not hard to use the existing tool as a > >> primitive to get to the one you want: > >> > >> def mycombinations(iterable, r_seq): > >> # mycombinations('abc', [1,2]) --> A B C AB AC BC > >> iterable = list(iterable) > >> return chain.from_iterable(imap(combinations, > >> repeat(iterable), r_seq)) > > [Nick Coghlan] > > > Perhaps a reasonable starting point would be to include this as one > > of the example itertools recipes in the documentation? > > I would have suggested that but the recipe itself is use case challenged. > The OP did not mention any compelling use cases or motivations. > Essentially, he just pointed out that it is possible, not that it is > desirable.
> > I can't think of a case where I've wanted to loop over variable length > subsequences. Having for-loops with tuple unpacking won't work > because the combos have more than one possible size. > > This seems like a hypergeneralization to me. Does answering homework questions count as a use-case? http://mathforum.org/library/drmath/view/56121.html Also calculating the odds of winning Powerball: http://mathforum.org/library/drmath/view/56122.html The number of combinations taken (1, 2, 3, ..., n) at a time is closely related to the Bell Numbers. And according to Wikipedia, the oldest known reference to combinatorics included such a question. http://en.wikipedia.org/wiki/History_of_combinatorics Having said all that, I'm inclined to agree that this is an over-generalisation. As far as I can tell, there's no name for this in mathematics, which suggests that useful applications and theorems are both rare. In any case, it's not that difficult to create a generator to yield all the combinations: (comb for k in ks for comb in itertools.combinations(seq, k)) I'm with Nick that this would make a good example for the documentation. I don't object to combinations growing the extra functionality, but if it does, people will ask why permutations doesn't as well. -- Steven D'Aprano From lkcl at lkcl.net Sun Jan 25 13:34:58 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 12:34:58 +0000 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: References: <497B6AE6.3050402@roumenpetrov.info> Message-ID: >> [SNIP] >> Yes it is enough to encapsulate memory allocation and file functions into >> python shared library. The python provide memory allocation functions, but >> not all modules use them. File functions are hidden by posixmodule and python >> modules can't use them. > > except ... posixmodule gets renamed to ntmodule ....
oh, i see what > you mean: python modules aren't allowed _direct_ access to msvcrtNN's > file functions, they have to go via posixmodule-renamed-to-ntmodule. .... thinking about this some more... posixmodule.c is linked (by default) into pythonNN.dll, thus making pythonNN.dll totally dependent on a version of msvcrt. decoupling posixmodule.c from pythonNN.dll leaves the possibility to make python independent of msvcrt versioning. it would need to be a custom-compiled .pyd module, due to the early dependency. i'll see if this is possible. l. From barry at python.org Sun Jan 25 15:52:36 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 25 Jan 2009 09:52:36 -0500 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> Message-ID: <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 24, 2009, at 7:48 PM, Brett Cannon wrote: > As part of my impressions I plan to also look at usage on top of svn > as a viable alternative if no clear winner comes about. That way if > they work well directly on top of svn we can write up very clear > documentation on how to use any of them directly on top of svn and > still gain the benefits of offline checkins and cheap branching. > Maintenance then becomes simply keeping a read-only mirror going on > code.python.org. There's a possible third way. I've heard (though haven't investigated) that some people are working on supporting the svn wire protocol in the bzr server. This would mean that anybody who's still comfortable with svn and feels no need to change their current habits can continue to work the way they always have. Those that want the extra benefits of a DVCS, or who do not have commit access to the code.python.org branches would have viable alternatives. 
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSXx8tHEjvBPtnXfVAQK1CgQAoDlHr9KthVr9sA6DfeXE3D35mYUop01X TD06OggbayFDGQYA0Zae+zU050R9UvuTpaF7XtSiSgBlI6n0Bb/rLAgVGskwbMHD LU8BAljNq6FpRp8QY2IHVRWKgOqzSHtz8CvCdlD1yw5CbA/pEvigoLzR0AWAeQJl tzOAetiud2c= =5qIJ -----END PGP SIGNATURE----- From lkcl at lkcl.net Sun Jan 25 16:37:47 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 15:37:47 +0000 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: References: <497B6AE6.3050402@roumenpetrov.info> Message-ID: > decoupling posixmodule.c from pythonNN.dll leaves the possibility to > make python independent of msvcrt versioning. > > it would need to be a custom-compiled .pyd module, due to the early dependency. > > i'll see if this is possible. i'd added PyExc_OSError, for example, as data exported from dlls. i'm finding that this causes.... problems :) so when posixmodule.c is a module (nt.pyd), doing this works: PyAPI_FUNC(PyObject *) PyErr_GetPyExc_OSError(void) { return (PyObject*)PyExc_OSError; } and thus oserr = PyErr_GetPyExc_OSError(); Py_INCREF(oserr); PyModule_AddObject(m, "error", oserr) but doing _direct_ access to PyExc_OSError fails miserably. i'll try to track down why (am adding __cdecl to PyAPI_DATA to see if that helps). l. From lkcl at lkcl.net Sun Jan 25 16:44:02 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 15:44:02 +0000 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: References: Message-ID: > Have you made some benchmarks like pystone? 
> Cheers, > Cesare Cesare, hi, thanks for responding: unfortunately, there's absolutely no point in making any benchmark figures under an emulated environment which does things like take 2 billion instruction cycles to start up a program named "c:/msys/bin/sh.exe", due to it inexplicably loading 200 GUI-only truetype fonts. and to do benchmarks on say windows would require that i install ... windows! so if somebody else would like to make some benchmarks, and publish them, they are most welcome to do so. l. From bugtrack at roumenpetrov.info Sun Jan 25 16:55:47 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sun, 25 Jan 2009 17:55:47 +0200 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: References: <497B6AE6.3050402@roumenpetrov.info> Message-ID: <497C8B83.7080604@roumenpetrov.info> Luke Kenneth Casson Leighton wrote: >>> [SNIP] >>> Yes it is enough to encapsulate memory allocation and file functions into >>> python shared library. The python provide memory allocation functions, but >>> not all modules use them. File functions are hiden by posixmodule and python >>> modules can't use them. >> except ... posixmodule gets renamed to ntmodule .... oh, i see what >> you mean: python modules aren't allowed _direct_ access to msvcrtNN's >> file functions, they have to go via posixmodule-renamed-to-ntmodule. > > .... thinking about this some more... posixmodule.c is linked (by > default) into pythonNN.dll, thus making pythonNN.dll totally dependent > on a version of msvcrt. This is not problem. If python*.dll hide msvcrt and other modules depend directly from python*.dll I expect issue to be resolved. i.e. python*.dll to be "portable runtime interface". > decoupling posixmodule.c from pythonNN.dll leaves the possibility to > make python independent of msvcrt versioning. > > it would need to be a custom-compiled .pyd module, due to the early dependency. > > i'll see if this is possible. ????? 
From lkcl at lkcl.net Sun Jan 25 16:58:48 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 15:58:48 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. Message-ID: according to the wikipedia entry on dlls, dlls do not support data, only functions. which would explain two things: 1) why certain modules are forcibly linked into pythonNN.dll 2) why attempts to move them out of pythonNN.dll cause runtime crashes. so i will continue the experiment, and remove all the "data" references from the pythonNN.def that i added, and deal with the knock-on consequences, which will involve adding "get" functions. for example, PyAPI_FUNC(char*) _PyStructSequence_Get_UnnamedField(void) use of such functions will allow various bits and pieces - such as PyStructSequence_UnnamedField - to be converted back to static in their respective c files. any objections, speak now, because this will involve quite a bit of work. l. From lkcl at lkcl.net Sun Jan 25 17:15:28 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 16:15:28 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: On Sun, Jan 25, 2009 at 3:58 PM, Luke Kenneth Casson Leighton wrote: > according to the wikipedia entry on dlls, dlls do not support data, > only functions. which would explain two things: > 1) why certain modules are forcibly linked into pythonNN.dll > 2) why attempts to move them out of pythonNN.dll cause runtime crashes. > so i will continue the experiment, and remove all the "data" > references from the pythonNN.def that i added, and deal with the > knock-on consequences, which will involve adding "get" functions. 
> > for example, PyAPI_FUNC(char*) _PyStructSequence_Get_UnnamedField(void) > > use of such functions will allow various bits and pieces - such as > PyStructSequence_UnnamedField - to be converted back to static in > their respective c files. > > any objections, speak now, because this will involve quite a bit of work. here is a starting list of data items which will require "getter" functions, found just by creating a posixmodule.pyd: Info: resolving __Py_NoneStruct by linking to __imp___Py_NoneStruct (auto-import) Info: resolving _Py_FileSystemDefaultEncoding by linking to __imp__Py_FileSystemDefaultEncoding (auto-import) Info: resolving _PyExc_OSError by linking to __imp__PyExc_OSError (auto-import) Info: resolving _PyUnicode_Type by linking to __imp__PyUnicode_Type (auto-import) Info: resolving _PyFloat_Type by linking to __imp__PyFloat_Type (auto-import) Info: resolving _PyExc_TypeError by linking to __imp__PyExc_TypeError (auto-impoModules/posixmodule.ort) Info: resolving _PyExc_RuntimeError by linking to __imp__PyExc_RuntimeError (auto-import) Info: resolving _PyExc_ValueError by linking to __imp__PyExc_ValueError (auto-import) Info: resolving _PyExc_RuntimeWarning by linking to __imp__PyExc_RuntimeWarning (auto-import) Info: resolving _PyExc_NotImplementedError by linking to __imp__PyExc_NotImplementedError (auto-import) obviously, auto-import can't happen. so getter-functions it is. l. From lkcl at lkcl.net Sun Jan 25 17:33:54 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 16:33:54 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. 
In-Reply-To: References: Message-ID: > here is a starting list of data items which will require "getter" > functions, found just by creating a posixmodule.pyd: > > Info: resolving __Py_NoneStruct by linking to __imp___Py_NoneStruct > (auto-import) by no small coincidence, every single module with which we've had difficulties in the mingw port - _sre, thread, operator, locale, winreg, signal and have been forced to put them into python2N.dll - all of them _happen_ to _directly_ reference the _PyNone_Struct data variable. surpriiise. that means that the Py_None macro must call the "getter" function. l. From solipsis at pitrou.net Sun Jan 25 17:41:54 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 25 Jan 2009 16:41:54 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?microsoft_dlls_apparently_don=27t_support_?= =?utf-8?q?data=2E=09implications=3A_PyAPI_functions_required_to_ac?= =?utf-8?q?cess_data_across=09modules=2E?= References: Message-ID: Luke Kenneth Casson Leighton lkcl.net> writes: > > that means that the Py_None macro must call the "getter" function. Given the negative performance implications that it would have, chances are it is out of question. Also, while not a Windows expert *at all*, I'd question your interpretation of the problem. If data could not be found in a DLL, how could Windows builds of Python (and third-party extensions) work at all? From lkcl at lkcl.net Sun Jan 25 17:56:05 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 16:56:05 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. 
In-Reply-To: References: Message-ID: On Sun, Jan 25, 2009 at 4:33 PM, Luke Kenneth Casson Leighton wrote: >> Info: resolving __Py_NoneStruct by linking to __imp___Py_NoneStruct >> (auto-import) > > by no small coincidence, every single module with which we've had > difficulties in the mingw port - _sre, thread, operator, locale, > winreg, signal and have been forced to put them into python2N.dll - > all of them _happen_ to _directly_ reference the _PyNone_Struct data > variable. > > surpriiise. > > that means that the Py_None macro must call the "getter" function. btw - if anyone has any objections, think about this: how is anyone - third party or otherwise - meant to return Py_None from c code in a dynamic module (or any other type) - and expect their code to work on windows?? i mean... has anyone _written_ a third party module that returns Py_None on a c-code module and had it compiled on windows? it wouldn't surprise me in the least if this is one of the severe issues (unresolved and unexplained) that people encounter on win32. l. From matthieu.brucher at gmail.com Sun Jan 25 18:01:31 2009 From: matthieu.brucher at gmail.com (Matthieu Brucher) Date: Sun, 25 Jan 2009 18:01:31 +0100 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: 2009/1/25 Luke Kenneth Casson Leighton : > according to the wikipedia entry on dlls, dlls do not support data, > only functions. What do you mean by "not support data"? Having global data variables in a dll? In wikipedia, it is explicitely told that this is possible to have data (http://en.wikipedia.org/wiki/Dynamic-link_library). Without them, shared library cannot be used. Matthieu -- Information System Engineer, Ph.D. 
Website: http://matthieu-brucher.developpez.com/ Blogs: http://matt.eifelle.com and http://blog.developpez.com/?blog=92 LinkedIn: http://www.linkedin.com/in/matthieubrucher From curt at hagenlocher.org Sun Jan 25 18:03:31 2009 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Sun, 25 Jan 2009 09:03:31 -0800 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: On Sun, Jan 25, 2009 at 9:01 AM, Matthieu Brucher wrote: > 2009/1/25 Luke Kenneth Casson Leighton : >> according to the wikipedia entry on dlls, dlls do not support data, >> only functions. > > What do you mean by "not support data"? Having global data variables in a dll? > In wikipedia, it is explicitely told that this is possible to have > data (http://en.wikipedia.org/wiki/Dynamic-link_library). Without > them, shared library cannot be used. Indeed. That's why the header files contain define PyAPI_DATA(RTYPE) extern __declspec(dllexport) RTYPE define PyAPI_DATA(RTYPE) extern __declspec(dllimport) RTYPE -- Curt Hagenlocher curt at hagenlocher.org From lists at cheimes.de Sun Jan 25 18:12:15 2009 From: lists at cheimes.de (Christian Heimes) Date: Sun, 25 Jan 2009 18:12:15 +0100 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: Luke Kenneth Casson Leighton schrieb: > i mean... has anyone _written_ a third party module that returns > Py_None on a c-code module and had it compiled on windows? Lot's of people have written 3rd party extensions that work on Windows and can return a Py_None object. Please stop spamming the Python developer list with irrelevant, wrong, confusing and sometimes offensive messages. To be perfectly honest it's annoying. If you want to propose a new feature then python-ideas is the right mailing list. 
For everything else you should stick to the python-general or capi-sig list. Christian From lkcl at lkcl.net Sun Jan 25 18:34:42 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 17:34:42 +0000 Subject: [Python-Dev] future-proofing vector tables for python APIs: binary-module interoperability In-Reply-To: <497C8B83.7080604@roumenpetrov.info> References: <497B6AE6.3050402@roumenpetrov.info> <497C8B83.7080604@roumenpetrov.info> Message-ID: On Sun, Jan 25, 2009 at 3:55 PM, Roumen Petrov wrote: > Luke Kenneth Casson Leighton wrote: >>>> >>>> [SNIP] >>>> Yes it is enough to encapsulate memory allocation and file functions >>>> into >>>> python shared library. The python provide memory allocation functions, >>>> but >>>> not all modules use them. File functions are hiden by posixmodule and >>>> python >>>> modules can't use them. >>> >>> except ... posixmodule gets renamed to ntmodule .... oh, i see what >>> you mean: python modules aren't allowed _direct_ access to msvcrtNN's >>> file functions, they have to go via posixmodule-renamed-to-ntmodule. >> >> .... thinking about this some more... posixmodule.c is linked (by >> default) into pythonNN.dll, thus making pythonNN.dll totally dependent >> on a version of msvcrt. > > This is not problem. If python*.dll hide msvcrt and other modules depend > directly from python*.dll I expect issue to be resolved. i.e. python*.dll to > be "portable runtime interface". yehhhh :) well - it looks like i am having success with removing all references to data e.g. Py_NoneStruct, all of the PyExc_*Warning and PyExc_*Error, all of the Py*_Types and more. i'm making sure that macros are heavily used - so that on systems where data _can_ be accessed through dynamic shared objects, it's done so.

#if defined(MS_WINDOWS) || defined(__MINGW32__)
/* Define macros for conveniently creating "getter" functions,
 * to avoid restrictions on dlls being unable to access data.
 * see #5056 */
/* use these for data that is already a pointer */
#define PyAPI_GETHDR(type, obj) \
    PyAPI_FUNC(type) _Py_Get_##obj(void);
#define PyAPI_GETIMPL(type, obj) \
    PyAPI_FUNC(type) _Py_Get_##obj(void) { return (type)obj; }
#define _PYGET(obj) _Py_Get_##obj()
/* use these for data where a pointer (&) to the object is returned
 * e.g. no longer #define Py_None (&Py_NoneStruct) but
 * #define Py_None _PYGETPTR(Py_NoneStruct) */
#define PyAPI_GETHDRPTR(type, obj) \
    PyAPI_FUNC(type) _Py_Get_##obj(void);
#define PyAPI_GETIMPLPTR(type, obj) \
    PyAPI_FUNC(type) _Py_Get_##obj(void) { return (type)&obj; }
#define _PYGETPTR(obj) _Py_Get_##obj()
#else
/* on systems where data can be accessed directly in shared modules,
 * as an optimisation, return the object itself, directly. */
#define PyAPI_GETHDR(type, obj) ;
#define PyAPI_GETIMPL(type, obj) ;
#define PyAPI_GETHDRPTR(type, obj) ;
#define PyAPI_GETIMPLPTR(type, obj) ;
#define _PYGET(obj) (obj)
#define _PYGETPTR(obj) (&obj)
#endif /* defined(MS_WINDOWS) || defined(__MINGW32__) */

as you can see from the Py_None example, on non-dll-based systems, you get... wow, a macro which returns... exactly what was returned before. zero impact. on windows, you end up defining, creating and using a "getter" function. two types. one which returns the object, the other returns a pointer to the object. l. From barry at python.org Sun Jan 25 18:37:29 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 25 Jan 2009 12:37:29 -0500 Subject: [Python-Dev] [Python-checkins] PEP 374 In-Reply-To: References: <20090124230633.9B1801E4026@bag.python.org> <6247AFD31A7A43E7AAB7109012EFBF7C@RaymondLaptop1> <6FC2E18B-F3FD-4B3F-90C0-4EA611AF93ED@python.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 25, 2009, at 11:49 AM, Antoine Pitrou wrote: > Barry Warsaw <barry at python.org> writes: >> >> Besides, certain developments like support for the svn wire protocol >> in bzr would make the WFC (we fear change :) argument moot.
> > This is an argument *against* the usefulness of switching! Why? This simply allows those who are happy with the status quo to live peacefully with those who want the benefits of the new capabilities. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSXyjW3EjvBPtnXfVAQLZHwP/UAwA/fcfaDDoaQI1Qa0F50u57AESc/GN bPIgUe6I93fwgAHx/+9jQWxgVJCjIOWlSavZqtLOr7nPR6gN4B27d4XpntLE7O47 3JXkV5QEZL0YDob0M33qAPSgPZsMv1++fWo9FDrk0o9SVzmsrP4OytsUsykRiOkC gkMtAPnzeAQ= =j2t2 -----END PGP SIGNATURE----- From solipsis at pitrou.net Sun Jan 25 18:44:05 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 25 Jan 2009 17:44:05 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] PEP 374 References: <20090124230633.9B1801E4026@bag.python.org> <6247AFD31A7A43E7AAB7109012EFBF7C@RaymondLaptop1> <6FC2E18B-F3FD-4B3F-90C0-4EA611AF93ED@python.org> Message-ID: Barry Warsaw <barry at python.org> writes: > >> > >> Besides, certain developments like support for the svn wire protocol > >> in bzr would make the WFC (we fear change :) argument moot. > > > > This is an argument *against* the usefulness of switching! > > Why? > > This simply allows those who are happy with the status quo to live > peacefully with those who want the benefits of the new capabilities. Hmm, perhaps I misunderstood what you were saying... Would the bzr client allow accessing an svn server, or the reverse? (please note that we already have bzr, hg and git mirrors.
So it's not like people wanting to use a DVCS are out of solutions) From barry at python.org Sun Jan 25 18:51:17 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 25 Jan 2009 12:51:17 -0500 Subject: [Python-Dev] [Python-checkins] PEP 374 In-Reply-To: References: <20090124230633.9B1801E4026@bag.python.org> <6247AFD31A7A43E7AAB7109012EFBF7C@RaymondLaptop1> <6FC2E18B-F3FD-4B3F-90C0-4EA611AF93ED@python.org> Message-ID: <7EB43365-E8A5-444B-87FC-4AF26EE0B058@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 25, 2009, at 12:44 PM, Antoine Pitrou wrote: > Barry Warsaw <barry at python.org> writes: >>>> >>>> Besides, certain developments like support for the svn wire >>>> protocol >>>> in bzr would make the WFC (we fear change :) argument moot. >>> >>> This is an argument *against* the usefulness of switching! >> >> Why? >> >> This simply allows those who are happy with the status quo to live >> peacefully with those who want the benefits of the new capabilities. > > Hmm, perhaps I misunderstood what you were saying... Would the bzr > client allow > accessing an svn server, or the reverse? The reverse. IIUC, the bzr server would be able to speak to svn clients. bzr supports a centralized model perfectly fine alongside a decentralized model, so any current developers who want to continue using their svn client can do so, and everyone else could use the bzr client to work decentralized. > (please note that we already have bzr, hg and git mirrors. So it's > not like > people wanting to use a DVCS are out of solutions) Right, but ideally the server would support the full distributed model and users would choose which client and model they are most comfortable with. One option would be to promote the mirrors to official status, but this is clunky because svn has a less complete model than a dvcs so it's more difficult to go in that direction.
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSXymlXEjvBPtnXfVAQJAoAP/SAqlCiEB883wLH1eXrkiMxc5MlChodqt BHBdaO2kZgs0rJCfGfoVD/ly65tuheahP5lwMsoa6don6uKD7lkzJkvBSNjtg1ZL 4U/MTIQWtg8WbDJUPaPT8ArV9Xo6/Y+B1yeFz+Ge5hY29+PGEop9pAYOXKUl/Jyk hTlhbuXQqkA= =haid -----END PGP SIGNATURE----- From martin at v.loewis.de Sun Jan 25 19:37:57 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sun, 25 Jan 2009 19:37:57 +0100 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> Message-ID: <497CB185.3010601@v.loewis.de> > There's a possible third way. I've heard (though haven't investigated) > that some people are working on supporting the svn wire protocol in the > bzr server. This would mean that anybody who's still comfortable with > svn and feels no need to change their current habits can continue to > work the way they always have. Those that want the extra benefits of a > DVCS, or who do not have commit access to the code.python.org branches > would have viable alternatives. Of course, those without commit access *already* have viable alternatives, IIUC, by means of the automatic ongoing conversion of the svn repository to bzr and hg (and, IIUC, git - or perhaps you can use git-svn without the need for server-side conversion). So a conversion to a DVCS would only benefit those committers who see a benefit in using a DVCS (*) (and would put a burden on those committers who see a DVCS as a burden). It would also put a burden on contributors who are uncomfortable with using a DVCS. Regards, Martin (*) I'm probably missing something, but ISTM that committers can already use the DVCS - they only need to create a patch just before committing. 
This, of course, is somewhat more complicated than directly pushing the changes to the server, but it still gives them most of what is often reported as the advantage of a DVCS (local commits, ability to have many branches simultaneously, ability to share work-in-progress, etc). In essence, committers wanting to use a DVCS can do so today, by acting as if they were non-committers, and only using svn for actual changes to the master repository. From lkcl at lkcl.net Sun Jan 25 19:45:13 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 18:45:13 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: > On Sun, Jan 25, 2009 at 9:01 AM, Matthieu Brucher > wrote: > > 2009/1/25 Luke Kenneth Casson Leighton : > >> according to the wikipedia entry on dlls, dlls do not support data, > >> only functions. > > > > What do you mean by "not support data"? Having global data variables in a dll? > > In wikipedia, it is explicitely told that this is possible to have > > data (http://en.wikipedia.org/wiki/Dynamic-link_library). Without > > them, shared library cannot be used. matthieu, thank you for responding. from http://en.wikipedia.org/wiki/Dynamic-link_library: "Third, dynamic linking is inherently the wrong model for paged memory managed systems. Such systems work best with the idea that code is invariant from the time of assembly/compilation on. ........... Data references do not need to be so vectored because DLLs do not share data." ^^^^^^^^^^^^^^^^^^^^ does anyone happen to know what this means? also, what do you mean by "without data, shared library cannot be used"? you can _always_ call a function which returns a pointer to the data, rather than access the data directly. > Indeed. 
> That's why the header files contain
> #define PyAPI_DATA(RTYPE) extern __declspec(dllexport) RTYPE
> #define PyAPI_DATA(RTYPE) extern __declspec(dllimport) RTYPE

curt, thank you for responding. i'd seen this: i understood it - and... yet... mingw happily segfaults when asked to access _any_ data in _any_ object file of the python2N dll. Py_NoneStruct, PyExc_* (of which there are about 50), Py*_Type - all of them. solutions so far involve ensuring that anything declared with PyAPI_DATA is *NEVER* accessed [across a dll boundary] - by for example moving the module into python2N.dll. also, yes i had looked up how to do .def files, and how __declspec(dllexport) etc. work. of all the examples that you find about dlltool, mingw, dlls, defs, etc. they _all_ say "function." to declare a _function_ you do X, Y and Z. not one of them says "to place data in a dll, you do X Y and Z". then, looking at the wine dlls and .defs, i haven't yet found a _single_ one which returns data - they're _all_ functions (from looking so far. e.g. i expected MSVCRT.DLL errno to be an int - it's not: it's a function). *sigh*. if this turns out to be yet another gcc / mingw bug i'm going to be slightly annoyed. only slightly, because this _is_ free software, after all :) l. From solipsis at pitrou.net Sun Jan 25 19:45:21 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sun, 25 Jan 2009 18:45:21 +0000 (UTC) Subject: [Python-Dev] PEP 374 (DVCS) now in reST References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: Martin v. Löwis <martin at v.loewis.de> writes: > In essence, committers wanting to use a DVCS can do so today, by acting > as if they were non-committers, and only using svn for actual changes > to the master repository. Indeed. It is how I work. Regards Antoine.
From rlight2 at gmail.com Sun Jan 25 19:42:06 2009 From: rlight2 at gmail.com (Ross Light) Date: Sun, 25 Jan 2009 10:42:06 -0800 Subject: [Python-Dev] Issue 4285 Patch Message-ID: <4553f0901251042j12a63434g6b4c56cceb665a98@mail.gmail.com> Hello, python-dev! My name is Ross Light. I was a documentation contributor at GHOP a couple years back and I wanted to start contributing to the core interpreter. I found Issue 4285 and decided to write a patch for it. This is my first patch, so I'd like someone to review my patch and make sure I'm doing things right. http://bugs.python.org/issue4285 Thanks! Cheers, Ross Light From lkcl at lkcl.net Sun Jan 25 19:57:19 2009 From: lkcl at lkcl.net (Luke Kenneth Casson Leighton) Date: Sun, 25 Jan 2009 18:57:19 +0000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: > Luke Kenneth Casson Leighton schrieb: > > i mean... has anyone _written_ a third party module that returns > > Py_None on a c-code module and had it compiled on windows? > Lot's of people have written 3rd party extensions that work on Windows > and can return a Py_None object. ahh - ok. so... i will have to find out what the heck is going on... ohno, it couldn't be as simple as i left out "DATA" on the lines of the export files, could it?? :) > Please stop spamming the Python developer list with irrelevant, wrong, > confusing and sometimes offensive messages. To be perfectly honest it's > annoying. i'm sorry to hear that you believe my messages to be sometimes offensive. i'm sorry that you are annoyed. i'm sorry that i am learning about things and that i believe that people would like to help cooperate on the development of python as a free software project, by helping point me in the right directions. 
i'm sorry that i am unable to get things perfect the first time, so that i have to ask people for help and advice, and i'm sorry that you are annoyed by my asking. > If you want to propose a new feature then python-ideas is the right > mailing list. thank you for informing me of that - i was not aware of that list: i believed that the python-dev mailing list would be the location for discussion of development and ports of python. l. From brett at python.org Sun Jan 25 22:00:04 2009 From: brett at python.org (Brett Cannon) Date: Sun, 25 Jan 2009 13:00:04 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CB185.3010601@v.loewis.de> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: On Sun, Jan 25, 2009 at 10:37, "Martin v. Löwis" wrote: >> There's a possible third way. I've heard (though haven't investigated) >> that some people are working on supporting the svn wire protocol in the >> bzr server. This would mean that anybody who's still comfortable with >> svn and feels no need to change their current habits can continue to >> work the way they always have.
Those that want the extra benefits of a >> DVCS, or who do not have commit access to the code.python.org branches >> would have viable alternatives. > > Of course, those without commit access *already* have viable > alternatives, IIUC, by means of the automatic ongoing conversion of > the svn repository to bzr and hg (and, IIUC, git - or perhaps you > can use git-svn without the need for server-side conversion). > > So a conversion to a DVCS would only benefit those committers who > see a benefit in using a DVCS (*) (and would put a burden on those > committers who see a DVCS as a burden). It would also put a burden > on contributors who are uncomfortable with using a DVCS. > > Regards, > Martin > > (*) I'm probably missing something, but ISTM that committers can already > use the DVCS - they only need to create a patch just before committing. > This, of course, is somewhat more complicated than directly pushing the > changes to the server, but it still gives them most of what is often > reported as the advantage of a DVCS (local commits, ability to have many > branches simultaneously, ability to share work-in-progress, etc). In > essence, committers wanting to use a DVCS can do so today, by acting > as if they were non-committers, and only using svn for actual changes > to the master repository. > If I can't choose a clear winner I am going to look into what it takes to run directly on top of svn to avoid the extra step for committers. Otherwise I will get standardized instructions for the three DVCSs and maybe write a script or three to make it dead-simple to work with the DVCSs but have our official repository be svn so we can all use the DVCSs as we see fit until a clear winner springs up.
-Brett From fuzzyman at voidspace.org.uk Sun Jan 25 22:03:26 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Sun, 25 Jan 2009 21:03:26 +0000 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: <497CD39E.6010600@voidspace.org.uk> Brett Cannon wrote: > On Sun, Jan 25, 2009 at 10:37, "Martin v. Löwis" wrote: > >>> There's a possible third way. I've heard (though haven't investigated) >>> that some people are working on supporting the svn wire protocol in the >>> bzr server. This would mean that anybody who's still comfortable with >>> svn and feels no need to change their current habits can continue to >>> work the way they always have. Those that want the extra benefits of a >>> DVCS, or who do not have commit access to the code.python.org branches >>> would have viable alternatives. >>> >> Of course, those without commit access *already* have viable >> alternatives, IIUC, by means of the automatic ongoing conversion of >> the svn repository to bzr and hg (and, IIUC, git - or perhaps you >> can use git-svn without the need for server-side conversion). >> >> So a conversion to a DVCS would only benefit those committers who >> see a benefit in using a DVCS (*) (and would put a burden on those >> committers who see a DVCS as a burden). It would also put a burden >> on contributors who are uncomfortable with using a DVCS. >> >> Regards, >> Martin >> >> (*) I'm probably missing something, but ISTM that committers can already >> use the DVCS - they only need to create a patch just before committing.
>> This, of course, is somewhat more complicated than directly pushing the >> changes to the server, but it still gives them most of what is often >> reported as the advantage of a DVCS (local commits, ability to have many >> branches simultaneously, ability to share work-in-progress, etc). In >> essence, committers wanting to use a DVCS can do so today, by acting >> as if they were non-committers, and only using svn for actual changes >> to the master repository. >> >> > > If I can't choose a clear winner I am going to look into what it take > to run directly on top of svn to avoid the extra step for committers. > Otherwise I will get standardized instructions for the three DVCSs and > maybe write a script or three to make it dead-simple to work with the > DVCSs but have our official repository be svn so we can all use the > DVCSs as we see fit until a clear winner springs up. > Well, that sounds like an ideal situation to end up in. Is there a downside other than the work it creates for you? Michael > -Brett > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From bugtrack at roumenpetrov.info Sun Jan 25 22:05:31 2009 From: bugtrack at roumenpetrov.info (Roumen Petrov) Date: Sun, 25 Jan 2009 23:05:31 +0200 Subject: [Python-Dev] progress: compiling python2.5 under msys (specifically but not exclusively under wine) with msvcr80 In-Reply-To: <4978FDA4.8050101@roumenpetrov.info> References: <50019.151.53.150.247.1232570935.squirrel@webmail1.pair.com> <4978FDA4.8050101@roumenpetrov.info> Message-ID: <497CD41B.4000900@roumenpetrov.info> Roumen Petrov wrote: > Cesare Di Mauro wrote: >> Have you made some benchmarks like pystone? 
It seems to me version 2.6.1 is not an optimized build, so I removed (uninstalled) it. I repeated the pystone tests with an optimized GCC (mingw32) build.

- python-trunk-GCC (mingw32, local, native, optimized)
-- shell=cmd.exe
35453,3; 35700,4; 35747,3; 35615,5; 35632,3; 35661,8; 35547,1
average=35622,5 deviation=98,0
-- shell=bash.exe (msys)
36002,1; 35884,4; 35961,7; 35859,5; 35997,3; 36062,9; 35747,1
average=35930,7 deviation=107,2
- python-2.6-MSVC
-- shell=cmd.exe
35891,3; 35827,9; 35791,3; 35901,7; 35876,5; 36081,1; 36149,2
average=35931,3 deviation=132,7
-- shell=bash.exe (msys)
35532,9; 35621,1; 35526,8; 35639,4; 35671,2; 35702,4; 35633,0
average=35618,1 deviation=66,1

I have no idea why the performance of the official Python 2.6 went down (see the previous results below). It is the same PC, and every tested program loads its own files. The results show unexpected behaviour:
- the MSVC build is faster by ~0.9% when run under cmd.exe rather than under msys bash;
- the GCC build is faster by ~0.9% when run under msys bash.
Otherwise the results look similar, but note that the builds use different source bases, so in this case we may not be able to compare them. The old results:

> There is result from pystone test test run an old PC (NT 5.1):
> - 2.6(official build):
> 42194,6; 42302,4; 41990,8; 42658,0; 42660,6; 42770,1
> average=42429,4
> deviation=311,6
> - 2.6.1(official build):
> 35612,1; 35778,8; 35666,7; 35697,9; 35514,9; 35654,0
> average=35654,1
> deviation=88,1
> - trunk(my mingw based build):
> 35256,7; 35272,5; 35247,2; 35270,7; 35225,6; 35233,5
> average=35251,0
> deviation=19,2
>
> There is problem with python performance between 2.6 and 2.6.1 ~ 19% :(.
> Also the test for GCC-mingw is not with same source base.
> > Roumen Roumen From brett at python.org Sun Jan 25 22:23:20 2009 From: brett at python.org (Brett Cannon) Date: Sun, 25 Jan 2009 13:23:20 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CD39E.6010600@voidspace.org.uk> References: <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <497CD39E.6010600@voidspace.org.uk> Message-ID: On Sun, Jan 25, 2009 at 13:03, Michael Foord wrote: > Brett Cannon wrote: >> >> On Sun, Jan 25, 2009 at 10:37, "Martin v. Löwis" >> wrote: >> >>>> >>>> There's a possible third way. I've heard (though haven't investigated) >>>> that some people are working on supporting the svn wire protocol in the >>>> bzr server. This would mean that anybody who's still comfortable with >>>> svn and feels no need to change their current habits can continue to >>>> work the way they always have. Those that want the extra benefits of a >>>> DVCS, or who do not have commit access to the code.python.org branches >>>> would have viable alternatives. >>>> >>> >>> Of course, those without commit access *already* have viable >>> alternatives, IIUC, by means of the automatic ongoing conversion of >>> the svn repository to bzr and hg (and, IIUC, git - or perhaps you >>> can use git-svn without the need for server-side conversion). >>> >>> So a conversion to a DVCS would only benefit those committers who >>> see a benefit in using a DVCS (*) (and would put a burden on those >>> committers who see a DVCS as a burden). It would also put a burden >>> on contributors who are uncomfortable with using a DVCS. >>> >>> Regards, >>> Martin >>> >>> (*) I'm probably missing something, but ISTM that committers can already >>> use the DVCS - they only need to create a patch just before committing.
>>> This, of course, is somewhat more complicated than directly pushing the >>> changes to the server, but it still gives them most of what is often >>> reported as the advantage of a DVCS (local commits, ability to have many >>> branches simultaneously, ability to share work-in-progress, etc). In >>> essence, committers wanting to use a DVCS can do so today, by acting >>> as if they were non-committers, and only using svn for actual changes >>> to the master repository. >>> >>> >> >> If I can't choose a clear winner I am going to look into what it take >> to run directly on top of svn to avoid the extra step for committers. >> Otherwise I will get standardized instructions for the three DVCSs and >> maybe write a script or three to make it dead-simple to work with the >> DVCSs but have our official repository be svn so we can all use the >> DVCSs as we see fit until a clear winner springs up. >> > > Well, that sounds like an ideal situation to end up in. Is there a downside > other than the work it creates for you? What, isn't creating even more work for me enough of a downside? =) There is also the issue of support. If we as a development team start using four different VCSs then that will severely cut down on who can help whom. The only reason I have been able to keep the dev FAQ full of such key svn commands is because inevitably someone on this list knew how to do something if I didn't. Spread that across three more DVCSs and the chances of someone knowing the best solution for something dwindles. It also means three more VCSs to keep up and running on code.python.org. While it has worked so far, that's just because we have been going with what Debian stable has. If you want some new-fangled thing, e.g. bzr's 1.9 tree support which apparently makes a huge performance difference (Barry is on vacation but I am prodding him to put the details in the PEP when he gets back), someone we trust will then need to step up to stay on top of security patches, etc.
So yes, it's a nice solution if a winner cannot be chosen, but I don't think that should necessarily be the end of this quite yet. -Brett From barry at python.org Sun Jan 25 23:15:41 2009 From: barry at python.org (Barry Warsaw) Date: Sun, 25 Jan 2009 17:15:41 -0500 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CB185.3010601@v.loewis.de> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 25, 2009, at 1:37 PM, Martin v. Löwis wrote: > (*) I'm probably missing something, but ISTM that committers can > already > use the DVCS - they only need to create a patch just before > committing. > This, of course, is somewhat more complicated than directly pushing > the > changes to the server, but it still gives them most of what is often > reported as the advantage of a DVCS (local commits, ability to have > many > branches simultaneously, ability to share work-in-progress, etc). In > essence, committers wanting to use a DVCS can do so today, by acting > as if they were non-committers, and only using svn for actual changes > to the master repository. The approach you outline also has the disadvantages of losing history at the point of patch generation, and causing a discontinuity in the chain of revisions leading up to that point. Depending on the specific changes being merged, this may or may not be important. You're right that we can do this today, but I still believe there are advantages to supporting a DVCS for the official branches.
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSXzkjnEjvBPtnXfVAQIUOAP/SLkPAkIqDKNpoIpbaCJTsoLwFsSKj58P ISKqF7QkMgjl+cnw4YngHHwJr+OniX4cR1Wc5S9LPB3xMgsoOtxqYWmvfG1ReJRs fbmI1iOOCmOY1MltRlPErihS3wk7+37pc4lIkEvClvZMRcoLq3JjborIQjiy0ORY pqmovGlx/AI= =wXVD -----END PGP SIGNATURE----- From guido at python.org Sun Jan 25 23:50:59 2009 From: guido at python.org (Guido van Rossum) Date: Sun, 25 Jan 2009 14:50:59 -0800 Subject: [Python-Dev] Operator module deprecations In-Reply-To: <497BBED4.1090005@gmail.com> References: <497BBED4.1090005@gmail.com> Message-ID: +1 indeedy. On Sat, Jan 24, 2009 at 5:22 PM, Nick Coghlan wrote: > Brett Cannon wrote: >> On Sat, Jan 24, 2009 at 14:46, Raymond Hettinger wrote: >>> I would like to deprecate some outdated functions in the operator module. >>> >>> The isSequenceType(), isMappingType(), and isNumberType() >>> functions never worked reliably and now their >>> intended purpose has been largely fulfilled by >>> ABCs. >>> >>> The isCallable() function has long been deprecated >>> and I think it's finally time to rip it out. >>> >>> The repeat() function never really corresponded to an >>> operator. Instead, it reflected an underlying implementation detail (namely >>> the naming of the sq_repeat slot and the abstract C API function >>> PySequence_Repeat). That functionality is already exposed by operator.mul: >>> >>> operator.mul('abc', 3) --> 'abcabcabc' >> >> +1 to all of it. > > What Brett said. > > Cheers, > Nick. 
> > -- > Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia > --------------------------------------------------------------- > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From ptoal at takeflight.ca Sun Jan 25 23:05:41 2009 From: ptoal at takeflight.ca (Patrick Toal) Date: Sun, 25 Jan 2009 17:05:41 -0500 Subject: [Python-Dev] Changes needed for python-2.6.spec to build successfully Message-ID: Hello, I'm not subscribed to this list, but this is the best place I could think of to send this. Please feel free to ignore if this has already been addressed, or if I've approached it completely wrong. When trying to perform an rpmbuild of the python-2.6.1 tarball on my CentOS 5.1 system using the included python-2.6.spec file, I ran into a bunch of vexing problems. My solution to them is included in the diff to the specfile attached. Some of these fixes are probably not appropriate for everyone (eg: my need for shared libs, vs static). I hope this saves someone else a bit of time. :) -------------- next part -------------- A non-text attachment was scrubbed... Name: python-2.6.spec.diff Type: application/octet-stream Size: 5726 bytes Desc: not available URL: -------------- next part -------------- Cheers, Pat ---- Patrick Toal ptoal at takeflight.ca From lists at cheimes.de Mon Jan 26 00:16:35 2009 From: lists at cheimes.de (Christian Heimes) Date: Mon, 26 Jan 2009 00:16:35 +0100 Subject: [Python-Dev] Changes needed for python-2.6.spec to build successfully In-Reply-To: References: Message-ID: Patrick Toal schrieb: > Hello, > > I'm not subscribed to this list, but this is the best place I could > think of to send this. 
Please feel free to ignore if this has already > been addressed, or if I've approached it completely wrong. > > When trying to perform an rpmbuild of the python-2.6.1 tarball on my > CentOS 5.1 system using the included python-2.6.spec file, I ran into a > bunch of vexing problems. My solution to them is included in the diff > to the specfile attached. Some of these fixes are probably not > appropriate for everyone (eg: my need for shared libs, vs static). > > I hope this saves someone else a bit of time. :) Thanks Patrick! Please open a ticket in our tracker and attach your patch. Patches in mails tend to get lost. Christian From martin at v.loewis.de Mon Jan 26 00:27:42 2009 From: martin at v.loewis.de ("Martin v. Löwis") Date: Mon, 26 Jan 2009 00:27:42 +0100 Subject: [Python-Dev] Changes needed for python-2.6.spec to build successfully In-Reply-To: References: Message-ID: <497CF56E.10802@v.loewis.de> > I hope this saves someone else a bit of time. :) Please submit the parts that you consider of general use to the bug tracker, so we can include it in future releases. Regards, Martin From python at rcn.com Mon Jan 26 00:50:19 2009 From: python at rcn.com (Raymond Hettinger) Date: Sun, 25 Jan 2009 15:50:19 -0800 Subject: [Python-Dev] Operator module deprecations References: <497BBED4.1090005@gmail.com> Message-ID: <3F6A09444036498DA949DA776D20726A@RaymondLaptop1> For Py3.0.1, can we just rip these out and skip deprecation? I don't think they will be missed at all. Raymond ----- Original Message ----- From: "Guido van Rossum" To: "Nick Coghlan" Cc: Sent: Sunday, January 25, 2009 2:50 PM Subject: Re: [Python-Dev] Operator module deprecations > +1 indeedy. > > On Sat, Jan 24, 2009 at 5:22 PM, Nick Coghlan wrote: >> Brett Cannon wrote: >>> On Sat, Jan 24, 2009 at 14:46, Raymond Hettinger wrote: >>>> I would like to deprecate some outdated functions in the operator module.
>>>> >>>> The isSequenceType(), isMappingType(), and isNumberType() >>>> functions never worked reliably and now their >>>> intended purpose has been largely fulfilled by >>>> ABCs. >>>> >>>> The isCallable() function has long been deprecated >>>> and I think it's finally time to rip it out. >>>> >>>> The repeat() function never really corresponded to an >>>> operator. Instead, it reflected an underlying implementation detail (namely >>>> the naming of the sq_repeat slot and the abstract C API function >>>> PySequence_Repeat). That functionality is already exposed by operator.mul: >>>> >>>> operator.mul('abc', 3) --> 'abcabcabc' >>> >>> +1 to all of it. >> >> What Brett said. >> >> Cheers, >> Nick. >> >> -- >> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >> --------------------------------------------------------------- >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/python%40rcn.com From guido at python.org Mon Jan 26 00:55:24 2009 From: guido at python.org (Guido van Rossum) Date: Sun, 25 Jan 2009 15:55:24 -0800 Subject: [Python-Dev] Operator module deprecations In-Reply-To: <3F6A09444036498DA949DA776D20726A@RaymondLaptop1> References: <497BBED4.1090005@gmail.com> <3F6A09444036498DA949DA776D20726A@RaymondLaptop1> Message-ID: Since 3.0.1 is going to do a couple of these already, I think that's fine. On Sun, Jan 25, 2009 at 3:50 PM, Raymond Hettinger wrote: > For Py3.0.1, can we just rip these out and skip deprecation? > I don't think they will be missed at all. 
> > Raymond > > ----- Original Message ----- From: "Guido van Rossum" > To: "Nick Coghlan" > Cc: > Sent: Sunday, January 25, 2009 2:50 PM > Subject: Re: [Python-Dev] Operator module deprecations > > >> +1 indeedy. >> >> On Sat, Jan 24, 2009 at 5:22 PM, Nick Coghlan wrote: >>> >>> Brett Cannon wrote: >>>> >>>> On Sat, Jan 24, 2009 at 14:46, Raymond Hettinger wrote: >>>>> >>>>> I would like to deprecate some outdated functions in the operator >>>>> module. >>>>> >>>>> The isSequenceType(), isMappingType(), and isNumberType() >>>>> functions never worked reliably and now their >>>>> intended purpose has been largely fulfilled by >>>>> ABCs. >>>>> >>>>> The isCallable() function has long been deprecated >>>>> and I think it's finally time to rip it out. >>>>> >>>>> The repeat() function never really corresponded to an >>>>> operator. Instead, it reflected an underlying implementation detail >>>>> (namely >>>>> the naming of the sq_repeat slot and the abstract C API function >>>>> PySequence_Repeat). That functionality is already exposed by >>>>> operator.mul: >>>>> >>>>> operator.mul('abc', 3) --> 'abcabcabc' >>>> >>>> +1 to all of it. >>> >>> What Brett said. >>> >>> Cheers, >>> Nick. 
>>> >>> -- >>> Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia >>> --------------------------------------------------------------- >>> _______________________________________________ >>> Python-Dev mailing list >>> Python-Dev at python.org >>> http://mail.python.org/mailman/listinfo/python-dev >>> Unsubscribe: >>> http://mail.python.org/mailman/options/python-dev/guido%40python.org >>> >> >> >> -- >> --Guido van Rossum (home page: http://www.python.org/~guido/) >> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: >> http://mail.python.org/mailman/options/python-dev/python%40rcn.com > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From carl at carlsensei.com Mon Jan 26 01:53:02 2009 From: carl at carlsensei.com (Carl Johnson) Date: Sun, 25 Jan 2009 14:53:02 -1000 Subject: [Python-Dev] Incorrect documentation (and possibly implementation) for rlcompleter.Completer? Message-ID: The documentation at http://docs.python.org/library/rlcompleter.html claims that > Completer.complete(text, state) > > Return the *state*th completion for *text*. > > If called for text that doesn't include a period character > ('.'), it will complete from names currently defined in __main__, > __builtin__ and keywords (as defined by the keyword module). > > If called for a dotted name, it will try to evaluate anything > without obvious side-effects (functions will not be evaluated, but > it can generate calls to __getattr__()) up to the last part, and > find matches for the rest via the dir() function. Any exception > raised during the evaluation of the expression is caught, silenced > and None is returned. In other words, it claims to use dir(obj) as part of the tab completion process. This is not true (using Python 2.6.1 on OS X): >>> class B(object): ... def __dir__(self): return dir(u"") #Makes B objects look like strings ...
>>> b = B() >>> dir(b) ['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '_formatter_field_name_split', '_formatter_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdecimal', 'isdigit', 'islower', 'isnumeric', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill'] >>> c = rlcompleter.Completer() >>> c.complete("b.", 0) #Notice that it does NOT return __add__ u'b.__class__(' >>> c.matches #Notice that this list is completely different from the list given by dir(b) [u'b.__class__(', u'b.__delattr__(', u'b.__doc__', u'b.__format__(', u'b.__getattribute__(', u'b.__hash__(', u'b.__init__(', u'b.__new__(', u'b.__reduce__(', u'b.__reduce_ex__(', u'b.__repr__(', u'b.__setattr__(', u'b.__sizeof__(', u'b.__str__(', u'b.__subclasshook__(', u'b.__class__(', u'b.__class__(', u'b.__delattr__(', u'b.__dict__', u'b.__dir__(', u'b.__doc__', u'b.__format__(', u'b.__getattribute__(', u'b.__hash__(', u'b.__init__(', u'b.__module__', u'b.__new__(', u'b.__reduce__(', u'b.__reduce_ex__(', u'b.__repr__(', u'b.__setattr__(', u'b.__sizeof__(', u'b.__str__(', u'b.__subclasshook__(', u'b.__weakref__', u'b.__class__(', u'b.__delattr__(', u'b.__doc__', u'b.__format__(', u'b.__getattribute__(', u'b.__hash__(', u'b.__init__(', u'b.__new__(', u'b.__reduce__(', u'b.__reduce_ex__(', u'b.__repr__(', u'b.__setattr__(', u'b.__sizeof__(', u'b.__str__(', 
u'b.__subclasshook__('] As I see it, there are two ways to fix the problem: Change the documentation or change rlcompleter.Completer. I think the latter option is preferable (although it might have to wait for Python 2.7/3.1), but I thought I would ask other people if I'm missing something and if not which fix is preferred. If other people agree that it's a bug, I'll file it. -- Carl Johnson From ncoghlan at gmail.com Mon Jan 26 02:22:53 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Jan 2009 11:22:53 +1000 Subject: [Python-Dev] Incorrect documentation (and possibly implementation) for rlcompleter.Completer? In-Reply-To: References: Message-ID: <497D106D.6070401@gmail.com> Carl Johnson wrote: > As I see it, there are two ways to fix the problem: Change the > documentation or change rlcompleter.Completer. I think the latter option > is preferable (although it might have to wait for Python 2.7/3.1), but I > thought I would ask other people if I'm missing something and if not > which fix is preferred. If other people agree that it's a bug, I'll file > it. It needs to go on the tracker regardless of whether the problem is in the documentation or in the implementation, so file away. Given that rlcompleter already evaluates the expression preceding the last "." when asked to perform a completion, you're probably right that actually invoking dir() on the result as the documentation claims is the way to go. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From carl at carlsensei.com Mon Jan 26 03:02:18 2009 From: carl at carlsensei.com (Carl Johnson) Date: Sun, 25 Jan 2009 16:02:18 -1000 Subject: [Python-Dev] Incorrect documentation (and possibly implementation) for rlcompleter.Completer?
In-Reply-To: <497D106D.6070401@gmail.com> References: <497D106D.6070401@gmail.com> Message-ID: <35D480C0-17FB-4E05-A3F7-EF00442335A3@carlsensei.com> On 2009/01/25, at 3:22 pm, Nick Coghlan wrote: > It needs to go on the tracker regardless of whether the problem is in > the documentation or in the implementation, so file away. Issue #5062: http://bugs.python.org/issue5062 -- Carl Johnson From scott+python-dev at scottdial.com Mon Jan 26 03:08:13 2009 From: scott+python-dev at scottdial.com (Scott Dial) Date: Sun, 25 Jan 2009 21:08:13 -0500 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: <497D1B0D.5090206@scottdial.com> Luke Kenneth Casson Leighton wrote: > i'm sorry to hear that you believe my messages to be sometimes > offensive. i'm sorry that you are annoyed. i'm sorry that i am > learning about things and that i believe that people would like to > help cooperate on the development of python as a free software > project, by helping point me in the right directions. i'm sorry that i > am unable to get things perfect the first time, so that i have to ask > people for help and advice, and i'm sorry that you are annoyed by my > asking. Nice job with the fake-apology-that-is-actually-an-attack maneuver there. I believe the main complaint is that you clearly have not exercised enough due diligence to find the answers yourself before asking on the list. There are a number of examples in this very thread of you replying to yourself because you just figured out something new that you didn't in the previous email. You should not consider this list an open forum for stream-of-thought style emails. Pointed, well-explored questions are the only sort that will be useful to you (in getting people to read your questions and answer them) and the community (in not having an enormous amount of low-SNR emails to sort through). 
These threads with obviously disprovable "facts" are not useful to anyone at-large, whether they are (dubiously) educational to you or not -- it would be more educational for you to explore your assumption and find out it's wrong all on your own. >> If you want to propose a new feature then python-ideas is the right >> mailing list. > > thank you for informing me of that - i was not aware of that list: i > believed that the python-dev mailing list would be the location for > discussion of development and ports of python. As far as I can tell, you have replied to your own threads more than anyone else on the mailing list, and you should interpret that as a general lack of interest from the developers reading this list. I think it's been made clear that nobody is opposed to having an all-free build of Python for Win32, however it is not the focus of anyone's interest here because it's "free enough" for our purposes. I believe Martin wrote you a reply that explained that quite well. -- Scott Dial scott at scottdial.com scodial at cs.indiana.edu From jared.grubb at gmail.com Mon Jan 26 04:27:09 2009 From: jared.grubb at gmail.com (Jared Grubb) Date: Sun, 25 Jan 2009 19:27:09 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CB185.3010601@v.loewis.de> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: <573EFD3B-2215-4747-B4B0-45C35A9F9F86@gmail.com> Regardless of the outcome, those that want to use SVN can use SVN, and those that want to use "chosen DVCS" can use that. In the end, which is the more "lossy" repository? It seems like if the change is transparent to everyone who is using it, then the only thing that we care about is that the chosen backend will preserve all the information to make it truly transparent to everyone involved. Jared On 25 Jan 2009, at 10:37, Martin v. 
Löwis wrote: >> There's a possible third way. I've heard (though haven't >> investigated) >> that some people are working on supporting the svn wire protocol in >> the >> bzr server. This would mean that anybody who's still comfortable >> with >> svn and feels no need to change their current habits can continue to >> work the way they always have. Those that want the extra benefits >> of a >> DVCS, or who do not have commit access to the code.python.org >> branches >> would have viable alternatives. > > Of course, those without commit access *already* have viable > alternatives, IIUC, by means of the automatic ongoing conversion of > the svn repository to bzr and hg (and, IIUC, git - or perhaps you > can use git-svn without the need for server-side conversion). > > So a conversion to a DVCS would only benefit those committers who > see a benefit in using a DVCS (*) (and would put a burden on those > committers who see a DVCS as a burden). It would also put a burden > on contributors who are uncomfortable with using a DVCS. > > Regards, > Martin > > (*) I'm probably missing something, but ISTM that committers can > already > use the DVCS - they only need to create a patch just before > committing. > This, of course, is somewhat more complicated than directly pushing > the > changes to the server, but it still gives them most of what is often > reported as the advantage of a DVCS (local commits, ability to have > many > branches simultaneously, ability to share work-in-progress, etc). In > essence, committers wanting to use a DVCS can do so today, by acting > as if they were non-committers, and only using svn for actual changes > to the master repository.
> _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/jared.grubb%40gmail.com From stephen at xemacs.org Mon Jan 26 05:38:28 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 26 Jan 2009 13:38:28 +0900 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CB185.3010601@v.loewis.de> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> Message-ID: <87eiyqfzqz.fsf@xemacs.org> "Martin v. Löwis" writes: > So a conversion to a DVCS would only benefit those committers who > see a benefit in using a DVCS (*) (and would put a burden on those > committers who see a DVCS as a burden). That's false. Especially with bzr, they would see improved log formats by default, and with git they'd have access to some very advanced and powerful history querying, even if the update-hack-commit side of the workflow doesn't change. Not having used these tools in most cases, the fact that they don't currently *perceive* a benefit from switching doesn't mean there won't be one. There's also the very high likelihood that if Python does switch to a DVCS, many such committers will start to use the distributed VCS features actively. That doesn't mean that it will fully offset the costs of the switch for them; it does mean that the net cost of the switch for them is probably a lot lower than it would appear from your description. > It would also put a burden on contributors who are uncomfortable > with using a DVCS. "Discomfort" is not something I take lightly, but I have to wonder how long that discomfort would persist.
All of the DVCSes support exactly the same workflow as CVS/Subversion, with some change in naming, and possibly a need to have a trivial commit+push script to give a single command with the semantics of "svn commit". This is crystal clear if you look at the draft PEP 374, which deliberately emulates the Subversion workflow in a variety of scenarios. It's true that the svn workflow looks more efficient and/or natural in many places, but please remember that the current workflow has evolved to deal with the specific features of Subversion compared to the DVCSes, and has had years to evolve best practices, where the PEP authors have only had a few days. I suspect that (if a DVCS is adopted) both the workflow and the best practices will evolve naturally and without much additional cost of learning to users, and arrive at a similarly efficient, but more powerful, workflow, well within a year after adoption. So there might be some rough places around the edges, especially in coming up with a way to get the functionality of "svnmerge.py block", but as far as I can see, unless the project decides that it wants to adopt a decentralized workflow, the full cost of DVCS is simply learning a new VCS; the "D" has nothing to do with it. It won't be much harder than switching from CVS to Subversion. Again, I don't take the cost of learning a new tool lightly, but please let's call that cost by its name, and not bring "distributed" into it. > (*) I'm probably missing something, but ISTM that committers can already > use the DVCS - they only need to create a patch just before committing. That's true. It's also true that to have the benefit of distributed version control with Subversion, "they only need to run a Subversion server locally". In both cases, it amounts to a fair amount of extra effort, and misses out on all the benefits of improved merging, automatic propagation of history from a merged local branch to the master repo, etc., etc. 
> In essence, committers wanting to use a DVCS can do so today, by > acting as if they were non-committers, and only using svn for > actual changes to the master repository. That's false. Again, those people who want to use a DVCS as a DVCS will benefit from having the master repository(s) coordinate history for them. This doesn't work very well across VCS systems, essentially forcing all committers who want to use the distributed features to coordinate with each other directly, and only with those who use the same DVCS. The mental models used by git users, hg users, and bzr users differ significantly, though they probably differ more from the model that's appropriate for Subversion. Nevertheless, there is a lot of potential benefit to be gained from having a common DVCS for all developers. Whether the benefits available *today* and in the near future outweigh the costs of an early transition, I make no claims. But those benefits *are* fairly large, and much of the cost is subjective (based on the bad reputation of the DVCSes for UI awkwardness, especially that of git) and *may* (at this stage, I don't say "will") dissipate quickly with a rather small amount of experience. From stephen at xemacs.org Mon Jan 26 05:46:35 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Mon, 26 Jan 2009 13:46:35 +0900 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <497CD39E.6010600@voidspace.org.uk> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <497CD39E.6010600@voidspace.org.uk> Message-ID: <87d4eafzdg.fsf@xemacs.org> Michael Foord writes: > > If I can't choose a clear winner I am going to look into what it take > > to run directly on top of svn to avoid the extra step for committers. > Well, that sounds like an ideal situation to end up in. Is there a > downside other than the work it creates for you? 
I'm with Brett, the extra work for him is more than downside enough. But over and above that, the various DVCSes have different strengths and weaknesses, and their proponents have different mental models of how DVCS is "supposed" to work. I believe this is reflected to a great extent in their capabilities, creating great friction to trying to work with a different VCS from the "native" one of the master repositories. You just end up using a buttload of extra CPU cycles to achieve a Subversion-based workflow. The big advantage, IMO, to going to a DVCS for the master repo is that you can start with the same workflow currently used, and gradually adapt it to the capabilities of the more powerful tools. If we don't do that, the workflow will never really change, and the project-wide advantages of the tools will be lost. From curt at hagenlocher.org Mon Jan 26 08:15:01 2009 From: curt at hagenlocher.org (Curt Hagenlocher) Date: Sun, 25 Jan 2009 23:15:01 -0800 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: On Sun, Jan 25, 2009 at 10:45 AM, Luke Kenneth Casson Leighton wrote: > > from http://en.wikipedia.org/wiki/Dynamic-link_library: > > "Third, dynamic linking is inherently the wrong model for paged memory > managed systems. Such systems work best with the idea that code is > invariant from the time of assembly/compilation on. > ........... Data references do not need to be so vectored because > DLLs do not share data." That section ("The case against DLLs") should probably be ignored. It appears to have been written by a single individual with a particular axe to grind. Much of what it contains is opinion rather than fact, and some of its facts are downright inaccurate -- at least by my recollection. 
I haven't thought much about any of this in well over ten years, but here's what I remember: The reason for the vectored importing of function addresses is strictly performance -- it means that you only need to fixup one location with the address of the target function instead of each location in the code. This also has obvious advantages for paging. But this may very well be a feature of the linker rather than the operating system; I imagine the loader will happily fixup the same address multiple times if you ask it to. There are differences between importing code and importing data: the code produced by the compiler for calling a function does not depend on whether or not that function is defined in the current module or in a different one -- under x86, they're both just CALL instructions. But when accessing data, addresses in the current module can be used directly while those in a different module must be indirected -- which means that different opcodes must be generated. I don't know if it's up-to-date, but the page at http://sourceware.org/binutils/docs/ld/WIN32.html suggests some ways of dealing with this for cygwin/mingw. Look for the section entitled "automatic data imports". If you have specific questions related to DLL or loader behavior under Windows, feel free to ping me off-list. I can't guarantee that I can provide an answer, but I may be able to point you in a particular direction. -- Curt Hagenlocher curt at hagenlocher.org From techtonik at gmail.com Mon Jan 26 09:44:01 2009 From: techtonik at gmail.com (anatoly techtonik) Date: Mon, 26 Jan 2009 10:44:01 +0200 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: References: Message-ID: Hi, Guido You can't launch a process and communicate with it without blocking at some point. The scenario where you can periodically check output and error pipes without blocking and send input is not supported. Scenario One: control console program. 
This implies reading from input and writing replies asynchronously - Popen.stdin, stdout, stderr - stdin.write(), stdout.read() or stderr.read() block script execution if any of the other OS pipe buffers fills up and blocks the child process. Advice is to use communicate(), but it waits for the process to exit and hence doesn't allow sending input based on the reply to a previous send. Scenario Two: waiting for a process that terminates with extensive output blocks - Popen.wait() blocks if the child process generates enough output to a stdout or stderr pipe; advice is to use Popen.communicate() http://docs.python.org/library/subprocess.html#subprocess.Popen.wait - Popen.communicate() - Waits for the process to terminate, reads data from stdout and stderr until end-of-file. Caches data in memory - should not be used if the data size is large or unlimited. Solution - read(maxsize) and write(maxsize) functions that will return immediately. -- anatoly t. On Sat, Jan 24, 2009 at 5:58 PM, Guido van Rossum wrote: > Anatoly, > > I'm confused. The subprocess already allows reading/writing its > stdin/stdout/stderr, and AFAIK it's a platform-neutral API. I'm sure > there's something missing, but your post doesn't make it clear what > exactly, and the recipe you reference is too large to digest easily. > Can you explain what it is that the current subprocess doesn't have > beyond saying "async communication" (which could mean many things to > many people)? > > --Guido > > On Sat, Jan 24, 2009 at 5:07 AM, anatoly techtonik wrote: >> Greetings, >> >> This turned out to be a rather long post that in short can be summarized as: >> "please-please-please, include asynchronous process communication in >> subprocess module and do not allow "available only on ..." >> functionality", because it hurts the brain".
>> >> Code to speak for itself: http://code.activestate.com/recipes/440554/ >> >> >> The subprocess module was a great step forward to unify various spawn >> and system and exec and etc. calls in one module, and more importantly >> - in one uniform API. But this API is partly crossplatform, and I >> believe I've seen recent commits to docs with more unix-only >> differences in this module. >> >> The main point of this module is to "allows you to spawn new >> processes, connect to their input/output/error pipes, and obtain their >> return codes". PEP 324 goal is also to make "make Python an even >> better replacement language for over-complicated shell scripts". >> >> Citing pre-subrocess PEP 324, "Currently, Python has a large number of >> different functions for process creation. This makes it hard for >> developers to choose." Now there is one class with many methods and >> many platform-specific comments and notices. To make thing worse >> people on Unix use subprocess with fcntl and people on windows tend >> not to use it at all, because it looks complicated and doesn't solve >> the problem with asynchronous communication. >> >> That I suggest is to add either support for async crossplatfrom >> read/write/probing of executed process or a comment at the top of >> module documentation that will warn that subprocess works in blocking >> mode. With async mode you can emulate blocking, the opposite is not >> possible. This will save python users a lot of time. >> >> Thanks for reading my rant. >> >> >> BTW, the proposed change is top 10 python recipe on ActiveState >> http://code.activestate.com/recipes/langs/python/ >> >> -- >> --anatoly t. 
>> _______________________________________________ >> Python-Dev mailing list >> Python-Dev at python.org >> http://mail.python.org/mailman/listinfo/python-dev >> Unsubscribe: http://mail.python.org/mailman/options/python-dev/guido%40python.org >> > > > > -- > --Guido van Rossum (home page: http://www.python.org/~guido/) > -- --anatoly t. From cournape at gmail.com Mon Jan 26 09:44:15 2009 From: cournape at gmail.com (David Cournapeau) Date: Mon, 26 Jan 2009 17:44:15 +0900 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <87eiyqfzqz.fsf@xemacs.org> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> Message-ID: <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> On Mon, Jan 26, 2009 at 1:38 PM, Stephen J. Turnbull wrote: > > Again, I don't take the cost of learning a new tool lightly, but > please let's call that cost by its name, and not bring "distributed" > into it. I can only strongly agree on this point - most people asserting that DVCS are much more complicated than CVS/SVN/etc..., forget their long experience with the latter. I had little experience with svn before using bzr, and I find bzr much simpler than svn in almost every way, both for personal projects and more significant open source projects. > > That's false. Again, those people who want to use a DVCS as a DVCS > will benefit from having the master repository(s) coordinate history > for them. This doesn't work very well across VCS systems, essentially > forcing all committers who want to use the distributed features to > coordinate with each other directly, and only with those who use the > same DVCS. The mental models used by git users, hg users, and bzr > users differ significantly, though they probably differ more from the > model that's appropriate for Subversion.
Nevertheless, there is a lot > of potential benefit to be gained from having a common DVCS for all > developers. Agreed. A point shared by all svn-to-bzr/git/whatever migrations in my experience is the pain of merging. In particular, although git-svn on top of svn is very useful, and brings some power of git without forcing git pain on other users, merging between branches is not really doable without going back to svn. And that's certainly a big plus of DVCS compared to svn: since svn is inherently incapable of tracking merges (at least until recently, I have no experience with 1.5), you can't use svn as a backend and benefit from all the DVCS advantages at the same time. cheers, David From nick at craig-wood.com Mon Jan 26 13:15:30 2009 From: nick at craig-wood.com (Nick Craig-Wood) Date: Mon, 26 Jan 2009 12:15:30 +0000 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: References: Message-ID: <20090126121529.GA29551@craig-wood.com> On Sat, Jan 24, 2009 at 07:58:40AM -0800, Guido van Rossum wrote: > I'm confused. The subprocess already allows reading/writing its > stdin/stdout/stderr, and AFAIK it's a platform-neutral API. I'm sure > there's something missing, but your post doesn't make it clear what > exactly, and the recipe you reference is too large to digest easily. > Can you explain what it is that the current subprocess doesn't have > beyond saying "async communication" (which could mean many things to > many people)? The main problem with subprocess is that it doesn't work if you want to have a conversation / interact with your child process.
subprocess works very well indeed for this case :-

run child
send stuff to stdin
child reads stdin and writes stdout
child exits
read stuff from stdout

But for the conversational case (eg using it to do a remote login) it doesn't work at all :-

run child
send stuff to stdin
child reads stdin and writes stdout
read stuff from stdout
send stuff to stdin
child reads stdin and writes stdout
read stuff from stdout
send stuff to stdin
child reads stdin and writes stdout
read stuff from stdout
child exits

In subprocess "read stuff from stdout" means read stdout until the other end closes it, not read what is available and return it, so it blocks on reading the first reply and never returns. Hence Anatoly's request for "async communication" and the existence of that recipe. http://code.activestate.com/recipes/440554/ I've spent quite a lot of time explaining this to people on comp.lang.python. I usually suggest either the recipe as suggested by Anatoly or if on unix the pexpect module. There are other solutions I know of, eg in twisted and wxPython. I heard rumours of a pexpect port to Windows but I don't know how far that has progressed. A cross platform async subprocess would indeed be a boon! -- Nick Craig-Wood -- http://www.craig-wood.com/nick From ncoghlan at gmail.com Mon Jan 26 13:31:56 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Mon, 26 Jan 2009 22:31:56 +1000 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: <497D1B0D.5090206@scottdial.com> References: <497D1B0D.5090206@scottdial.com> Message-ID: <497DAD3C.2070904@gmail.com> Scott Dial wrote: > I think > it's been made clear that nobody is opposed to having an all-free build > of Python for Win32, however it is not the focus of anyone's interest > here because it's "free enough" for our purposes. I believe Martin wrote > you a reply that explained that quite well.
One thing to keep in mind is the fact that CPython uses a BSD-style licensing model and hence will tend to attract developers that have no problem with the idea of someone making a proprietary fork of our code. One consequence of this self-selection process is that the Python core developers aren't likely to see anything inherently wrong with the idea of closed source proprietary software (it may be an inefficient and wasteful method of development when it comes to commodity software, but it isn't actually morally *wrong* in any way). Visual Studio is the best available tool for native Windows C/C++ development and these days it even comes with the free-as-in-beer Express edition. The fact that VS is itself a non-free closed source application may bother developers out there with a stronger philosophical preference for free software, but it doesn't really bother me or, I believe, most of the core committers in the slightest. I have no problem with anyone that dislikes non-free software and chooses to opt out of the Windows world altogether (I myself use my Windows machine almost solely to play games, as I prefer Linux for development and general computing tasks). But if a developer decides (for whatever reason) to opt into that world and support the platform, it doesn't make any sense to me to complain that the recommended tools for developing in a non-free environment are themselves non-free (at least in the software libre sense). Going "Oh, I may be targeting a non-free platform, but at least I used free software tools to do it" strikes me as sheer sophistry and a fairly pointless waste of time. If a developer can't even find someone to either build Windows binaries for them or else to donate the cash for a single Windows license to run Visual Studio Express in a virtual machine, then it seems to me that any supposed demand for Windows support must be pretty damn tenuous. Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From hrvoje.niksic at avl.com Mon Jan 26 14:10:04 2009 From: hrvoje.niksic at avl.com (Hrvoje Niksic) Date: Mon, 26 Jan 2009 14:10:04 +0100 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> Message-ID: <497DB62C.4050108@avl.com> Nick Craig-Wood wrote: > But for the conversational case (eg using it to do a remote login) it > doesn't work at all :- > > run child > send stuff to stdin > child reads stdin and writes stdout Can this really be made safe without an explicit flow control protocol, such as a pseudo-TTY? stdio reads data from pipes such as stdin in 4K or so chunks. I can easily imagine the child blocking while it waits for its stdin buffer to fill, while the parent in turn blocks waiting for the child's output to arrive. Shell pipelines (and the subprocess module as it stands) don't have this problem because they're unidirectional: you read input from one process and write output to another, but you typically don't feed data back to the process you've read it from. From daniel at stutzbachenterprises.com Mon Jan 26 16:19:01 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Mon, 26 Jan 2009 09:19:01 -0600 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: <497DB62C.4050108@avl.com> References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com> Message-ID: I'm confused. What's wrong with the following?

p = Popen('do_something', stdin=PIPE, stdout=PIPE)
p.stdin.write('la la la\n')
p.stdin.flush()
line = p.stdout.readline()
p.stdin.write(process(line))
p.stdin.flush()

If you want to see if data is available on p.stdout, use the select module (unless you're on Windows).
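A minimal sketch of that select-based pattern (POSIX only; the echoing child below is a made-up stand-in for any interactive program):

```python
import os
import select
import subprocess
import sys

# A toy conversational child: it answers each input line and flushes.
# It stands in for any interactive program (a login session, bc, etc.).
CHILD = (
    "import sys\n"
    "while True:\n"
    "    line = sys.stdin.readline()\n"
    "    if not line:\n"
    "        break\n"
    "    sys.stdout.write('got ' + line)\n"
    "    sys.stdout.flush()\n"
)

p = subprocess.Popen([sys.executable, "-c", CHILD],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def converse(line, timeout=5.0):
    """Send one line, then read whatever reply is available right now."""
    p.stdin.write(line)
    p.stdin.flush()
    ready, _, _ = select.select([p.stdout], [], [], timeout)
    # os.read returns what is currently in the pipe instead of
    # waiting for EOF, so the loop never blocks indefinitely.
    return os.read(p.stdout.fileno(), 4096) if ready else b""

first = converse(b"hello\n")   # b'got hello\n'
second = converse(b"again\n")  # b'got again\n'
p.stdin.close()
p.wait()
```

The select() call is what turns "read until EOF" into "read whatever is available now", which is essentially what the ActiveState recipe mentioned earlier does with more machinery.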
The child process has to flush its output buffer for this to work, but that isn't Python's problem. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at eventuallyanyway.com Mon Jan 26 17:08:46 2009 From: paul at eventuallyanyway.com (Paul Hummer) Date: Mon, 26 Jan 2009 09:08:46 -0700 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> Message-ID: <20090126090846.6ad1895c@megatron> On Mon, 26 Jan 2009 17:44:15 +0900, David Cournapeau wrote: > > > > Again, I don't take the cost of learning a new tool lightly, but > > please let's call that cost by its name, and not bring "distributed" > > into it. > > I can only strongly agree on this point - most people asserting that > DVCS are much more complicated than CVS/SVN/etc..., forget their long > experience with the later. I had little experience with svn before > using bzr, and I find bzr much simpler than svn in almost every way, > both for personal projects and more significant open source projects. > > > > > That's false. Again, those people who want to use a DVCS as a DVCS > > will benefit from having the master repository(s) coordinate history > > for them. This doesn't work very well across VCS systems, essentially > > forcing all committers who want to use the distributed features to > > coordinate with each other directly, and only with those who use the > > same DVCS. The mental models used by git users, hg users, and bzr > > users differ significantly, though they probably differ more from the > > model that's appropriate for Subversion. 
Nevertheless, there is a lot > > of potential benefit to be gained from having a common DVCS for all > > developers. > > Agreed. A point shared by all svn-to-bzr/git/whatever in my experience > is the pain of merging. In particular, although git-svn on top of svn > is very useful, and brings some power of git without forcing git pain > on other users, merging between branches is not really doable without > going back to svn. And that's certainly a big plus of DVCS compared to > svn: since svn is inherently incapable of tracking merges (at least > until recently, I have no experience with 1.5), you can't use svn as a > backend and benefit from all the DVCS advantages at the same time. > At a previous employer, we had this same discussion about switching to a DVCS, and the time and cost required to learn the new tool. We switched to bzr, and while there were days where someone got lost in the DVCS, the overall advantages with merging allowed that cost to be offset by the fact that merging was so cheap (and we merged a lot). That's a big consideration to be made when you're considering a DVCS. Merges in SVN and CVS can be painful, where merging well is a core feature of any DVCS. -- Paul Hummer http://theironlion.net 1024/862FF08F C921 E962 58F8 5547 6723 0E8C 1C4D 8AC5 862F F08F From eckhardt at satorlaser.com Mon Jan 26 17:31:37 2009 From: eckhardt at satorlaser.com (Ulrich Eckhardt) Date: Mon, 26 Jan 2009 17:31:37 +0100 Subject: [Python-Dev] microsoft dlls apparently don't support data. implications: PyAPI functions required to access data across modules. In-Reply-To: References: Message-ID: <200901261731.37142.eckhardt@satorlaser.com> On Sunday 25 January 2009, Luke Kenneth Casson Leighton wrote: > matthieu, thank you for responding. from > http://en.wikipedia.org/wiki/Dynamic-link_library: > > "Third, dynamic linking is inherently the wrong model for paged memory > managed systems.
Such systems work best with the idea that code is > invariant from the time of assembly/compilation on. > ........... Data references do not need to be so vectored because > DLLs do not share data." > ^^^^^^^^^^^^^^^^^^^^ > > does anyone happen to know what this means? I can only guess: The difference between code and data is that code can be loaded into a process by simply mapping it into the virtual memory. For data that is constant, the same applies. For non-const data, you absolutely must not do that though, because it would make processes interfere with each other, and that is what the above text probably means. So, the important difference is rather that read-only stuff can be memory-mapped while read-write stuff can't. Since code is read-only (barring self-modifying code and trampolines etc), it is automatically always sharable. > curt, thank you for responding. i'd seen this: i understood it - > and... yet... mingw happily segfaults when asked to access _any_ data > in _any_ object file of the python2N dll. Dump the address of said data and its size from inside that DLL and from outside just to see if they differ, both from the same process. I'd also dump the size, in case different compiler settings messed up padding or something like that. > from looking so far. e.g. i expected MSVCRT.DLL errno to be an > int - it's not: it's a function). 'errno' can't be an int, because it needs to be thread-local. Also, note the important difference between "errno is an int" and "errno yields an lvalue of type int". The latter is how the standard defines it. > *sigh*. if this turns out to be yet another gcc / mingw bug i'm going > to be slightly annoyed. only slightly, because this _is_ free > software, after all :) Can you reproduce this with a separate example? 
Uli -- Sator Laser GmbH Geschäftsführer: Thorsten F?cking, Amtsgericht Hamburg HR B62 932 ************************************************************************************** Sator Laser GmbH, Fangdieckstraße 75a, 22547 Hamburg, Deutschland Geschäftsführer: Thorsten F?cking, Amtsgericht Hamburg HR B62 932 ************************************************************************************** Visit our website at ************************************************************************************** Diese E-Mail einschließlich sämtlicher Anhänge ist nur für den Adressaten bestimmt und kann vertrauliche Informationen enthalten. Bitte benachrichtigen Sie den Absender umgehend, falls Sie nicht der beabsichtigte Empfänger sein sollten. Die E-Mail ist in diesem Fall zu löschen und darf weder gelesen, weitergeleitet, veröffentlicht oder anderweitig benutzt werden. E-Mails können durch Dritte gelesen werden und Viren sowie nichtautorisierte Änderungen enthalten. Sator Laser GmbH ist für diese Folgen nicht verantwortlich. ************************************************************************************** From eckhardt at satorlaser.com Mon Jan 26 19:04:17 2009 From: eckhardt at satorlaser.com (Ulrich Eckhardt) Date: Mon, 26 Jan 2009 19:04:17 +0100 Subject: [Python-Dev] FormatError() in callproc.c under win32 Message-ID: <200901261904.17978.eckhardt@satorlaser.com> Hi! In callproc.c from trunk is a function called SetException(), which calls FormatError() only to discard the contents. Can anyone enlighten me to the reasons thereof? Is it just to check if the errorcode is registered in the stringtables? The reason I ask is the CE port. The FormatMessage API exists there (or, rather, the FormatMessageW API), but the tables containing the error messages are optional for the OS and for space reasons many vendors chose not to include them. That means that the function there regularly fails to retrieve the requested string.
My first approach was to fall back to simply providing a string with a numeric representation of the errorcode, but that would change the meaning of the above function, because then it could never fail. My second approach was to enhance PyErr_SetFromWindowsErr() to handle the additional error codes that are checked in SetException(). However, those require more context than just the error code, they use the EXCEPTION_RECORD passed to SetException() for that. My third approach would be to filter out the special error codes first and delegate all others to PyErr_SetFromWindowsErr(). The latter already handles the lack of a string for the code by formatting it numerically. This would also improve consistency, since the two functions use different ways to format unrecognised errors numerically. This approach would change where and how a completely unrecognised error code is formatted, but would otherwise be pretty equivalent. Suggestions? Uli
From guido at python.org Mon Jan 26 19:31:55 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 26 Jan 2009 10:31:55 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <20090126090846.6ad1895c@megatron> References: <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron> Message-ID: On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer wrote: > At a previous employer, we had this same discussion about switching to a DVCS, > and the time and cost required to learn the new tool. We switched to bzr, and > while there were days where someone got lost in the DVCS, the overall > advantages with merging allowed that cost to be offset by the fact that merging > was so cheap (and we merged a lot). > > That's a big consideration to be made when you're considering a DVCS. Merges > in SVN and CVS can be painful, where merging well is a core feature of any > DVCS. I hear you. I for one have been frustrated (now that you mention it) by the inability to track changes across merges. We do lots of merges from the trunk into the py3k branch, and the merge revisions in the branch quotes the full text of the changes merged from the trunk, but not the list of affected files for each revision merged. Since merges typically combine a lot of revisions, it is very painful to find out more about a particular change to a file when that change came from such a merge -- often even after reading through the entire list of descriptions you still have no idea which merged revision is responsible for a particular change. Assuming this problem does not exist in DVCS, that would be a huge bonus from switching to a DVCS!
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From g.brandl at gmx.net Mon Jan 26 19:32:02 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Mon, 26 Jan 2009 19:32:02 +0100 Subject: [Python-Dev] IDLE docs at python.org/idle Message-ID: (re #5066) Is that documentation maintained in some way? Shouldn't it be merged into the main docs? Georg From guido at python.org Mon Jan 26 19:36:42 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 26 Jan 2009 10:36:42 -0800 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: References: Message-ID: On Mon, Jan 26, 2009 at 12:44 AM, anatoly techtonik wrote: > You can't launch a process and communicate with it without blocking at > some point. The scenario where you can periodically check output and > error pipes without blocking and send input is not supported. > > Scenario One: control console program. This implies reading from input > and writing replies asynchronously > - Popen.stdin, stdout, stderr - stdin.write(), stdout.read() or > stderr.read() block script execution if any of the other OS pipe > buffers fill up and block the child process. Advice is to use > communicate(), but it waits for process to exit and hence doesn't > allow to send the input based on reply from previous send. > > Scenario Two: waiting for process that terminates with extensive output blocks > - Popen.wait() blocks if the child process generates enough output to > a stdout or stderr pipe, advice is to use Popen.communicate() > http://docs.python.org/library/subprocess.html#subprocess.Popen.wait > - Popen.communicate() - Waits for process to terminate, reads data > from stdout and stderr until end-of-file. Caches data in memory - > should not be used if the data size is large or unlimited. > > Solution - read(maxsize) and write(maxsize) functions that will return > immediately. Hi Anatoly -- thanks for clarifying your issues.
I hope other developers more familiar with subprocess.py will chime in and either help you figure out a way to do this without changes to the subprocess module, or, if that would be too painful, help you develop additional APIs. You could start by proposing a set of changes to subprocess.py and submit them as a patch to bugs.python.org; that is easier to deal with than pointing to a recipe on the Activestate site. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From tjreedy at udel.edu Mon Jan 26 20:56:06 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Mon, 26 Jan 2009 14:56:06 -0500 Subject: [Python-Dev] IDLE docs at python.org/idle In-Reply-To: References: Message-ID: Georg Brandl wrote: > re http://bugs.python.org/issue5066 > Is that documentation maintained in some way? Not currently, pretty obviously. Screenshots are 1.5.2. Windows was 95/98. > Shouldn't it be merged into the main docs? If and only if updated. As noted on the issue, I am willing to help. Terry Jan Reedy From theller at ctypes.org Mon Jan 26 21:12:18 2009 From: theller at ctypes.org (Thomas Heller) Date: Mon, 26 Jan 2009 21:12:18 +0100 Subject: [Python-Dev] FormatError() in callproc.c under win32 In-Reply-To: <200901261904.17978.eckhardt@satorlaser.com> References: <200901261904.17978.eckhardt@satorlaser.com> Message-ID: Ulrich Eckhardt schrieb: > Hi! > > In callproc.c from trunk is a function called SetException(), which calls > FormatError() only to discard the contents. Can anyone enlighten me to the > reasons thereof? Is it just to check if the errorcode is registered in the > stringtables? I think that your guess is somewhat similar to what I thought when I wrote the code. > > The reason I ask is the CE port. The FormatMessage API exists there (or, > rather, the FormatMessageW API), but the tables containing the error messages > are optional for the OS and for space reasons many vendors chose not to > include them. 
That means that the function there regularly fails to retrieve > the requested string. > > My first approach was to fall back to simply providing a string with a numeric > representation of the errorcode, but that would change the meaning of the above > function, because then it could never fail. > > My second approach was to enhance PyErr_SetFromWindowsErr() to handle the > additional error codes that are checked in SetException(). However, those > require more context than just the error code, they use the EXCEPTION_RECORD > passed to SetException() for that. > > My third approach would be to filter out the special error codes first and > delegate all others to PyErr_SetFromWindowsErr(). The latter already handles > the lack of a string for the code by formatting it numerically. This would > also improve consistency, since the two functions use different ways to > format unrecognised errors numerically. This approach would change where and > how a completely unrecognised error code is formatted, but would otherwise be > pretty equivalent. The third approach is fine with me. Sidenote: The only error codes that I remember having seen in practice are 'access violation reading...' and 'access violation writing...', although on WinCE 'datatype misalignment' may also be possible. -- Thanks, Thomas From solipsis at pitrou.net Mon Jan 26 22:54:14 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 26 Jan 2009 21:54:14 +0000 (UTC) Subject: [Python-Dev] enabling a configure option Message-ID: Hello python-dev, r68924 in py3k introduced a new configure option named --with-computed-gotos. It would be nice if one of the buildbots could exercise this option, so that the code doesn't rot (the buildbot has to use gcc). Whom should I ask for this? Speaking of which, there are only five buildbots remaining in the "stable" bunch... What has happened to the others? Regards Antoine.
From rasky at develer.com Mon Jan 26 22:57:36 2009 From: rasky at develer.com (Giovanni Bajo) Date: Mon, 26 Jan 2009 21:57:36 +0000 (UTC) Subject: [Python-Dev] PEP 374 (DVCS) now in reST References: <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron> Message-ID: On Mon, 26 Jan 2009 10:31:55 -0800, Guido van Rossum wrote: > On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer > wrote: >> At a previous employer, we had this same discussion about switching to >> a DVCS, and the time and cost required to learn the new tool. We >> switched to bzr, and while there were days where someone got lost in >> the DVCS, the overall advantages with merging allowed that cost to be >> offset by the fact that merging was so cheap (and we merged a lot). >> >> That's a big consideration to be made when you're considering a DVCS. >> Merges in SVN and CVS can be painful, where merging well is a core >> feature of any DVCS. > > I hear you. I for one have been frustrated (now that you mention it) by > the inability to track changes across merges. We do lots of merges from > the trunk into the py3k branch, and the merge revisions in the branch > quotes the full text of the changes merged from the trunk, but not the > list of affected files for each revision merged. Since merges typically > combine a lot of revisions, it is very painful to find out more about a > particular change to a file when that change came from such a merge -- > often even after reading through the entire list of descriptions you > still have no idea which merged revision is responsible for a particular > change. Assuming this problem does not exist in DVCS, that would be a > huge bonus from switching to a DVCS! 
Well, not only does it not exist by design in any DVCS, but I have better news: it does not exist anymore in Subversion 1.5. You just need to upgrade your SVN server to 1.5, migrate your merge history from the format of svnmerge to the new builtin format (using the official script), and you're done: say hello to "-g/--use-merge-history", to be used with svn log and svn blame. This is a good writeup of the new features: http://chestofbooks.com/computers/revision-control/subversion-svn/Merge-Sensitive-Logs-And-Annotations-Branchmerge-Advanced-Lo.html -- Giovanni Bajo Develer S.r.l. http://www.develer.com From solipsis at pitrou.net Mon Jan 26 22:59:29 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Mon, 26 Jan 2009 21:59:29 +0000 (UTC) Subject: [Python-Dev] enabling a configure option on a buildbot References: Message-ID: (Apologies for the incomplete title! I sometimes eat my words...) Antoine Pitrou pitrou.net> writes: > > Hello python-dev, > [snip] From guido at python.org Mon Jan 26 23:00:19 2009 From: guido at python.org (Guido van Rossum) Date: Mon, 26 Jan 2009 14:00:19 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron> Message-ID: On Mon, Jan 26, 2009 at 1:57 PM, Giovanni Bajo wrote: > On Mon, 26 Jan 2009 10:31:55 -0800, Guido van Rossum wrote: > >> On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer >> wrote: >>> At a previous employer, we had this same discussion about switching to >>> a DVCS, and the time and cost required to learn the new tool. We >>> switched to bzr, and while there were days where someone got lost in >>> the DVCS, the overall advantages with merging allowed that cost to be >>> offset by the fact that merging was so cheap (and we merged a lot).
>>> >>> That's a big consideration to be made when you're considering a DVCS. >>> Merges in SVN and CVS can be painful, where merging well is a core >>> feature of any DVCS. >> >> I hear you. I for one have been frustrated (now that you mention it) by >> the inability to track changes across merges. We do lots of merges from >> the trunk into the py3k branch, and the merge revisions in the branch >> quotes the full text of the changes merged from the trunk, but not the >> list of affected files for each revision merged. Since merges typically >> combine a lot of revisions, it is very painful to find out more about a >> particular change to a file when that change came from such a merge -- >> often even after reading through the entire list of descriptions you >> still have no idea which merged revision is responsible for a particular >> change. Assuming this problem does not exist in DVCS, that would be a >> huge bonus from switching to a DVCS! > > Well, not only does it not exist by design in any DVCS, but I have > better news: it does not exist anymore in Subversion 1.5. You just need > to upgrade your SVN server to 1.5, migrate your merge history from the > format of svnmerge to the new builtin format (using the official script), > and you're done: say hello to "-g/--use-merge-history", to be used with > svn log and svn blame. > > This is a good writeup of the new features: > http://chestofbooks.com/computers/revision-control/subversion-svn/Merge-Sensitive-Logs-And-Annotations-Branchmerge-Advanced-Lo.html Unfortunately I've heard we shouldn't upgrade to svn 1.5 until more Linux distributions ship with it by default.
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Mon Jan 26 23:12:22 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 26 Jan 2009 23:12:22 +0100 Subject: [Python-Dev] FormatError() in callproc.c under win32 In-Reply-To: <200901261904.17978.eckhardt@satorlaser.com> References: <200901261904.17978.eckhardt@satorlaser.com> Message-ID: <497E3546.8000408@v.loewis.de> > In callproc.c from trunk is a function called SetException(), which calls > FormatError() only to discard the contents. Can anyone enlighten me to the > reasons thereof? Interestingly enough, the code used to say PyErr_SetString(PyExc_WindowsError, lpMsgBuf); Then it was changed to its current form, with a log message of Changes for windows CE, contributed by Luke Dunstan. Thanks a lot! See http://ctypes.cvs.sourceforge.net/viewvc/ctypes/ctypes/source/callproc.c?hideattic=0&r1=1.127.2.15&r2=1.127.2.16 I suggest you ask Thomas Heller and Luke Dunstan (if available) what the rationale for this partial change was. Regards, Martin From martin at v.loewis.de Mon Jan 26 23:15:22 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Mon, 26 Jan 2009 23:15:22 +0100 Subject: [Python-Dev] IDLE docs at python.org/idle In-Reply-To: References: Message-ID: <497E35FA.9050706@v.loewis.de> > Is that documentation maintained in some way? I don't think so. It isn't in the pydotorg repository, and the files were last touched in 2005. 
Regards, Martin From jyasskin at gmail.com Mon Jan 26 23:18:15 2009 From: jyasskin at gmail.com (Jeffrey Yasskin) Date: Mon, 26 Jan 2009 14:18:15 -0800 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron> Message-ID: <5d44f72f0901261418y8de7bc0o3165e4bf28e01c16@mail.gmail.com> On Mon, Jan 26, 2009 at 2:00 PM, Guido van Rossum wrote: > On Mon, Jan 26, 2009 at 1:57 PM, Giovanni Bajo wrote: >> On Mon, 26 Jan 2009 10:31:55 -0800, Guido van Rossum wrote: >> >>> On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer >>> wrote: >>>> At a previous employer, we had this same discussion about switching to >>>> a DVCS, and the time and cost required to learn the new tool. We >>>> switched to bzr, and while there were days where someone got lost in >>>> the DVCS, the overall advantages with merging allowed that cost to be >>>> offset by the fact that merging was so cheap (and we merged a lot). >>>> >>>> That's a big consideration to be made when you're considering a DVCS. >>>> Merges in SVN and CVS can be painful, where merging well is a core >>>> feature of any DVCS. >>> >>> I hear you. I for one have been frustrated (now that you mention it) by >>> the inability to track changes across merges. We do lots of merges from >>> the trunk into the py3k branch, and the merge revisions in the branch >>> quotes the full text of the changes merged from the trunk, but not the >>> list of affected files for each revision merged. Since merges typically >>> combine a lot of revisions, it is very painful to find out more about a >>> particular change to a file when that change came from such a merge -- >>> often even after reading through the entire list of descriptions you >>> still have no idea which merged revision is responsible for a particular >>> change. 
Assuming this problem does not exist in DVCS, that would be a
>>> huge bonus from switching to a DVCS!
>>
>> Well, not only it does not exist by design in any DVCS, but I have a
>> better news: it does not exist anymore in Subversion 1.5. You just need
>> to upgrade your SVN server to 1.5, migrate your merge history from the
>> format of svnmerge to the new builtin format (using the official script),
>> and you're done: say hello to "-g/--use-merge-history", to be used with
>> svn log and svn blame.
>>
>> This is a good writeup of the new features:
>> http://chestofbooks.com/computers/revision-control/subversion-svn/Merge-
>> Sensitive-Logs-And-Annotations-Branchmerge-Advanced-Lo.html
>
> Unfortunately I've heard we shouldn't upgrade to svn 1.5 until more
> Linux distributions ship with it by default.

Besides that, `svn merge` cannot handle parallel branches like
trunk/py3k without lots of handholding. Unlike svnmerge.py, when you
merge to and then from a branch, it tries to merge changes that came
from trunk and produces lots of conflicts. (Before you point me at
--reintegrate, note "In Subversion 1.5, once a --reintegrate merge is
done from branch to trunk, the branch is no longer usable for further
work." from the book.) In principle, the svn devs could fix this, but
they didn't in svn 1.5.

To keep this slightly on topic ... maybe the abilities and limits of
svnmerge.py and `svn merge` should be mentioned in the PEP?

From martin at v.loewis.de  Mon Jan 26 23:21:54 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 26 Jan 2009 23:21:54 +0100
Subject: [Python-Dev] enabling a configure option
In-Reply-To: References:
Message-ID: <497E3782.5080307@v.loewis.de>

Antoine Pitrou wrote:
> Hello python-dev,
>
> r68924 in py3k introduced a new configure option named --with-computed-gotos. It
> would be nice if one of the buildbots could exercise this option, so that the
> code doesn't rot (the buildbot has to use gcc).
Whom should I ask for this? Me. Does it have to be a configure option? It is difficult to invoke different commands in different branches; better if the configures in all branches get the same options. Of course, the configure command doesn't have to be "configure"; any other script available in all branches would work (there is already Tools/buildbot for such scripts). > Speaking of which, there are only five buildbots remaining in the "stable" > bunch... What has happened to the others? I've removed all slaves that were down and where the owners either didn't respond, or indicated that they can't bring the slaves up anytime soon. Regards, Martin From jnoller at gmail.com Mon Jan 26 23:22:07 2009 From: jnoller at gmail.com (Jesse Noller) Date: Mon, 26 Jan 2009 17:22:07 -0500 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <5d44f72f0901261418y8de7bc0o3165e4bf28e01c16@mail.gmail.com> References: <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron> <5d44f72f0901261418y8de7bc0o3165e4bf28e01c16@mail.gmail.com> Message-ID: <6C0E7BE0-3F18-423F-890C-BDB29B48E64D@gmail.com> On Jan 26, 2009, at 5:18 PM, Jeffrey Yasskin wrote: > On Mon, Jan 26, 2009 at 2:00 PM, Guido van Rossum > wrote: >> On Mon, Jan 26, 2009 at 1:57 PM, Giovanni Bajo >> wrote: >>> On Mon, 26 Jan 2009 10:31:55 -0800, Guido van Rossum wrote: >>> >>>> On Mon, Jan 26, 2009 at 8:08 AM, Paul Hummer >>> > >>>> wrote: >>>>> At a previous employer, we had this same discussion about >>>>> switching to >>>>> a DVCS, and the time and cost required to learn the new tool. We >>>>> switched to bzr, and while there were days where someone got >>>>> lost in >>>>> the DVCS, the overall advantages with merging allowed that cost >>>>> to be >>>>> offset by the fact that merging was so cheap (and we merged a >>>>> lot). 
>>>>> >>>>> That's a big consideration to be made when you're considering a >>>>> DVCS. >>>>> Merges in SVN and CVS can be painful, where merging well is a core >>>>> feature of any DVCS. >>>> >>>> I hear you. I for one have been frustrated (now that you mention >>>> it) by >>>> the inability to track changes across merges. We do lots of >>>> merges from >>>> the trunk into the py3k branch, and the merge revisions in the >>>> branch >>>> quotes the full text of the changes merged from the trunk, but >>>> not the >>>> list of affected files for each revision merged. Since merges >>>> typically >>>> combine a lot of revisions, it is very painful to find out more >>>> about a >>>> particular change to a file when that change came from such a >>>> merge -- >>>> often even after reading through the entire list of descriptions >>>> you >>>> still have no idea which merged revision is responsible for a >>>> particular >>>> change. Assuming this problem does not exist in DVCS, that would >>>> be a >>>> huge bonus from switching to a DVCS! >>> >>> Well, not only it does not exist by design in any DVCS, but I have a >>> better news: it does not exist anymore in Subversion 1.5. You just >>> need >>> to upgrade your SVN server to 1.5, migrate your merge history from >>> the >>> format of svnmerge to the new builtin format (using the official >>> script), >>> and you're done: say hello to "-g/--use-merge-history", to be use >>> with >>> svn log and svn blame. >>> >>> This is a good writeup of the new features: >>> http://chestofbooks.com/computers/revision-control/subversion-svn/Merge- >>> Sensitive-Logs-And-Annotations-Branchmerge-Advanced-Lo.html >> >> Unfortunately I've heard we shouldn't upgrade to svn 1.5 until more >> Linux distributions ship with it by default. > > Besides that, `svn merge` cannot handle parallel branches like > trunk/py3k without lots of handholding. 
Unlike svnmerge.py, when you
> merge to and then from a branch, it tries to merge changes that came
> from trunk and produces lots of conflicts. (Before you point me at
> --reintegrate, note "In Subversion 1.5, once a --reintegrate merge is
> done from branch to trunk, the branch is no longer usable for further
> work." from the book.) In principle, the svn devs could fix this, but
> they didn't in svn 1.5.
>
> To keep this slightly on topic ... maybe the abilities and limits of
> svnmerge.py and `svn merge` should be mentioned in the PEP?

Every time I merge with subversion, it makes me appreciate perforce's
branching and merging that much more.

Jesse

From martin at v.loewis.de  Mon Jan 26 23:24:42 2009
From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=)
Date: Mon, 26 Jan 2009 23:24:42 +0100
Subject: [Python-Dev] PEP 374 (DVCS) now in reST
In-Reply-To: References: <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <87eiyqfzqz.fsf@xemacs.org> <5b8d13220901260044p633fd312i77bed5152059cde5@mail.gmail.com> <20090126090846.6ad1895c@megatron>
Message-ID: <497E382A.80809@v.loewis.de>

> Unfortunately I've heard we shouldn't upgrade to svn 1.5 until more
> Linux distributions ship with it by default.

We *could* upgrade to subversion 1.5 on the server (if only Debian
would get their ... together and release the version they promised for
last September). The question is then whether we would drop svnmerge,
in favour of the 1.5 merge tracking. IIUC, that would require all
committers to use 1.5 - I'm not sure whether this poses a challenge to
any committer.

Regards,
Martin

From solipsis at pitrou.net  Mon Jan 26 23:31:12 2009
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Mon, 26 Jan 2009 22:31:12 +0000 (UTC)
Subject: [Python-Dev] enabling a configure option
References: <497E3782.5080307@v.loewis.de>
Message-ID:

Martin v. Löwis v.loewis.de> writes:
>
> Me. Does it have to be a configure option?
It is difficult to invoke
> different commands in different branches; better if the configures in
> all branches get the same options.

Well, after a quick test, it seems that configure doesn't complain if you pass
it an unknown option (at least one that begins with '--with'). So we can still
use the same options on all branches.

(as for the need for it to be a configure option, it was the consensus which
emerged after discussion in the tracker entry, both to provide some flexibility
and for fear that enabling it by default could trigger some compiler bugs --
although the latter is of course unlikely)

Regards

Antoine.

From ncoghlan at gmail.com  Mon Jan 26 23:43:24 2009
From: ncoghlan at gmail.com (Nick Coghlan)
Date: Tue, 27 Jan 2009 08:43:24 +1000
Subject: [Python-Dev] subprocess crossplatformness and async communication
In-Reply-To: References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com>
Message-ID: <497E3C8C.2030303@gmail.com>

Daniel Stutzbach wrote:
> I'm confused. What's wrong with the following?
>
> p = Popen('do_something', stdin=PIPE, stdout=PIPE)
> p.stdin.write('la la la\n')
> p.stdin.flush()
> line = p.stdout.readline()
> p.stdin.write(process(line))
> p.stdin.flush()
>
> If you want to see if data is available on p.stdout, use the select
> module (unless you're on Windows).
>
> The child process has to flush its output buffer for this to work, but
> that isn't Python's problem.

Anatoly covered that in his response to Guido: "You can't launch a
process and communicate with it without blocking at some point. The
scenario where you can periodically check output and error pipes
without blocking and send input is not supported."

With the vanilla subprocess Popen implementation, the stdin.write calls
can actually both block if the stdin buffer is full (waiting for the
child process to clear space) and the stdout.readline call can
definitely block (waiting for the child process to end the line).
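For the simple feed-everything-then-read-everything case in Daniel's snippet, the stdlib already sidesteps this deadlock: Popen.communicate() pumps stdin and stdout concurrently, so neither pipe can fill up and stall parent or child. A minimal sketch (written against current Python, so the pipes carry bytes; the upper-casing child here is just a portable stand-in for a real tool):

```python
import subprocess
import sys

# An illustrative child process: upper-cases whatever arrives on stdin.
# The Python interpreter itself is used so the sketch runs anywhere.
child = [sys.executable, "-c",
         "import sys; sys.stdout.write(sys.stdin.read().upper())"]

p = subprocess.Popen(child, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# communicate() writes to stdin and drains stdout at the same time,
# so neither side can block forever on a full pipe buffer.
out, _ = p.communicate(b"la la la\n")
print(out.decode().strip())
```

Note that communicate() only helps when all input is known up front: it waits for the child to exit, so it is no answer to the interactive check-the-pipes-periodically scenario under discussion.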
So it isn't async communication in general that is the concern: it is
*non-blocking* async communication. (Since I'm happy with the idea of
threaded programs, I'd personally just farm the task off to some worker
threads and make it non-blocking that way, but that approach effectively
abandons the whole concept of non-blocking IO and its ability to avoid
using threads).

As Guido said though, while there doesn't seem to be anything
fundamentally wrong with the idea of adding better support for
non-blocking IO to subprocess, it's difficult to have an opinion without
a concrete proposal for API changes to subprocess.Popen. The linked
recipe certainly can't be adopted as is (e.g. due to the dependency on
pywin32)

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
---------------------------------------------------------------

From daniel at stutzbachenterprises.com  Tue Jan 27 00:36:37 2009
From: daniel at stutzbachenterprises.com (Daniel Stutzbach)
Date: Mon, 26 Jan 2009 17:36:37 -0600
Subject: [Python-Dev] subprocess crossplatformness and async communication
In-Reply-To: <497E3C8C.2030303@gmail.com>
References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com> <497E3C8C.2030303@gmail.com>
Message-ID:

On Mon, Jan 26, 2009 at 4:43 PM, Nick Coghlan wrote:
> With the vanilla subprocess Popen implementation, the stdin.write calls
> can actually both block if the stdin buffer is full (waiting for the
> child process to clear space) and the stdout.readline call can
> definitely block (waiting for the child process to end the line).

That's true, but for the use cases presented so far, .readline() is
satisfactory, unless you have an unusual application that will fill the
pipe without sending a single newline (in which case, see below).

> So it isn't async communication in general that is the concern: it is
> *non-blocking* async communication.
(Since I'm happy with the idea of > threaded programs, I'd personally just farm the task off to some worker > threads and make it non-blocking that way, but that approach effectively > abandons the whole concept of non-blocking IO and it's ability to avoid > using threads). If you really need to communicate with multiple subprocesses (which so far has not been suggested as a motivating example), then you can use select(). You don't need non-blocking IO to avoid using threads (although that is a common misconception). If a program never blocks, then it uses 100% of CPU by definition, which is undesirable. ;-) A program just needs select() so it knows which file descriptors it can call os.read() or os.write() on without blocking. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew-pythondev at puzzling.org Tue Jan 27 00:45:02 2009 From: andrew-pythondev at puzzling.org (Andrew Bennetts) Date: Tue, 27 Jan 2009 10:45:02 +1100 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com> <497E3C8C.2030303@gmail.com> Message-ID: <20090126234502.GA8567@steerpike.home.puzzling.org> Daniel Stutzbach wrote: [...] > > If you really need to communicate with multiple subprocesses (which so far has > not been suggested as a motivating example), then you can use select(). Not portably. select() on windows only works on sockets. -Andrew. 
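The select()-driven loop Daniel describes can be sketched as follows. Per Andrew's caveat this is POSIX-only (select() on Windows accepts only sockets), and the echoing child command is a placeholder, not a real tool:

```python
import os
import select
import subprocess
import sys

# A placeholder child that writes one line to stdout and exits.
child = [sys.executable, "-c", "print('la la la')"]
p = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

chunks = []
fds = [p.stdout.fileno(), p.stderr.fileno()]
while fds:
    # select() tells us which pipes have data (or EOF) ready right now;
    # the 1.0 s timeout keeps the parent from blocking on a quiet child.
    readable, _, _ = select.select(fds, [], [], 1.0)
    for fd in readable:
        data = os.read(fd, 4096)  # will not block after select() reported fd
        if data:
            chunks.append(data)
        else:
            fds.remove(fd)  # empty read means EOF on this pipe
p.wait()
output = b"".join(chunks)
print(output.decode())
```

os.read() on a descriptor that select() reported returns whatever is buffered without blocking, and an empty read signals EOF, which is how the loop knows each pipe is finished.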
From ncoghlan at gmail.com Tue Jan 27 00:58:42 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Tue, 27 Jan 2009 09:58:42 +1000 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: <20090126234502.GA8567@steerpike.home.puzzling.org> References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com> <497E3C8C.2030303@gmail.com> <20090126234502.GA8567@steerpike.home.puzzling.org> Message-ID: <497E4E32.7020200@gmail.com> Andrew Bennetts wrote: > Daniel Stutzbach wrote: > [...] >> If you really need to communicate with multiple subprocesses (which so far has >> not been suggested as a motivating example), then you can use select(). > > Not portably. select() on windows only works on sockets. In addition, select() is exactly what the linked recipe uses to implement non-blocking I/O for subprocesses on non-Windows platforms. I agree the actual use cases need to be better articulated in any RFE that is actually posted, but a cleanly encapsulated approach to non-blocking communication with subprocesses over stdin/out/err certainly sounds like something that could reasonably be added to the subprocess module. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From loewis at informatik.hu-berlin.de Mon Jan 26 22:55:29 2009 From: loewis at informatik.hu-berlin.de (Martin v. Loewis) Date: Mon, 26 Jan 2009 22:55:29 +0100 Subject: [Python-Dev] subprocess crossplatformness and async communication In-Reply-To: <497DB62C.4050108@avl.com> References: <20407628.490931.1232972271521.JavaMail.xicrypt@atgrzls001> <497DB62C.4050108@avl.com> Message-ID: <497E3151.9090703@informatik.hu-berlin.de> > Can this really be made safe without an explicit flow control protocol, > such as a pseudo-TTY? stdio reads data from pipes such as stdin in 4K > or so chunks. I don't think the subprocess module uses stdio. 
> I can easily imagine the child blocking while it waits > for its stdin buffer to fill, while the parent in turn blocks waiting > for the child's output arrive. That would be a bug in the parent process, for expecting output when none will arrive. As a consequence, some child programs might not be suitable for this kind of operation. This is no surprise - some programs are not suitable for automated operation at all, because they bring up windows, and communicate with their environment by means other than stdin and stdout (or, if you want to operate them automatically, you have to use AppleScript, or COM). Regards, Martin From barry at python.org Tue Jan 27 03:20:22 2009 From: barry at python.org (Barry Warsaw) Date: Mon, 26 Jan 2009 21:20:22 -0500 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: <573EFD3B-2215-4747-B4B0-45C35A9F9F86@gmail.com> References: <20090124152507.GA23994@panix.com> <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <573EFD3B-2215-4747-B4B0-45C35A9F9F86@gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 25, 2009, at 10:27 PM, Jared Grubb wrote: > Regardless of the outcome, those that want to use SVN can use SVN, > and those that want to use "chosen DVCS" can use that. In the end, > which is the more "lossy" repository? It seems like if the change is > transparent to everyone who is using it, then the only thing that we > care about is that the chosen backend will preserve all the > information to make it truly transparent to everyone involved. svn is the more lossy repository format. 
Barry

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (Darwin)

iQCVAwUBSX5vZnEjvBPtnXfVAQJzcgP/SweUwXoECPJpO5BEkmdTLDoEfPP1X1Lg
m4AALSFZ3cfRUPX3UgGmT7anY604o5oaElFR8b0HkIScJvhF56nzs9oAR0Yqi8jN
zThG1rizDHh+RSqUZ0yXKHVF6ScNf8aRg/cLoVtV+J6KGpYtTCSGTQWGnvSQxCj9
I+BY75DHOI8=
=9A3a
-----END PGP SIGNATURE-----

From nnorwitz at gmail.com  Tue Jan 27 06:17:58 2009
From: nnorwitz at gmail.com (Neal Norwitz)
Date: Mon, 26 Jan 2009 21:17:58 -0800
Subject: [Python-Dev] enabling a configure option
In-Reply-To: References: <497E3782.5080307@v.loewis.de>
Message-ID:

If you only care about this running on a single machine to get some
coverage and don't care about all architectures, you can change
Misc/build.sh to add the configure option.

n

On Mon, Jan 26, 2009 at 2:31 PM, Antoine Pitrou wrote:
> Martin v. Löwis v.loewis.de> writes:
>>
>> Me. Does it have to be a configure option? It is difficult to invoke
>> different commands in different branches; better if the configures in
>> all branches get the same options.
>
> Well, after a quick test, it seems that configure doesn't complain if you pass
> it an unknown option (at least one that begins with '--with'). So we can still
> use the same options on all branches.
>
> (as for the need for it to be a configure option, it was the consensus which
> emerged after discussion in the tracker entry, both to provide some flexibility
> and for fear that enabling it by default could trigger some compiler bugs --
> although the latter is of course unlikely)
>
> Regards
>
> Antoine.
> > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/nnorwitz%40gmail.com > From eckhardt at satorlaser.com Tue Jan 27 11:36:40 2009 From: eckhardt at satorlaser.com (Ulrich Eckhardt) Date: Tue, 27 Jan 2009 11:36:40 +0100 Subject: [Python-Dev] FormatError() in callproc.c under win32 In-Reply-To: References: <200901261904.17978.eckhardt@satorlaser.com> Message-ID: <200901271136.40940.eckhardt@satorlaser.com> On Monday 26 January 2009, Thomas Heller wrote: > Ulrich Eckhardt schrieb: > > In callproc.c from trunk is a function called SetException(), which calls [...] > > My third approach would be to filter out the special error codes first > > and delegate all others to PyErr_SetFromWindowsErr(). The latter already > > handles the lack of a string for the code by formatting it numerically. > > This would also improve consistency, since the two functions use > > different ways to format unrecognised errors numerically. This approach > > would change where and how a completely unrecognised error code is > > formatted, but would otherwise be pretty equivalent. > > The third approach is fine with me. Sidenote: The only error codes that I > remember having seen in practice are 'access violation reading...' and > 'access violation writing...', although it may be that on WinCE 'datatype > misalignment' may also be possible. Submitted as patch for issue #5078. Note: under CE, you can actually encounter datatype misalignments, since it runs on CPUs that don't emulate them. I wonder if the same doesn't also apply to win64.... 
Uli

--
Sator Laser GmbH
Geschäftsführer: Thorsten Fücking, Amtsgericht Hamburg HR B62 932

**************************************************************************************
Sator Laser GmbH, Fangdieckstraße 75a, 22547 Hamburg, Deutschland
Geschäftsführer: Thorsten Fücking, Amtsgericht Hamburg HR B62 932
**************************************************************************************
Visit our website at
**************************************************************************************
Diese E-Mail einschließlich sämtlicher Anhänge ist nur für den Adressaten bestimmt und kann vertrauliche Informationen enthalten. Bitte benachrichtigen Sie den Absender umgehend, falls Sie nicht der beabsichtigte Empfänger sein sollten. Die E-Mail ist in diesem Fall zu löschen und darf weder gelesen, weitergeleitet, veröffentlicht oder anderweitig benutzt werden. E-Mails können durch Dritte gelesen werden und Viren sowie nichtautorisierte Änderungen enthalten. Sator Laser GmbH ist für diese Folgen nicht verantwortlich.
**************************************************************************************

From eckhardt at satorlaser.com  Tue Jan 27 12:16:01 2009
From: eckhardt at satorlaser.com (Ulrich Eckhardt)
Date: Tue, 27 Jan 2009 12:16:01 +0100
Subject: [Python-Dev] FormatError() in callproc.c under win32
In-Reply-To: <497E3546.8000408@v.loewis.de>
References: <200901261904.17978.eckhardt@satorlaser.com> <497E3546.8000408@v.loewis.de>
Message-ID: <200901271216.01726.eckhardt@satorlaser.com>

On Monday 26 January 2009, Martin v. Löwis wrote:
> > In callproc.c from trunk is a function called SetException(), which calls
> > FormatError() only to discard the contents. Can anyone enlighten me to
> > the reasons thereof?
>
> Interestingly enough, the code used to say
>
> PyErr_SetString(PyExc_WindowsError, lpMsgBuf);
>
> Then it was changed to its current form, with a log message of
>
> Changes for windows CE, contributed by Luke Dunstan. Thanks a lot!
> > See
> > http://ctypes.cvs.sourceforge.net/viewvc/ctypes/ctypes/source/callproc.c?hideattic=0&r1=1.127.2.15&r2=1.127.2.16
> > I suggest you ask Thomas Heller and Luke Dunstan (if available) what the
> > rationale for this partial change was.

I can only guess:

1. Those changes seem to generate TCHAR strings. This is necessary to compile
it on both win9x (TCHAR=char) and CE (TCHAR=wchar_t). Since win9x was dropped
from the supported platforms, that isn't necessary any more and all the code
could use WCHAR directly.

2. Those changes also seem to change a few byte-strings to Unicode-strings,
see format_error(). This is a questionable step, since those are changes that
are visible to Python code. Worse, even on the same platform it could return
different string types when the lookup of the error code fails. I wonder if
that is intentional.

In any case, CCing Luke on the issue, maybe he can clarify things.

cheers

Uli

From steve at holdenweb.com  Tue Jan 27 13:08:46 2009
From: steve at holdenweb.com (Steve Holden)
Date: Tue, 27 Jan 2009 07:08:46 -0500
Subject: [Python-Dev] [PSF-Board] I've got a surprise for you!
In-Reply-To: <497E9320.2030404@sun.com>
References: <20090126233246.GA37662@wind.teleri.net> <497E9320.2030404@sun.com>
Message-ID: <497EF94E.3050302@holdenweb.com>

Jim Walker wrote:
[Trent's announcement]
>
> Great stuff Trent! I was wondering how you were doing.
>
> I really appreciate what it takes to put these open resources
> together ;) There's a lot of moving parts :)
>
> Cheers,
> Jim
>
> BTW.
>
> We now have zone servers in the OpenSolaris test farm, and
> I plan to add guest os servers in the next few weeks using
> ldoms (sparc) and xvm (x64). The zone servers provide whole
> root zones, which should be a good development environment
> for most projects. Check it out:
>
> http://test.opensolaris.org/testfarm
> http://www.opensolaris.org/os/community/testing/testfarm/zones/
>
> Let me know if there is interest from the python community to
> manage one of the test farm servers for python development.
> Besides the general use machines, the php community is already
> managing a T2000 server.

Jim:

Thanks, this is a terrific offer. I am copying it to the Python
developers list so they can discuss it - I know that Solaris is one of
the platforms we do get quite a few build questions about.

regards
Steve
--
Steve Holden        +1 571 484 6266   +1 800 494 3119
Chairman, PSF       http://www.holdenweb.com/

From jake at youtube.com  Tue Jan 27 10:49:29 2009
From: jake at youtube.com (Jake McGuire)
Date: Tue, 27 Jan 2009 01:49:29 -0800
Subject: [Python-Dev] undesireable unpickle behavior, proposed fix
Message-ID: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com>

Instance attribute names are normally interned - this is done in
PyObject_SetAttr (among other places).
Unpickling (in pickle and cPickle) directly updates __dict__ on the instance object. This bypasses the interning so you end up with many copies of the strings representing your attribute names, which wastes a lot of space, both in RAM and in pickles of sequences of objects created from pickles. Note that the native python memcached client uses pickle to serialize objects. >>> import pickle >>> class C(object): ... def __init__(self, x): ... self.long_attribute_name = x ... >>> len(pickle.dumps([pickle.loads(pickle.dumps(C(None), pickle.HIGHEST_PROTOCOL)) for i in range(100)], pickle.HIGHEST_PROTOCOL)) 3658 >>> len(pickle.dumps([C(None) for i in range(100)], pickle.HIGHEST_PROTOCOL)) 1441 >>> Interning the strings on unpickling makes the pickles smaller, and at least for cPickle actually makes unpickling sequences of many objects slightly faster. I have included proposed patches to cPickle.c and pickle.py, and would appreciate any feedback. dhcp-172-31-170-32:~ mcguire$ diff -u Downloads/Python-2.4.3/Modules/ cPickle.c cPickle.c --- Downloads/Python-2.4.3/Modules/cPickle.c 2004-07-26 22:22:33.000000000 -0700 +++ cPickle.c 2009-01-26 23:30:31.000000000 -0800 @@ -4258,6 +4258,8 @@ PyObject *state, *inst, *slotstate; PyObject *__setstate__; PyObject *d_key, *d_value; + PyObject *name; + char * key_str; int i; int res = -1; @@ -4319,8 +4321,24 @@ i = 0; while (PyDict_Next(state, &i, &d_key, &d_value)) { - if (PyObject_SetItem(dict, d_key, d_value) < 0) - goto finally; + /* normally the keys for instance attributes are + interned. we should try to do that here. */ + if (PyString_CheckExact(d_key)) { + key_str = PyString_AsString(d_key); + name = PyString_FromString(key_str); + if (! 
name) + goto finally; + + PyString_InternInPlace(&name); + if (PyObject_SetItem(dict, name, d_value) < 0) { + Py_DECREF(name); + goto finally; + } + Py_DECREF(name); + } else { + if (PyObject_SetItem(dict, d_key, d_value) < 0) + goto finally; + } } Py_DECREF(dict); } dhcp-172-31-170-32:~ mcguire$ diff -u Downloads/Python-2.4.3/Lib/ pickle.py pickle.py --- Downloads/Python-2.4.3/Lib/pickle.py 2009-01-27 01:41:43.000000000 -0800 +++ pickle.py 2009-01-27 01:41:31.000000000 -0800 @@ -1241,7 +1241,15 @@ state, slotstate = state if state: try: - inst.__dict__.update(state) + d = inst.__dict__ + try: + for k,v in state.items(): + d[intern(k)] = v + # keys in state don't have to be strings + # don't blow up, but don't go out of our way + except TypeError: + d.update(state) + except RuntimeError: # XXX In restricted execution, the instance's __dict__ # is not accessible. Use the old way of unpickling From jnoller at gmail.com Tue Jan 27 15:23:06 2009 From: jnoller at gmail.com (Jesse Noller) Date: Tue, 27 Jan 2009 09:23:06 -0500 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> Message-ID: <4222a8490901270623l765523e8gc9bdef65287fd010@mail.gmail.com> On Tue, Jan 27, 2009 at 4:49 AM, Jake McGuire wrote: > Instance attribute names are normally interned - this is done in > PyObject_SetAttr (among other places). Unpickling (in pickle and cPickle) > directly updates __dict__ on the instance object. This bypasses the > interning so you end up with many copies of the strings representing your > attribute names, which wastes a lot of space, both in RAM and in pickles of > sequences of objects created from pickles. Note that the native python > memcached client uses pickle to serialize objects. > >>>> import pickle >>>> class C(object): > ... def __init__(self, x): > ... self.long_attribute_name = x > ... 
>>>> len(pickle.dumps([pickle.loads(pickle.dumps(C(None), >>>> pickle.HIGHEST_PROTOCOL)) for i in range(100)], pickle.HIGHEST_PROTOCOL)) > 3658 >>>> len(pickle.dumps([C(None) for i in range(100)], >>>> pickle.HIGHEST_PROTOCOL)) > 1441 >>>> > > Interning the strings on unpickling makes the pickles smaller, and at least > for cPickle actually makes unpickling sequences of many objects slightly > faster. I have included proposed patches to cPickle.c and pickle.py, and > would appreciate any feedback. > > dhcp-172-31-170-32:~ mcguire$ diff -u > Downloads/Python-2.4.3/Modules/cPickle.c cPickle.c > --- Downloads/Python-2.4.3/Modules/cPickle.c 2004-07-26 > 22:22:33.000000000 -0700 > +++ cPickle.c 2009-01-26 23:30:31.000000000 -0800 > @@ -4258,6 +4258,8 @@ > PyObject *state, *inst, *slotstate; > PyObject *__setstate__; > PyObject *d_key, *d_value; > + PyObject *name; > + char * key_str; > int i; > int res = -1; > > @@ -4319,8 +4321,24 @@ > > i = 0; > while (PyDict_Next(state, &i, &d_key, &d_value)) { > - if (PyObject_SetItem(dict, d_key, d_value) < 0) > - goto finally; > + /* normally the keys for instance attributes are > + interned. we should try to do that here. */ > + if (PyString_CheckExact(d_key)) { > + key_str = PyString_AsString(d_key); > + name = PyString_FromString(key_str); > + if (! 
name)
> +                                goto finally;
> +
> +                                PyString_InternInPlace(&name);
> +                                if (PyObject_SetItem(dict, name, d_value) < 0) {
> +                                        Py_DECREF(name);
> +                                        goto finally;
> +                                }
> +                                Py_DECREF(name);
> +                        } else {
> +                                if (PyObject_SetItem(dict, d_key, d_value) < 0)
> +                                        goto finally;
> +                        }
>                 }
>                 Py_DECREF(dict);
>         }
>
> dhcp-172-31-170-32:~ mcguire$ diff -u Downloads/Python-2.4.3/Lib/pickle.py
> pickle.py
> --- Downloads/Python-2.4.3/Lib/pickle.py        2009-01-27 01:41:43.000000000 -0800
> +++ pickle.py   2009-01-27 01:41:31.000000000 -0800
> @@ -1241,7 +1241,15 @@
>              state, slotstate = state
>          if state:
>              try:
> -                inst.__dict__.update(state)
> +                d = inst.__dict__
> +                try:
> +                    for k,v in state.items():
> +                        d[intern(k)] = v
> +                # keys in state don't have to be strings
> +                # don't blow up, but don't go out of our way
> +                except TypeError:
> +                    d.update(state)
> +
>              except RuntimeError:
>                  # XXX In restricted execution, the instance's __dict__
>                  # is not accessible. Use the old way of unpickling

Hi Jake,

You should really post this to bugs.python.org as an enhancement so we
can track the discussion there.

-jesse

From mrts.pydev at gmail.com  Tue Jan 27 15:50:12 2009
From: mrts.pydev at gmail.com (=?ISO-8859-1?Q?Mart_S=F5mermaa?=)
Date: Tue, 27 Jan 2009 16:50:12 +0200
Subject: [Python-Dev] V8, TraceMonkey, SquirrelFish and Python
Message-ID:

As most of you know there's constant struggle on the JavaScript front to get
even faster performance out of interpreters.
V8, TraceMonkey and SquirrelFish have brought novel ideas to interpreter
design, wouldn't it make sense to reap the best bits and bring them to
Python?

Has anyone delved into the designs and considered their applicability to
Python?

Hoping-to-see-some-V8-and-Python-teams-collaboration-in-Mountain-View-ly
yours,
Mart Sõmermaa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jnoller at gmail.com Tue Jan 27 16:04:36 2009 From: jnoller at gmail.com (Jesse Noller) Date: Tue, 27 Jan 2009 10:04:36 -0500 Subject: [Python-Dev] V8, TraceMonkey, SquirrelFish and Python In-Reply-To: References: Message-ID: <4222a8490901270704y41308595i2bdb65dab7bc4b07@mail.gmail.com> On Tue, Jan 27, 2009 at 9:50 AM, Mart S?mermaa wrote: > As most of you know there's constant struggle on the JavaScript front to get > even faster performance out of interpreters. > V8, TraceMonkey and SquirrelFish have brought novel ideas to interpreter > design, wouldn't it make sense to reap the best bits and bring them to > Python? > > Has anyone delved into the designs and considered their applicability to > Python? > > Hoping-to-see-some-V8-and-Python-teams-collaboration-in-Mountain-View-ly > yours, > Mart S?mermaa > Hi Mart, This is a better discussion for the python-ideas list. That being said, there was a thread discussing this last year, see: http://mail.python.org/pipermail/python-dev/2008-October/083176.html -jesse From steve at holdenweb.com Tue Jan 27 16:17:08 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 27 Jan 2009 10:17:08 -0500 Subject: [Python-Dev] V8, TraceMonkey, SquirrelFish and Python In-Reply-To: <4222a8490901270704y41308595i2bdb65dab7bc4b07@mail.gmail.com> References: <4222a8490901270704y41308595i2bdb65dab7bc4b07@mail.gmail.com> Message-ID: <497F2574.1090007@holdenweb.com> Jesse Noller wrote: > On Tue, Jan 27, 2009 at 9:50 AM, Mart S?mermaa wrote: >> As most of you know there's constant struggle on the JavaScript front to get >> even faster performance out of interpreters. >> V8, TraceMonkey and SquirrelFish have brought novel ideas to interpreter >> design, wouldn't it make sense to reap the best bits and bring them to >> Python? >> >> Has anyone delved into the designs and considered their applicability to >> Python? 
>> >> Hoping-to-see-some-V8-and-Python-teams-collaboration-in-Mountain-View-ly >> yours, >> Mart S?mermaa >> > > Hi Mart, > > This is a better discussion for the python-ideas list. That being > said, there was a thread discussing this last year, see: > > http://mail.python.org/pipermail/python-dev/2008-October/083176.html > I am sure this will be included as a part of the discussion at the VM summit that's taking place as a part of the pre-PyCon activity. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From coder_infidel at hotmail.com Tue Jan 27 16:29:25 2009 From: coder_infidel at hotmail.com (Luke Dunstan) Date: Wed, 28 Jan 2009 00:29:25 +0900 Subject: [Python-Dev] FormatError() in callproc.c under win32 In-Reply-To: <200901271216.01726.eckhardt@satorlaser.com> References: <200901261904.17978.eckhardt@satorlaser.com> <497E3546.8000408@v.loewis.de> <200901271216.01726.eckhardt@satorlaser.com> Message-ID: > From: eckhardt at satorlaser.com > To: python-dev at python.org > Subject: Re: [Python-Dev] FormatError() in callproc.c under win32 > Date: Tue, 27 Jan 2009 12:16:01 +0100 > CC: coder_infidel at hotmail.com > > On Monday 26 January 2009, Martin v. L?wis wrote: > > > In callproc.c from trunk is a function called SetException(), which calls > > > FormatError() only to discard the contents. Can anyone enlighten me to > > > the reasons thereof? The left over call to FormatError() looks like a mistake to me. > > > > Interestingly enough, the code used to say > > > > PyErr_SetString(PyExc_WindowsError, lpMsgBuf); > > > > Then it was changed to its current form, with a log message of > > > > Changes for windows CE, contributed by Luke Dunstan. Thanks a lot! > > > > See > > > > > http://ctypes.cvs.sourceforge.net/viewvc/ctypes/ctypes/source/callproc.c?hideattic=0&r1=1.127.2.15&r2=1.127.2.16 > > > > I suggest you ask Thomas Heller and Luke Dunstan (if available) what the > > rationale for this partial change was. 
> > I can only guess: > 1. Those changes seem to generate TCHAR strings. This is necessary to compile > it on both win9x (TCHAR=char) and CE (TCHAR=wchar_t). Since win9x was dropped > from the supported platforms, that isn't necessary any more and all the code > could use WCHAR directly. As far as I remember TCHAR was char for Windows NT/2K/XP Python builds too, at least at that time, but yes it would be clearer to use WCHAR instead now. > 2. Those changes also seem to change a few byte-strings to Unicode-strings, > see format_error(). This is a questionable step, since those are changes that > are visible to Python code. Worse, even on the same platform it could return > different string types when the lookup of the errorcode fails. I wonder if > that is intentional. Probably not intentional. Yes, it would be better if the return value was either always char or always WCHAR. > > In any case, CCing Luke on the issue, maybe he can clarify things. > > cheers > > Uli Good luck, Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at holdenweb.com Tue Jan 27 18:32:57 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 27 Jan 2009 12:32:57 -0500 Subject: [Python-Dev] Sun resources [Was: I've got a surprise for you!] Message-ID: <497F4549.3030106@holdenweb.com> So, if anyone wants to run a Sun buildbot or whatever, Jim would be the person to contact. Synchronize on this list to ensure Jim doesn't get multiple approaches, please. regards Steve -------- Original Message -------- Subject: Re: [PSF-Board] I've got a surprise for you! Date: Tue, 27 Jan 2009 10:49:29 -0700 From: Jim Walker Reply-To: James.Walker at Sun.COM Organization: Sun Microsystems, Inc. To: Steve Holden References: <20090126233246.GA37662 at wind.teleri.net> <497E9320.2030404 at sun.com> <497EF94E.3050302 at holdenweb.com> Steve Holden wrote: > > Thanks, this is a terrific offer. 
I am copying it to the Python > developers list so they can discuss it - I know that Solaris is one of > the platforms we do get quite a few build questions about. > Sounds good. Depending on what you want to do, I can assign a system to your group within a week or two. Cheers, Jim -- Jim Walker, http://blogs.sun.com/jwalker Sun Microsystems, Software, Solaris QE x77744, 500 Eldorado Blvd, Broomfield CO 80021 -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From guido at python.org Tue Jan 27 18:57:40 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 09:57:40 -0800 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <4222a8490901270623l765523e8gc9bdef65287fd010@mail.gmail.com> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <4222a8490901270623l765523e8gc9bdef65287fd010@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 6:23 AM, Jesse Noller wrote: > On Tue, Jan 27, 2009 at 4:49 AM, Jake McGuire wrote: >> Instance attribute names are normally interned - this is done in >> PyObject_SetAttr (among other places). Unpickling (in pickle and cPickle) >> directly updates __dict__ on the instance object. This bypasses the >> interning so you end up with many copies of the strings representing your >> attribute names, which wastes a lot of space, both in RAM and in pickles of >> sequences of objects created from pickles. Note that the native python >> memcached client uses pickle to serialize objects. >> >>>>> import pickle >>>>> class C(object): >> ... def __init__(self, x): >> ... self.long_attribute_name = x >> ... 
>>>>> len(pickle.dumps([pickle.loads(pickle.dumps(C(None), >>>>> pickle.HIGHEST_PROTOCOL)) for i in range(100)], pickle.HIGHEST_PROTOCOL)) >> 3658 >>>>> len(pickle.dumps([C(None) for i in range(100)], >>>>> pickle.HIGHEST_PROTOCOL)) >> 1441 >>>>> >> >> Interning the strings on unpickling makes the pickles smaller, and at least >> for cPickle actually makes unpickling sequences of many objects slightly >> faster. I have included proposed patches to cPickle.c and pickle.py, and >> would appreciate any feedback. >> >> dhcp-172-31-170-32:~ mcguire$ diff -u >> Downloads/Python-2.4.3/Modules/cPickle.c cPickle.c >> --- Downloads/Python-2.4.3/Modules/cPickle.c 2004-07-26 >> 22:22:33.000000000 -0700 >> +++ cPickle.c 2009-01-26 23:30:31.000000000 -0800 >> @@ -4258,6 +4258,8 @@ >> PyObject *state, *inst, *slotstate; >> PyObject *__setstate__; >> PyObject *d_key, *d_value; >> + PyObject *name; >> + char * key_str; >> int i; >> int res = -1; >> >> @@ -4319,8 +4321,24 @@ >> >> i = 0; >> while (PyDict_Next(state, &i, &d_key, &d_value)) { >> - if (PyObject_SetItem(dict, d_key, d_value) < 0) >> - goto finally; >> + /* normally the keys for instance attributes are >> + interned. we should try to do that here. */ >> + if (PyString_CheckExact(d_key)) { >> + key_str = PyString_AsString(d_key); >> + name = PyString_FromString(key_str); >> + if (! 
name) >> + goto finally; >> + >> + PyString_InternInPlace(&name); >> + if (PyObject_SetItem(dict, name, d_value) < >> 0) { >> + Py_DECREF(name); >> + goto finally; >> + } >> + Py_DECREF(name); >> + } else { >> + if (PyObject_SetItem(dict, d_key, d_value) < >> 0) >> + goto finally; >> + } >> } >> Py_DECREF(dict); >> } >> >> dhcp-172-31-170-32:~ mcguire$ diff -u Downloads/Python-2.4.3/Lib/pickle.py >> pickle.py >> --- Downloads/Python-2.4.3/Lib/pickle.py 2009-01-27 >> 01:41:43.000000000 -0800 >> +++ pickle.py 2009-01-27 01:41:31.000000000 -0800 >> @@ -1241,7 +1241,15 @@ >> state, slotstate = state >> if state: >> try: >> - inst.__dict__.update(state) >> + d = inst.__dict__ >> + try: >> + for k,v in state.items(): >> + d[intern(k)] = v >> + # keys in state don't have to be strings >> + # don't blow up, but don't go out of our way >> + except TypeError: >> + d.update(state) >> + >> except RuntimeError: >> # XXX In restricted execution, the instance's __dict__ >> # is not accessible. Use the old way of unpickling >> > > Hi Jake, > > You should really post this to bugs.python.org as an enhancement so we > can track the discussion there. > > -jesse Seconded, with eagerness -- interning attribute names when unpickling makes a lot of sense! -- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Tue Jan 27 19:43:30 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 19:43:30 +0100 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> Message-ID: <497F55D2.1020805@v.loewis.de> > Interning the strings on unpickling makes the pickles smaller, and at > least for cPickle actually makes unpickling sequences of many objects > slightly faster. I have included proposed patches to cPickle.c and > pickle.py, and would appreciate any feedback. 
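The size gap in the quoted session comes from pickle's memo, which tracks objects by identity rather than by equality: a key string shared by every instance is written once and back-referenced afterwards, while equal-but-distinct key strings are written out in full every time. A small sketch of that effect in pure Python (byte counts vary with pickle protocol and Python version, so none are claimed here):

```python
import pickle

KEY = 'long_attribute_name'

# 100 dicts sharing one key object: the memo emits the string once and
# cheap back-references for the remaining 99 occurrences.
shared = [{KEY: None} for _ in range(100)]

# 100 dicts whose keys are equal but distinct string objects: the
# identity-based memo never matches, so the full text is written each time.
fresh = [{''.join(list(KEY)): None} for _ in range(100)]

shared_size = len(pickle.dumps(shared, pickle.HIGHEST_PROTOCOL))
fresh_size = len(pickle.dumps(fresh, pickle.HIGHEST_PROTOCOL))
print(shared_size, fresh_size)
```

Interning the keys on unpickling restores the sharing, which is why the re-pickled objects shrink back toward the original size.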
Please submit patches always to the bug tracker. On the proposed change: While it is fairly unintrusive, I would like to propose a different approach - pickle interned strings special. The marshal module already uses this approach, and it should extend to pickle (although it would probably require a new protocol). On pickling, inspect each string and check whether it is interned. If so, emit a different code, and record it into the object id dictionary. On a second occurrence of the string, only pickle a backward reference. (Alternatively, check whether pickling the same string a second time would be more compact). On unpickling, support the new code to intern the result strings; subsequent references to it will go to the standard backreferencing algorithm. Regards, Martin From martin at v.loewis.de Tue Jan 27 19:45:27 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 19:45:27 +0100 Subject: [Python-Dev] FormatError() in callproc.c under win32 In-Reply-To: <200901271136.40940.eckhardt@satorlaser.com> References: <200901261904.17978.eckhardt@satorlaser.com> <200901271136.40940.eckhardt@satorlaser.com> Message-ID: <497F5647.4070806@v.loewis.de> > Note: under CE, you can actually encounter datatype misalignments, since it > runs on CPUs that don't emulate them. I wonder if the same doesn't also apply > to win64.... I don't think you can get misalignment traps on AMD64. Not sure about IA-64: I know that the processor will trap on misaligned accesses, but the operating system might silently fix the access. Regards, Martin From guido at python.org Tue Jan 27 19:57:22 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 10:57:22 -0800 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <497F55D2.1020805@v.loewis.de> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> Message-ID: On Tue, Jan 27, 2009 at 10:43 AM, "Martin v. 
Löwis" wrote: >> Interning the strings on unpickling makes the pickles smaller, and at >> least for cPickle actually makes unpickling sequences of many objects >> slightly faster. I have included proposed patches to cPickle.c and >> pickle.py, and would appreciate any feedback. > > Please submit patches always to the bug tracker. > > On the proposed change: While it is fairly unintrusive, I would like to > propose a different approach - pickle interned strings special. The > marshal module already uses this approach, and it should extend to > pickle (although it would probably require a new protocol). > > On pickling, inspect each string and check whether it is interned. If > so, emit a different code, and record it into the object id dictionary. > On a second occurrence of the string, only pickle a backward reference. > (Alternatively, check whether pickling the same string a second time > would be more compact). > > On unpickling, support the new code to intern the result strings; > subsequent references to it will go to the standard backreferencing > algorithm. Hm. This would change the pickling format though. Wouldn't just interning (short) strings on unpickling be simpler? -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Tue Jan 27 20:00:11 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 11:00:11 -0800 Subject: [Python-Dev] Python 3.0.1 Message-ID: With the extensive changes in the works, Python 3.0.1 is shaping-up to be a complete rerelease of 3.0 with API changes and major usability fixes. It will fully supplant the original 3.0 release which was hobbled by poor IO performance. I propose to make the new release more attractive by backporting several module improvements already in 3.1, including two new itertools and one collections class.
These are already fully documented, tested, and checked-in to 3.1 and it would be a shame to let them sit idle for a year or so, when the module updates are already ready-to-ship. Raymond From guido at python.org Tue Jan 27 20:05:03 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 11:05:03 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: Message-ID: On Tue, Jan 27, 2009 at 11:00 AM, Raymond Hettinger wrote: > With the extensive changes in the works, Python 3.0.1 is shaping-up to be a > complete rerelease of 3.0 with API changes and major usability fixes. It > will fully supplant the original 3.0 release which was hobbled by poor IO > performance. > > I propose to make the new release more attractive by backporting several > module improvements already in 3.1, including two new itertools and one > collections class. These are already fully documented, tested, and > checked-in to 3.1 and it would be a shame to let them sit idle for a year or > so, when the module updates are already ready-to-ship. In that case, I recommend just releasing it as 3.1. I had always anticipated a 3.1 release much sooner than the typical release schedule. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Tue Jan 27 20:09:30 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 11:09:30 -0800 Subject: [Python-Dev] Python 3.0.1 References: Message-ID: <2D3E0C7651454837ACF976E1D0563116@RaymondLaptop1> From: "Guido van Rossum" > In that case, I recommend just releasing it as 3.1. I had always > anticipated a 3.1 release much sooner than the typical release > schedule. That is a great idea. It's a strong cue that there is a somewhat major break with 3.0 (removed functions, API fixes, huge performance fixes, and whatnot).
Raymond From barry at python.org Tue Jan 27 20:29:03 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 27 Jan 2009 14:29:03 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: Message-ID: <66CA6609-A9B6-4544-BDB4-9B5E0F618446@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 27, 2009, at 2:05 PM, Guido van Rossum wrote: > On Tue, Jan 27, 2009 at 11:00 AM, Raymond Hettinger > wrote: >> With the extensive changes in the works, Python 3.0.1 is shaping-up >> to be a >> complete rerelease of 3.0 with API changes and major usability >> fixes. It >> will fully supplant the original 3.0 release which was hobbled by >> poor IO >> performance. >> >> I propose to make the new release more attractive by backporting >> several >> module improvements already in 3.1, including two new itertools and >> one >> collections class. These are already fully documented, tested, and >> checked-in to 3.1 and it would be ashamed to let them sit idle for >> a year or >> so, when the module updates are already ready-to-ship. > > In that case, I recommend just releasing it as 3.1. I had always > anticipated a 3.1 release much sooner than the typical release > schedule. I was going to object on principle to Raymond's suggestion to rip out the operator module functions in Python 3.0.1. I have no objection to ripping them out for 3.1. If you really think we need a Python 3.1 soon, then I won't worry about trying to get a 3.0.1 out soon. 3.1 is Benjamin's baby :). If OTOH we do intend to get a 3.0.1 out, say by the end of February, then please be careful to adhere to our guidelines for which version various changes can go in. For example, the operator methods needs to be restored to the 3.0 maintenance branch, and any other API changes added to 3.0 need to be backed out and applied only to the python3 trunk. 
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSX9ggHEjvBPtnXfVAQJkTwQAmpKLlXwiIdgHANxlj85wNko4kB7o8Xv8 8wKT6/ZZeU8t09eelchklhw9rAB4I/BQcoQYPg9jiUydbFWdPd/0/G8xrr+F+dTO J2fkGEK1GVorcAZ3iWywpLQXPnHgfrelUBhKT5KzIu5xWzuEnLBDT3c+r2fwNZia hNpAu1Ihj+s= =g69v -----END PGP SIGNATURE----- From brett at python.org Tue Jan 27 20:39:13 2009 From: brett at python.org (Brett Cannon) Date: Tue, 27 Jan 2009 11:39:13 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <66CA6609-A9B6-4544-BDB4-9B5E0F618446@python.org> References: <66CA6609-A9B6-4544-BDB4-9B5E0F618446@python.org> Message-ID: On Tue, Jan 27, 2009 at 11:29, Barry Warsaw wrote: > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > On Jan 27, 2009, at 2:05 PM, Guido van Rossum wrote: > >> On Tue, Jan 27, 2009 at 11:00 AM, Raymond Hettinger >> wrote: >>> >>> With the extensive changes in the works, Python 3.0.1 is shaping-up to be >>> a >>> complete rerelease of 3.0 with API changes and major usability fixes. It >>> will fully supplant the original 3.0 release which was hobbled by poor IO >>> performance. >>> >>> I propose to make the new release more attractive by backporting several >>> module improvements already in 3.1, including two new itertools and one >>> collections class. These are already fully documented, tested, and >>> checked-in to 3.1 and it would be ashamed to let them sit idle for a year >>> or >>> so, when the module updates are already ready-to-ship. >> >> In that case, I recommend just releasing it as 3.1. I had always >> anticipated a 3.1 release much sooner than the typical release >> schedule. > A quick 3.1 release also shows how committed we are to 3.x and that we realize that 3.0 had some initial growing pains that needed to be worked out. > I was going to object on principle to Raymond's suggestion to rip out the > operator module functions in Python 3.0.1. I thought it was for 3.1? > I have no objection to ripping > them out for 3.1. 
> > If you really think we need a Python 3.1 soon, then I won't worry about > trying to get a 3.0.1 out soon. 3.1 is Benjamin's baby :). > Depending on what Benjamin wants to do we could try for something like a release by PyCon or at PyCon during the sprints. Actually the sprint one is a rather nice idea if Benjamin is willing to spend sprint time on it (and he is sticking around for the sprints) as I assume you, Barry, will be there to be able to help in person and we can squash last minute issues really quickly. > If OTOH we do intend to get a 3.0.1 out, say by the end of February, then > please be careful to adhere to our guidelines for which version various > changes can go in. For example, the operator methods needs to be restored > to the 3.0 maintenance branch, and any other API changes added to 3.0 need > to be backed out and applied only to the python3 trunk. If you have the time for it, Barry, I am +1 on an end of February 3.0.1 with a March/April 3.1 if that works for Benjamin. -Brett From martin at v.loewis.de Tue Jan 27 20:40:06 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 20:40:06 +0100 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> Message-ID: <497F6316.60902@v.loewis.de> > Hm. This would change the pickling format though. Wouldn't just > interning (short) strings on unpickling be simpler? Sure - that's what Jake had proposed. However, it is always difficult to select which strings to intern - his heuristics (IIUC) is to intern all strings that appear as dictionary keys. Whether this is good enough, I don't know. In particular, it might intern very large strings that aren't identifiers at all. 
Regards, Martin From barry at python.org Tue Jan 27 20:52:57 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 27 Jan 2009 14:52:57 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <66CA6609-A9B6-4544-BDB4-9B5E0F618446@python.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 27, 2009, at 2:39 PM, Brett Cannon wrote: >> I was going to object on principle to Raymond's suggestion to rip >> out the >> operator module functions in Python 3.0.1. > > I thought it was for 3.1? Sorry, I probably misread Raymond's suggestion. >> I have no objection to ripping >> them out for 3.1. >> >> If you really think we need a Python 3.1 soon, then I won't worry >> about >> trying to get a 3.0.1 out soon. 3.1 is Benjamin's baby :). >> > > Depending on what Benjamin wants to do we could try for something like > a release by PyCon or at PyCon during the sprints. Actually the sprint > one is a rather nice idea if Benjamin is willing to spend sprint time > on it (and he is sticking around for the sprints) as I assume you, > Barry, will be there to be able to help in person and we can squash > last minute issues really quickly. Yep, I'm planning on sticking around, so that's a great idea. >> If OTOH we do intend to get a 3.0.1 out, say by the end of >> February, then >> please be careful to adhere to our guidelines for which version >> various >> changes can go in. For example, the operator methods needs to be >> restored >> to the 3.0 maintenance branch, and any other API changes added to >> 3.0 need >> to be backed out and applied only to the python3 trunk. > > If you have the time for it, Barry, I am +1 on an end of February > 3.0.1 with a March/April 3.1 if that works for Benjamin. Or at least a 3.1alpha/beta/whatever during Pycon. I'm sure I can find the time to do a 3.0.1 before Pycon. 
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSX9mGXEjvBPtnXfVAQL5BgP+JXX43hbNlrjeV9YBFBbCB9SfnFlImTTx ZHhilw12yH13Ha2RLbre+sWlBDQFdTeAJkjUWg2/iZ7Ti8g9eD7sp1KRRuLkbTx0 83h+ciTd9Fdp+sv4JRKfP609X0dlAfbrjjVU/NzXCHePXb++Tr2liHRtHwnr3DgL kZNp1jOTG8Q= =nVHs -----END PGP SIGNATURE----- From guido at python.org Tue Jan 27 20:57:46 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 11:57:46 -0800 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <497F6316.60902@v.loewis.de> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> Message-ID: On Tue, Jan 27, 2009 at 11:40 AM, "Martin v. L?wis" wrote: >> Hm. This would change the pickling format though. Wouldn't just >> interning (short) strings on unpickling be simpler? > > Sure - that's what Jake had proposed. However, it is always difficult > to select which strings to intern - his heuristics (IIUC) is to intern > all strings that appear as dictionary keys. Whether this is good enough, > I don't know. In particular, it might intern very large strings that > aren't identifiers at all. Just set a size limit, e.g. 30 or 100. It's just a heuristic. I believe somewhere in Python itself I intern string literals if they are reasonably short and fit the pattern of an identifier; I'd worry that the pattern matching would slow down unpickling more than the expected benefit though, so perhaps just a size test would be better. 
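A minimal sketch of such a heuristic (the helper name and the 30-character cutoff are illustrative only; `sys.intern` is the Python 3 spelling of the `intern` builtin):

```python
import sys

MAX_INTERN_LENGTH = 30  # arbitrary cutoff, per the "e.g. 30 or 100" suggestion

def maybe_intern(key, check_identifier=False):
    """Hypothetical helper: intern short dict keys while unpickling.

    With check_identifier=True it also requires the key to look like an
    attribute name, at the cost of an extra check per key.
    """
    if not isinstance(key, str) or len(key) > MAX_INTERN_LENGTH:
        return key
    if check_identifier and not key.isidentifier():
        return key
    return sys.intern(key)
```

The size-only variant keeps the per-key cost down to a length comparison, which is the reason to prefer it over pattern matching in an unpickling hot loop.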
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Tue Jan 27 21:07:53 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 21:07:53 +0100 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> Message-ID: <497F6999.1050403@v.loewis.de> > Just set a size limit, e.g. 30 or 100. It's just a heuristic. I > believe somewhere in Python itself I intern string literals if they > are reasonably short and fit the pattern of an identifier; I'd worry > that the pattern matching would slow down unpickling more than the > expected benefit though, so perhaps just a size test would be better. Ok. So, Jake, it's back to my original request - please submit this to the tracker (preferably along with test cases). Regards, Martin From benjamin at python.org Tue Jan 27 21:22:19 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 27 Jan 2009 14:22:19 -0600 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: Message-ID: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> On Tue, Jan 27, 2009 at 1:00 PM, Raymond Hettinger wrote: > With the extensive changes in the works, Python 3.0.1 is shaping-up to be a > complete rerelease of 3.0 with API changes and major usability fixes. It > will fully supplant the original 3.0 release which was hobbled by poor IO > performance. > > I propose to make the new release more attractive by backporting several > module improvements already in 3.1, including two new itertools and one > collections class. These are already fully documented, tested, and > checked-in to 3.1 and it would be ashamed to let them sit idle for a year or > so, when the module updates are already ready-to-ship. At the moment, there are 4 release blockers for 3.0.1. 
I'd like to see 3.0.1 released soon (within the next month.) It would fix the hugest mistakes in the initial release most of which have been committed since December. I'm sure it would be attractive enough with the nasty bugs fixed in it! Let's not completely open the flood gates. Releasing 3.1 in March or April also sounds good. I will be at least at the first day of sprints. -- Regards, Benjamin From jake at youtube.com Tue Jan 27 21:25:02 2009 From: jake at youtube.com (Jake McGuire) Date: Tue, 27 Jan 2009 12:25:02 -0800 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <497F6316.60902@v.loewis.de> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> Message-ID: On Jan 27, 2009, at 11:40 AM, Martin v. Löwis wrote: >> Hm. This would change the pickling format though. Wouldn't just >> interning (short) strings on unpickling be simpler? > > Sure - that's what Jake had proposed. However, it is always difficult > to select which strings to intern - his heuristics (IIUC) is to intern > all strings that appear as dictionary keys. Whether this is good > enough, > I don't know. In particular, it might intern very large strings that > aren't identifiers at all. I may have misunderstood how unpickling works, but I believe that my patch only interns strings that are keys in a dictionary used to populate an instance. This is very similar to how instance creation and modification works in Python now. The only difference is if you set an attribute via "inst.__dict__['attribute_name'] = value" then 'attribute_name' will not be automatically interned, but if you pickle the instance, 'attribute_name' will be interned on unpickling. There may be cases where users specifically go through __dict__ to avoid interning attribute names, but I would be surprised to hear about it and very interested in talking to the person who did that.
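The asymmetry between the setattr path and direct `__dict__` stores can be checked from pure Python; a minimal sketch (CPython-specific behavior, written with the Python 3 spelling `sys.intern`; two classes are used so the instances do not share a key table):

```python
import sys

class A(object):
    pass

class B(object):
    pass

a, b = A(), B()

# Goes through PyObject_SetAttr, so the attribute name is interned.
a.long_attribute_name = 1

# Direct dict store: the freshly built, equal-but-distinct key stays as-is.
b.__dict__[''.join(['long_attribute', '_name'])] = 1

key_a = next(iter(a.__dict__))
key_b = next(iter(b.__dict__))

a_is_canonical = key_a is sys.intern(key_a)  # the interned copy itself
b_is_canonical = key_b is sys.intern(key_b)  # a stray duplicate of it
```

Here `key_a` and `key_b` compare equal, but only the setattr-created name is the canonical interned object, which is exactly the duplication the patch targets.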
Creating a new pickle protocol to handle this case seems excessive... -jake From martin at v.loewis.de Tue Jan 27 21:28:05 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 21:28:05 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> Message-ID: <497F6E55.6090608@v.loewis.de> > At the moment, there are 4 release blockers for 3.0.1. I'd like to see > 3.0.1 released soon (within the next month.) I agree. In December, there was a huge sense of urgency that we absolutely must have a 3.0.1 last year - and now people talk about giving up 3.0 entirely. Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think it should be released earlier (else 3.0 looks fairly ridiculous). Regards, Martin From bioinformed at gmail.com Tue Jan 27 21:32:00 2009 From: bioinformed at gmail.com (Kevin Jacobs ) Date: Tue, 27 Jan 2009 15:32:00 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> Message-ID: <2e1434c10901271232l139018q360e662b006c0580@mail.gmail.com> On Tue, Jan 27, 2009 at 3:22 PM, Benjamin Peterson wrote: > > At the moment, there are 4 release blockers for 3.0.1. I'd like to see > 3.0.1 released soon (within the next month.) It would fix the hugest > mistakes in the initial release most of which have been done committed > since December. I'm sure it would be attractive enough with the nasty > bugs fixed in it! Let's not completely open the flood gates. > > Releasing 3.1 in March or April also sounds good. I will be at least > at the first day of sprints. > As an interested observer, but not yet user of the 3.x series, I was wondering about progress on restoring io performance and what release those improvements were slated for. 
This is the major blocker for me to begin porting my non-numpy/scipy dependent code. Much of my current work is in bioinformatics, often dealing with multi-gigabyte datasets, so fast file io is critical. Otherwise, I'll have to live with 2.x for the indefinite future. Thanks, ~Kevin -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin at v.loewis.de Tue Jan 27 21:39:40 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 21:39:40 +0100 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> Message-ID: <497F710C.8080406@v.loewis.de> > I may have misunderstood how unpickling works Perhaps I have misunderstood your patch. Posting it to Rietveld might also be useful. Regards, Martin From guido at python.org Tue Jan 27 21:40:20 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 12:40:20 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <497F6E55.6090608@v.loewis.de> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: On Tue, Jan 27, 2009 at 12:28 PM, "Martin v. Löwis" wrote: >> At the moment, there are 4 release blockers for 3.0.1. I'd like to see >> 3.0.1 released soon (within the next month.) > > I agree. In December, there was a huge sense of urgency that we > absolutely must have a 3.0.1 last year - and now people talk about > giving up 3.0 entirely. > > Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think > it should be released earlier (else 3.0 looks fairly ridiculous). It sounds like my approval of Raymond's removal of certain (admittedly obsolete) operators from the 3.0 branch was premature. Barry at least thinks those should be rolled back. Others?
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Tue Jan 27 21:48:37 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 21:48:37 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: <497F7325.7070802@v.loewis.de> >> Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think >> it should be released earlier (else 3.0 looks fairly ridiculous). > > It sounds like my approval of Raymond's removal of certain (admittedly > obsolete) operators from the 3.0 branch was premature. Barry at least > thinks those should be rolled back. Others? I agree that not too much harm is done by removing stuff in 3.0.1 that erroneously had been left in the 3.0 release - in particular if 3.0.1 gets released quickly (e.g. within two months of the original release). If that is an acceptable policy, then those changes would fall under the policy. If the policy is *not* acceptable, a lot of changes to 3.0.1 need to be rolled back (e.g. the ongoing removal of __cmp__ fragments) Regards, Martin From python at rcn.com Tue Jan 27 22:19:21 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 13:19:21 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: From: ""Martin v. L?wis"" > Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think > it should be released earlier (else 3.0 looks fairly ridiculous). I think it should be released earlier and completely supplant 3.0 before more third-party developers spend time migrating code. We needed 3.0 to get released so we could get the feedback necessary to shake it out. Now, it is time for it to fade into history and take advantage of the lessons learned. 
The principles for the 2.x series don't really apply here. In 2.x, there was always a useful, stable, clean release already fielded and there were tons of third-party apps that needed a slow rate of change. In contrast, 3.0 has a near zero installed user base (at least in terms of being used in production). It has very few migrated apps. It is not particularly clean and some of the work for it was incomplete when it was released. My preference is to drop 3.0 entirely (no incompatible bugfix release) and in early February release 3.1 as the real 3.x that migrators ought to aim for and that won't have incompatible bugfix releases. Then at PyCon, we can have a real bug day and fix up any chips in the paint. If 3.1 goes out right away, then it doesn't matter if 3.0 looks ridiculous. All eyes go to the latest release. Better to get this done before more people download 3.0 to kick the tires. Raymond From mrts.pydev at gmail.com Tue Jan 27 22:21:44 2009 From: mrts.pydev at gmail.com (=?ISO-8859-1?Q?Mart_S=F5mermaa?=) Date: Tue, 27 Jan 2009 23:21:44 +0200 Subject: [Python-Dev] V8, TraceMonkey, SquirrelFish and Python In-Reply-To: <4222a8490901270704y41308595i2bdb65dab7bc4b07@mail.gmail.com> References: <4222a8490901270704y41308595i2bdb65dab7bc4b07@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 5:04 PM, Jesse Noller wrote: > Hi Mart, > > This is a better discussion for the python-ideas list. That being > said, there was a thread discussing this last year, see: > > http://mail.python.org/pipermail/python-dev/2008-October/083176.html > > -jesse > Indeed, sorry. Incidentally, there is a similar discussion going on just now. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Tue Jan 27 22:44:35 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 21:44:35 +0000 (UTC) Subject: [Python-Dev] IO performance References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <2e1434c10901271232l139018q360e662b006c0580@mail.gmail.com> Message-ID: Hello Kevin, > As an interested observer, but not yet user of the 3.x series, I was wondering about progress on restoring io performance and what release those improvements were slated for. There is an SVN branch with a complete rewrite (in C) of the IO stack. You can find it in branches/io-c. Apart from a problem in _ssl.c, it should be quite usable. Your tests and observations are welcome! Regards Antoine. From python at rcn.com Tue Jan 27 22:46:35 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 13:46:35 -0800 Subject: [Python-Dev] pprint(iterator) Message-ID: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> It is becoming the norm in 3.x for functions to return iterators, generators, or views whereever possible. I had a thought that pprint() ought to be taught to print iterators: pprint(enumerate(seq)) pprint(map(somefunc, somedata)) pprint(permutations(elements)) pprint(mydict.items()) Currently, all four of those will print something like: >>> pprint(d.items()) >>> pprint(enumerate(d)) If pprint() is to give a more useful result, the question is how best to represent the iterators. In the examples for itertools, I adopted the convention of displaying results like a collection with no commas or enclosing delimiters: # chain('ABC', 'DEF') --> A B C D E F The equivalent for pprint would be the same for items, using space for items on one row or using linefeeds for output too long for one row. 
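That space-separated, wrap-on-overflow convention could be sketched along these lines (an illustrative sketch with a made-up helper name, not a proposed pprint implementation):

```python
def format_items(items, width=40):
    # Show all items on one space-separated row if it fits,
    # otherwise fall back to one item per line.
    reprs = [repr(item) for item in items]
    one_row = " ".join(reprs)
    if len(one_row) <= width:
        return one_row
    return "\n".join(reprs)

print(format_items(["A", "B", "C"]))   # 'A' 'B' 'C'
```

The same helper covers the linefeed case mentioned above, since any output too long for one row switches to the multi-line form.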
Another idea is to make up an angle-bracket style to provide a visual cue for iterator output: <'A' 'B' 'C' 'D' 'E' 'F'> Perhaps with commas: <'A', 'B', 'C', 'D', 'E', 'F'> None of those ideas can be run through eval, nor do they identify the type of iterator. Perhaps these would be better: or iter(['A', 'B', 'C', 'D', 'E', 'F']) Do you guys have any thoughts on the subject? Raymond From benjamin at python.org Tue Jan 27 22:48:08 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 27 Jan 2009 15:48:08 -0600 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> On Tue, Jan 27, 2009 at 3:19 PM, Raymond Hettinger wrote: > If 3.1 goes out right away, then it doesn't matter if 3.0 looks ridiculous. > All eyes go to the latest release. Better to get this done before more > people download 3.0 to kick the tires. It seems like we are arguing over the version number of basically the same thing. I would like to see 3.0.1 released in early February for nearly the reasons you name. However, it seems to me that there are two kinds of issues: those like __cmp__ removal and some silly IO bugs that have been fixed for a while and are waiting to be released. There's also projects like io in c which are important, but would not make the schedule you and I want for 3.0.1/3.1. It's for those longer term features that I want 3.0.1 and 3.1. If we immediately released 3.1, when would those longer term projects that are important for migration make it into a stable release? 3.2 is probably a while off. 
-- Regards, Benjamin From guido at python.org Tue Jan 27 22:49:22 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 13:49:22 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: My only thought is that whatever you do, target Python 3.1, not 3.0.1. On Tue, Jan 27, 2009 at 1:46 PM, Raymond Hettinger wrote: > It is becoming the norm in 3.x for functions to return iterators, > generators, or views whereever possible. > > I had a thought that pprint() ought to be taught to print iterators: > > pprint(enumerate(seq)) > pprint(map(somefunc, somedata)) > pprint(permutations(elements)) > pprint(mydict.items()) > > Currently, all four of those will print something like: > > >>> pprint(d.items()) > > >>> pprint(enumerate(d)) > > > If pprint() is to give a more useful result, the question is how best to > represent the iterators. > > In the examples for itertools, I adopted the convention of displaying > results > like a collection with no commas or enclosing delimiters: > > # chain('ABC', 'DEF') --> A B C D E F > > The equivalent for pprint would be the same for items, using space for items > on one row or using linefeeds for output too long for one row. > > Another idea is to make-up an angle-bracket style to provide a visual cue > for iterator output: > > <'A' 'B' 'C' 'D' 'E' 'F'> > > Perhaps with commas: > > <'A', 'B', 'C', 'D', 'E', 'F'> > > None of those ideas can be run through eval, nor do they identify the type > of iterator. Perhaps these would be better: > > > > or > > iter(['A', 'B', 'C', 'D', 'E', 'F']) > > > Do you guys have any thoughts on the subject? 
> > > Raymond > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/guido%40python.org > -- --Guido van Rossum (home page: http://www.python.org/~guido/) From solipsis at pitrou.net Tue Jan 27 22:49:58 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 21:49:58 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> Message-ID: Benjamin Peterson python.org> writes: > > At the moment, there are 4 release blockers for 3.0.1. I'd like to see > 3.0.1 released soon (within the next month.) It would fix the hugest > mistakes in the initial release most of which have been done committed > since December. I'm sure it would be attractive enough with the nasty > bugs fixed in it! Let's not completely open the flood gates. > > Releasing 3.1 in March or April also sounds good. I will be at least > at the first day of sprints. +1 on all Benjamin said. The IO-in-C branch cannot be reasonably pulled in release30-maint, but it will be ready for 3.1. Speaking of which, testers are welcome (the branch is in branches/io-c). Also, I need someone to update the Windows build files. Regards Antoine. From phd at phd.pp.ru Tue Jan 27 23:06:34 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Wed, 28 Jan 2009 01:06:34 +0300 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: <20090127220634.GA16885@phd.pp.ru> On Tue, Jan 27, 2009 at 01:46:35PM -0800, Raymond Hettinger wrote: > I like the idea, and I prefer this formatting. Also bear in mind there are infinite generators, and there are iterators that cannot be reset. For infinite generators pprint() must have a parameter, say, 'max_items', and print . 
The situation with iterators that cannot be reset should be documented. Oleg. -- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From benjamin at python.org Tue Jan 27 23:12:46 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 27 Jan 2009 16:12:46 -0600 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: <1afaf6160901271412p45d5e33dw14d487c7183870a1@mail.gmail.com> On Tue, Jan 27, 2009 at 3:46 PM, Raymond Hettinger wrote: > It is becoming the norm in 3.x for functions to return iterators, > generators, or views whereever possible. > Do you guys have any thoughts on the subject? Maybe a solution like this could help with bugs like #2610? -- Regards, Benjamin From guido at python.org Tue Jan 27 23:14:13 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 14:14:13 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <20090127220634.GA16885@phd.pp.ru> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090127220634.GA16885@phd.pp.ru> Message-ID: On Tue, Jan 27, 2009 at 2:06 PM, Oleg Broytmann wrote: > On Tue, Jan 27, 2009 at 01:46:35PM -0800, Raymond Hettinger wrote: >> > > I like the idea, and I prefer this formatting. Also bear in mind there > are infinite generators, and there are iterators that cannot be reset. For > infinite generators pprint() must have a parameter, say, 'max_items', and > print . The situation with > iterators that cannot be reset should be documented. This pretty much kills the proposal. Calling a "print" function like pprint() should not have a side effect on the object being printed. I'd be okay of pprint() special-cased the views returned by e.g. dict.keys(), but if all we know is that the argument has a __next__ method, pprint() should *not* be calling that. 
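Guido's side-effect concern is easy to demonstrate: any attempt to display a plain iterator by reading items from it changes what the caller sees afterwards. A minimal illustration:

```python
it = iter([1, 2, 3])

# Suppose a printing routine "peeks" at the first item to build its output...
first = next(it)
print(first)        # 1

# ...the caller has now silently lost that item.
print(list(it))     # [2, 3]
```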
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From ncoghlan at gmail.com Tue Jan 27 23:14:37 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Wed, 28 Jan 2009 08:14:37 +1000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: <497F874D.9090202@gmail.com> Raymond Hettinger wrote: > I quite like the idea of something along those lines. For example: try: itr = iter(obj) except TypeError: pass else: return "<%s iterator>" % (obj.__class__.__name__,) Doing this only in pprint also reduces the chances of accidentally consuming an iterator (which was a reasonable objection when I suggested changing the __str__ implementation on some of the standard iterators some time ago). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From guido at python.org Tue Jan 27 23:15:07 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 14:15:07 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <1afaf6160901271412p45d5e33dw14d487c7183870a1@mail.gmail.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <1afaf6160901271412p45d5e33dw14d487c7183870a1@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 2:12 PM, Benjamin Peterson wrote: > On Tue, Jan 27, 2009 at 3:46 PM, Raymond Hettinger wrote: >> It is becoming the norm in 3.x for functions to return iterators, >> generators, or views whereever possible. > >> Do you guys have any thoughts on the subject? > > Maybe a solution like this could help with bugs like #2610? It would have to special-case range() objects. 
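Nick's iter()-based check can be made safe against the consumption problem by testing whether iter(obj) returns the object itself: one-shot iterators do, while re-iterable objects such as lists, ranges, and dict views do not. A rough sketch of that idea (a hypothetical helper, not actual pprint code):

```python
def describe(obj):
    # Non-iterables fall back to their ordinary repr.
    try:
        itr = iter(obj)
    except TypeError:
        return repr(obj)
    if itr is obj:
        # One-shot iterator: report only its type; nothing is consumed.
        return "<%s iterator>" % type(obj).__name__
    # Re-iterable (list, range, dict view, ...): safe to show normally.
    return repr(obj)

print(describe(range(3)))   # range(0, 3)
```

This also handles Guido's range() point without a special case, since iter(range(3)) is a fresh iterator object, not the range itself.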
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From jake at youtube.com Tue Jan 27 23:16:29 2009 From: jake at youtube.com (Jake McGuire) Date: Tue, 27 Jan 2009 14:16:29 -0800 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <497F710C.8080406@v.loewis.de> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> <497F710C.8080406@v.loewis.de> Message-ID: <11495282-A381-4C98-85B0-F811ECD9A072@youtube.com> On Jan 27, 2009, at 12:39 PM, Martin v. L?wis wrote: >> I may have misunderstood how unpickling works > > Perhaps I have misunderstood your patch. Posting it to Rietveld might > also be useful. It is not immediately clear to me how Rietveld works. But I have created an issue on tracker: http://bugs.python.org/issue5084 Another vaguely related change would be to store string and unicode objects in the pickler memo keyed as themselves rather than their object ids. Depending on the data set, you can have many copies of the same string, e.g. "application/octet-stream". This may marginally increase memory usage during pickling, depending on the data being pickled and the way in which the code was written. I'm happy to write this up if people are interested... -jake From python at rcn.com Tue Jan 27 23:24:50 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 14:24:50 -0800 Subject: [Python-Dev] pprint(iterator) References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: <0850D72339234C9F9CB7BF17FCAE1D78@RaymondLaptop1> [Guido van Rossum] > My only thought is that whatever you do, target Python 3.1, not 3.0.1. Of course. Do you have any thoughts on the most useful display format? What do you want to see from pprint(mydict.items())? 
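One concrete possibility for the dict-items question (purely illustrative, with a hypothetical helper name): wrap an ordinary list display in the view's type name, which identifies the kind of object while keeping a familiar format. Items are sorted here only to make the display deterministic.

```python
from pprint import pformat

def format_view(view):
    # Hypothetical: type name wrapped around a plain list display,
    # sorted for a deterministic ordering.
    return "%s(%s)" % (type(view).__name__, pformat(sorted(view)))

d = {"a": 1, "b": 2}
print(format_view(d.items()))   # dict_items([('a', 1), ('b', 2)])
```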
Raymond From python at rcn.com Tue Jan 27 23:28:44 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 14:28:44 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: [Benjamin Peterson] > It seems like we are arguing over the version number of basically the > same thing. I would like to see 3.0.1 released in early February for > nearly the reasons you name. However, it seems to me that there are > two kinds of issues: those like __cmp__ removal and some silly IO bugs > that have been fixed for a while and our waiting to be released. > There's also projects like io in c which are important, but would not > make the schedule you and I want for 3.0.1/3.1. What is involved in finishing io-in-c? ISTM, that is critical and that its absence is a serious barrier to adoption in a production environment. How far away is it? Raymond From martin at v.loewis.de Tue Jan 27 23:31:05 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 23:31:05 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: <497F8B29.5030206@v.loewis.de> > My preference is to drop 3.0 entirely (no incompatable bugfix release) > and in early February release 3.1 as the real 3.x that migrators ought > to aim for and that won't have incompatable bugfix releases. Then at > PyCon, we can have a real bug day and fix-up any chips in the paint. I would fear that than 3.1 gets the same fate as 3.0. In May, we will all think "what piece of junk was that 3.1 release, let's put it to history", and replace it with 3.2. By then, users will wonder if there is ever a 3.x release that is any good. 
Regards, Martin From martin at v.loewis.de Tue Jan 27 23:34:52 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Tue, 27 Jan 2009 23:34:52 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> Message-ID: <497F8C0C.7090502@v.loewis.de> > The IO-in-C branch cannot be reasonably pulled in release30-maint, but it will > be ready for 3.1. Even if 3.1 is released in February? Regards, Martin From brett at python.org Tue Jan 27 23:36:57 2009 From: brett at python.org (Brett Cannon) Date: Tue, 27 Jan 2009 14:36:57 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <497F8B29.5030206@v.loewis.de> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F8B29.5030206@v.loewis.de> Message-ID: On Tue, Jan 27, 2009 at 14:31, "Martin v. L?wis" wrote: >> My preference is to drop 3.0 entirely (no incompatable bugfix release) >> and in early February release 3.1 as the real 3.x that migrators ought >> to aim for and that won't have incompatable bugfix releases. Then at >> PyCon, we can have a real bug day and fix-up any chips in the paint. > > I would fear that than 3.1 gets the same fate as 3.0. In May, we will > all think "what piece of junk was that 3.1 release, let's put it to > history", and replace it with 3.2. By then, users will wonder if there > is ever a 3.x release that is any good. That's my fear as well. I have no problem doing a quick 3.0.1 release any time between now and the end of February and start with the first alpha or beta of 3.1 at PyCon. 
-Brett From guido at python.org Tue Jan 27 23:42:59 2009 From: guido at python.org (Guido van Rossum) Date: Tue, 27 Jan 2009 14:42:59 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <0850D72339234C9F9CB7BF17FCAE1D78@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <0850D72339234C9F9CB7BF17FCAE1D78@RaymondLaptop1> Message-ID: On Tue, Jan 27, 2009 at 2:24 PM, Raymond Hettinger wrote: > > [Guido van Rossum] >> >> My only thought is that whatever you do, target Python 3.1, not 3.0.1. > > Of course. > Do you have any thoughts on the most useful display format? > What do you want to see from pprint(mydict.items())? Perhaps <['a', 'b', ...]> ? The list display is familiar to everyone; the surrounding <> make it clear that it's not really a list without adding much noise. Another idea would be which helpfully includes the name of the type of the object that was passed into pprint(). Regarding range(), I wonder if we really need to show more than 'range(0, 10)' -- anything besides that would be wasteful IMO. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From janssen at parc.com Tue Jan 27 23:41:49 2009 From: janssen at parc.com (Bill Janssen) Date: Tue, 27 Jan 2009 14:41:49 PST Subject: [Python-Dev] IO performance In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <2e1434c10901271232l139018q360e662b006c0580@mail.gmail.com> Message-ID: <35189.1233096109@pippin.parc.xerox.com> Antoine Pitrou wrote: > There is an SVN branch with a complete rewrite (in C) of the IO stack. You can > find it in branches/io-c. Apart from a problem in _ssl.c, it should be quite > usable. Your tests and observations are welcome! And I'll look at that _ssl.c problem. 
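Throughput figures like the ones Antoine posted come from timing reads of a fixed-size file at different chunk sizes; a rough sketch of such a harness (not the actual benchmark script behind the numbers above):

```python
import os
import tempfile
import time

def read_throughput(path, chunk_size):
    """Return sequential read throughput in MB/s for the given chunk size."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            total += len(data)
    # Guard against a zero elapsed time on coarse clocks.
    elapsed = max(time.time() - start, 1e-9)
    return total / elapsed / 1e6

# Time a 400KB file read one unit, 20 units, and 4096 units at a time.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 400000)
os.close(fd)
try:
    for size in (1, 20, 4096):
        print("read %4d units at a time... %.3g MB/s"
              % (size, read_throughput(path, size)))
finally:
    os.remove(path)
```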
Bill From solipsis at pitrou.net Tue Jan 27 23:44:49 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 22:44:49 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: Raymond Hettinger rcn.com> writes: > > What is involved in finishing io-in-c? Off the top of my head: - fix the _ssl bug which prevents some tests from passing (issue #4967) - clean up io.py (and decide what to do with the remaining Python code: basically, the parts of StringIO which are implemented in Python) - of course, test in various situations, review the code, suggest possible improvements... Now here are some performance figures. Text I/O is done in utf-8 with universal newlines enabled: === I/O in C === ** Binary input ** [ 400KB ] read one unit at a time... 1.64 MB/s [ 400KB ] read 20 units at a time... 27.2 MB/s [ 400KB ] read 4096 units at a time... 845 MB/s [ 20KB ] read whole contents at once... 924 MB/s [ 400KB ] read whole contents at once... 883 MB/s [ 10MB ] read whole contents at once... 980 MB/s [ 400KB ] seek forward one unit at a time... 0.528 MB/s [ 400KB ] seek forward 1000 units at a time... 516 MB/s [ 400KB ] alternate read & seek one unit... 1.33 MB/s [ 400KB ] alternate read & seek 1000 units... 490 MB/s ** Text input ** [ 400KB ] read one unit at a time... 2.28 MB/s [ 400KB ] read 20 units at a time... 29.2 MB/s [ 400KB ] read one line at a time... 71.7 MB/s [ 400KB ] read 4096 units at a time... 97.4 MB/s [ 20KB ] read whole contents at once... 108 MB/s [ 400KB ] read whole contents at once... 112 MB/s [ 10MB ] read whole contents at once... 89.7 MB/s [ 400KB ] seek forward one unit at a time... 0.0904 MB/s [ 400KB ] seek forward 1000 units at a time... 87.4 MB/s ** Binary append ** [ 20KB ] write one unit at a time... 0.668 MB/s [ 400KB ] write 20 units at a time... 
12.2 MB/s [ 400KB ] write 4096 units at a time... 722 MB/s [ 10MB ] write 1e6 units at a time... 1529 MB/s ** Text append ** [ 20KB ] write one unit at a time... 0.983 MB/s [ 400KB ] write 20 units at a time... 16 MB/s [ 400KB ] write 4096 units at a time... 236 MB/s [ 10MB ] write 1e6 units at a time... 261 MB/s ** Binary overwrite ** [ 20KB ] modify one unit at a time... 0.677 MB/s [ 400KB ] modify 20 units at a time... 12.1 MB/s [ 400KB ] modify 4096 units at a time... 382 MB/s [ 400KB ] alternate write & seek one unit... 0.212 MB/s [ 400KB ] alternate write & seek 1000 units... 173 MB/s [ 400KB ] alternate read & write one unit... 0.827 MB/s [ 400KB ] alternate read & write 1000 units... 276 MB/s ** Text overwrite ** [ 20KB ] modify one unit at a time... 0.296 MB/s [ 400KB ] modify 20 units at a time... 5.69 MB/s [ 400KB ] modify 4096 units at a time... 151 MB/s === I/O in Python (branches/py3k) === ** Binary input ** [ 400KB ] read one unit at a time... 0.174 MB/s [ 400KB ] read 20 units at a time... 3.44 MB/s [ 400KB ] read 4096 units at a time... 246 MB/s [ 20KB ] read whole contents at once... 443 MB/s [ 400KB ] read whole contents at once... 216 MB/s [ 10MB ] read whole contents at once... 274 MB/s [ 400KB ] seek forward one unit at a time... 0.188 MB/s [ 400KB ] seek forward 1000 units at a time... 182 MB/s [ 400KB ] alternate read & seek one unit... 0.0821 MB/s [ 400KB ] alternate read & seek 1000 units... 81.2 MB/s ** Text input ** [ 400KB ] read one unit at a time... 0.218 MB/s [ 400KB ] read 20 units at a time... 3.8 MB/s [ 400KB ] read one line at a time... 3.69 MB/s [ 400KB ] read 4096 units at a time... 34.9 MB/s [ 20KB ] read whole contents at once... 70.5 MB/s [ 400KB ] read whole contents at once... 81 MB/s [ 10MB ] read whole contents at once... 68.7 MB/s [ 400KB ] seek forward one unit at a time... 0.0709 MB/s [ 400KB ] seek forward 1000 units at a time... 67.3 MB/s ** Binary append ** [ 20KB ] write one unit at a time... 
0.15 MB/s [ 400KB ] write 20 units at a time... 2.88 MB/s [ 400KB ] write 4096 units at a time... 346 MB/s [ 10MB ] write 1e6 units at a time... 728 MB/s ** Text append ** [ 20KB ] write one unit at a time... 0.0814 MB/s [ 400KB ] write 20 units at a time... 1.51 MB/s [ 400KB ] write 4096 units at a time... 118 MB/s [ 10MB ] write 1e6 units at a time... 218 MB/s ** Binary overwrite ** [ 20KB ] modify one unit at a time... 0.123 MB/s [ 400KB ] modify 20 units at a time... 2.34 MB/s [ 400KB ] modify 4096 units at a time... 213 MB/s [ 400KB ] alternate write & seek one unit... 0.0816 MB/s [ 400KB ] alternate write & seek 1000 units... 71.4 MB/s [ 400KB ] alternate read & write one unit... 0.0448 MB/s [ 400KB ] alternate read & write 1000 units... 41.1 MB/s ** Text overwrite ** [ 20KB ] modify one unit at a time... 0.0723 MB/s [ 400KB ] modify 20 units at a time... 1.36 MB/s [ 400KB ] modify 4096 units at a time... 88.3 MB/s Regards Antoine. From benjamin at python.org Tue Jan 27 23:48:33 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 27 Jan 2009 16:48:33 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: <1afaf6160901271448x2a80d172rd9b7c814e5c510c6@mail.gmail.com> On Tue, Jan 27, 2009 at 4:44 PM, Antoine Pitrou wrote: > Raymond Hettinger rcn.com> writes: >> >> What is involved in finishing io-in-c? > > Off the top of my head: > - fix the _ssl bug which prevents some tests from passing (issue #4967) > - clean up io.py (and decide what to do with the remaining Python code: > basically, the parts of StringIO which are implemented in Python) > - of course, test in various situations, review the code, suggest possible > improvements... There are also several IO bugs that should be fixed before it becomes official like #5006. 
> > Now here are some performance figures. Text I/O is done in utf-8 with universal > newlines enabled: -- Regards, Benjamin From daniel at stutzbachenterprises.com Tue Jan 27 23:48:38 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 27 Jan 2009 16:48:38 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 4:44 PM, Antoine Pitrou wrote: > Now here are some performance figures. Text I/O is done in utf-8 with > universal > newlines enabled: > Would it be much trouble to also compare performance with Python 2.6? -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Tue Jan 27 23:46:43 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 22:46:43 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F8C0C.7090502@v.loewis.de> Message-ID: Martin v. L?wis v.loewis.de> writes: > > > The IO-in-C branch cannot be reasonably pulled in release30-maint, but it will > > be ready for 3.1. > > Even if 3.1 is released in February? No, unless we take some risks and rush it in. (technically, it seems to work, but it's such a critical piece of code that it would be nice to let it rest a little) Regards Antoine. 
From solipsis at pitrou.net Tue Jan 27 23:54:32 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 22:54:32 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: Daniel Stutzbach stutzbachenterprises.com> writes: > > Would it be much trouble to also compare performance with Python 2.6? Here are the results on trunk. Keep in mind Text IO, while it's still `open("r", filename)`, does not mean the same thing. === 2.7 I/O (trunk) === ** Binary input ** [ 400KB ] read one unit at a time... 1.48 MB/s [ 400KB ] read 20 units at a time... 29.2 MB/s [ 400KB ] read 4096 units at a time... 1038 MB/s [ 20KB ] read whole contents at once... 1145 MB/s [ 400KB ] read whole contents at once... 891 MB/s [ 10MB ] read whole contents at once... 966 MB/s [ 400KB ] seek forward one unit at a time... 0.893 MB/s [ 400KB ] seek forward 1000 units at a time... 568 MB/s [ 400KB ] alternate read & seek one unit... 1.11 MB/s [ 400KB ] alternate read & seek 1000 units... 563 MB/s ** Text input ** [ 400KB ] read one unit at a time... 1.41 MB/s [ 400KB ] read 20 units at a time... 28.4 MB/s [ 400KB ] read one line at a time... 207 MB/s [ 400KB ] read 4096 units at a time... 1060 MB/s [ 20KB ] read whole contents at once... 1196 MB/s [ 400KB ] read whole contents at once... 841 MB/s [ 10MB ] read whole contents at once... 966 MB/s [ 400KB ] seek forward one unit at a time... 0.873 MB/s [ 400KB ] seek forward 1000 units at a time... 589 MB/s ** Binary append ** [ 20KB ] write one unit at a time... 0.887 MB/s [ 400KB ] write 20 units at a time... 15.8 MB/s [ 400KB ] write 4096 units at a time... 1071 MB/s [ 10MB ] write 1e6 units at a time... 1523 MB/s ** Text append ** [ 20KB ] write one unit at a time... 1.33 MB/s [ 400KB ] write 20 units at a time... 
22.9 MB/s [ 400KB ] write 4096 units at a time... 1244 MB/s [ 10MB ] write 1e6 units at a time... 1540 MB/s ** Binary overwrite ** [ 20KB ] modify one unit at a time... 0.867 MB/s [ 400KB ] modify 20 units at a time... 15.3 MB/s [ 400KB ] modify 4096 units at a time... 446 MB/s [ 400KB ] alternate write & seek one unit... 0.237 MB/s [ 400KB ] alternate write & seek 1000 units... 151 MB/s [ 400KB ] alternate read & write one unit... 0.221 MB/s [ 400KB ] alternate read & write 1000 units... 153 MB/s ** Text overwrite ** [ 20KB ] modify one unit at a time... 1.32 MB/s [ 400KB ] modify 20 units at a time... 22.5 MB/s [ 400KB ] modify 4096 units at a time... 509 MB/s From python at rcn.com Tue Jan 27 23:57:48 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 14:57:48 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> [Antoine Pitrou] > Now here are some performance figures. Text I/O is done in utf-8 with universal > newlines enabled: That's a substantial boost. How does it compare to Py2.x equivalents? Raymond From barry at python.org Tue Jan 27 23:59:52 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 27 Jan 2009 17:59:52 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F8B29.5030206@v.loewis.de> Message-ID: <98A7C5B2-45BE-4335-BAA2-4698A4E971CD@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 27, 2009, at 5:36 PM, Brett Cannon wrote: > On Tue, Jan 27, 2009 at 14:31, "Martin v. 
Löwis" > wrote: >>> My preference is to drop 3.0 entirely (no incompatible bugfix >>> release) >>> and in early February release 3.1 as the real 3.x that migrators >>> ought >>> to aim for and that won't have incompatible bugfix releases. Then >>> at >>> PyCon, we can have a real bug day and fix-up any chips in the paint. >> >> I would fear that then 3.1 gets the same fate as 3.0. In May, we will >> all think "what piece of junk was that 3.1 release, let's put it to >> history", and replace it with 3.2. By then, users will wonder if >> there >> is ever a 3.x release that is any good. > > That's my fear as well. I have no problem doing a quick 3.0.1 release > any time between now and the end of February and start with the first > alpha or beta of 3.1 at PyCon. +1 Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSX+R6HEjvBPtnXfVAQLQBwQAuJfVHtKQRqptjl1Hlkz37RSqMnCGNE/f Fm2JmulfWbtlZgeZ+YgBMyPw2jGpmkSp/zB0aThuBNRrtcEPOnO0nFKxWwcFwBa/ ZddlM9RJvb+GgBPNOjnSXNSJcYmNLwea7GuKPkTVmkb9nH0JLOnk2dLVTGjJ89Q4 F3qsGz5coEc= =gUH4 -----END PGP SIGNATURE----- From brett at python.org Wed Jan 28 00:02:13 2009 From: brett at python.org (Brett Cannon) Date: Tue, 27 Jan 2009 15:02:13 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 14:44, Antoine Pitrou wrote: > Raymond Hettinger rcn.com> writes: >> >> What is involved in finishing io-in-c? > > Off the top of my head: > - fix the _ssl bug which prevents some tests from passing (issue #4967) > - clean up io.py (and decide what to do with the remaining Python code: > basically, the parts of StringIO which are implemented in Python) The other VMs might appreciate the code being available and used if _io is not available for import.
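A minimal sketch of that accelerator-plus-fallback shape (the class and the `_fastcounter` module name are invented for illustration; `heapq.py` uses the same pattern, guarding `from _heapq import *` with an ImportError handler):

```python
# Accelerator-with-fallback sketch. The pure-Python version is always
# defined; a C version, when importable, silently replaces it.
class PyCounter:
    """Pure-Python reference implementation, usable on any VM."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

try:
    # Hypothetical C accelerator module (does not actually exist).
    from _fastcounter import Counter
except ImportError:
    Counter = PyCounter  # VMs without the C module get the Python code

c = Counter()
c.increment()
c.increment()
```

Callers only ever import the public name, so whether the C module exists is invisible to them.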
If you need help on how to have the tests run twice, once on the Python code and again on the C code, you can look at test_heapq and test_warnings for approaches. > - of course, test in various situations, review the code, suggest possible > improvements... > > Now here are some performance figures. Text I/O is done in utf-8 with universal > newlines enabled: > That is impressive! Congrats to you and (I think) Amaury for all the hard work you guys have put in. -Brett > > === I/O in C === > > ** Binary input ** > > [ 400KB ] read one unit at a time... 1.64 MB/s > [ 400KB ] read 20 units at a time... 27.2 MB/s > [ 400KB ] read 4096 units at a time... 845 MB/s > > [ 20KB ] read whole contents at once... 924 MB/s > [ 400KB ] read whole contents at once... 883 MB/s > [ 10MB ] read whole contents at once... 980 MB/s > > [ 400KB ] seek forward one unit at a time... 0.528 MB/s > [ 400KB ] seek forward 1000 units at a time... 516 MB/s > [ 400KB ] alternate read & seek one unit... 1.33 MB/s > [ 400KB ] alternate read & seek 1000 units... 490 MB/s > > ** Text input ** > > [ 400KB ] read one unit at a time... 2.28 MB/s > [ 400KB ] read 20 units at a time... 29.2 MB/s > [ 400KB ] read one line at a time... 71.7 MB/s > [ 400KB ] read 4096 units at a time... 97.4 MB/s > > [ 20KB ] read whole contents at once... 108 MB/s > [ 400KB ] read whole contents at once... 112 MB/s > [ 10MB ] read whole contents at once... 89.7 MB/s > > [ 400KB ] seek forward one unit at a time... 0.0904 MB/s > [ 400KB ] seek forward 1000 units at a time... 87.4 MB/s > > ** Binary append ** > > [ 20KB ] write one unit at a time... 0.668 MB/s > [ 400KB ] write 20 units at a time... 12.2 MB/s > [ 400KB ] write 4096 units at a time... 722 MB/s > [ 10MB ] write 1e6 units at a time... 1529 MB/s > > ** Text append ** > > [ 20KB ] write one unit at a time... 0.983 MB/s > [ 400KB ] write 20 units at a time... 16 MB/s > [ 400KB ] write 4096 units at a time... 236 MB/s > [ 10MB ] write 1e6 units at a time... 
261 MB/s > > ** Binary overwrite ** > > [ 20KB ] modify one unit at a time... 0.677 MB/s > [ 400KB ] modify 20 units at a time... 12.1 MB/s > [ 400KB ] modify 4096 units at a time... 382 MB/s > > [ 400KB ] alternate write & seek one unit... 0.212 MB/s > [ 400KB ] alternate write & seek 1000 units... 173 MB/s > [ 400KB ] alternate read & write one unit... 0.827 MB/s > [ 400KB ] alternate read & write 1000 units... 276 MB/s > > ** Text overwrite ** > > [ 20KB ] modify one unit at a time... 0.296 MB/s > [ 400KB ] modify 20 units at a time... 5.69 MB/s > [ 400KB ] modify 4096 units at a time... 151 MB/s > > > === I/O in Python (branches/py3k) === > > ** Binary input ** > > [ 400KB ] read one unit at a time... 0.174 MB/s > [ 400KB ] read 20 units at a time... 3.44 MB/s > [ 400KB ] read 4096 units at a time... 246 MB/s > > [ 20KB ] read whole contents at once... 443 MB/s > [ 400KB ] read whole contents at once... 216 MB/s > [ 10MB ] read whole contents at once... 274 MB/s > > [ 400KB ] seek forward one unit at a time... 0.188 MB/s > [ 400KB ] seek forward 1000 units at a time... 182 MB/s > [ 400KB ] alternate read & seek one unit... 0.0821 MB/s > [ 400KB ] alternate read & seek 1000 units... 81.2 MB/s > > ** Text input ** > > [ 400KB ] read one unit at a time... 0.218 MB/s > [ 400KB ] read 20 units at a time... 3.8 MB/s > [ 400KB ] read one line at a time... 3.69 MB/s > [ 400KB ] read 4096 units at a time... 34.9 MB/s > > [ 20KB ] read whole contents at once... 70.5 MB/s > [ 400KB ] read whole contents at once... 81 MB/s > [ 10MB ] read whole contents at once... 68.7 MB/s > > [ 400KB ] seek forward one unit at a time... 0.0709 MB/s > [ 400KB ] seek forward 1000 units at a time... 67.3 MB/s > > ** Binary append ** > > [ 20KB ] write one unit at a time... 0.15 MB/s > [ 400KB ] write 20 units at a time... 2.88 MB/s > [ 400KB ] write 4096 units at a time... 346 MB/s > [ 10MB ] write 1e6 units at a time... 
728 MB/s > > ** Text append ** > > [ 20KB ] write one unit at a time... 0.0814 MB/s > [ 400KB ] write 20 units at a time... 1.51 MB/s > [ 400KB ] write 4096 units at a time... 118 MB/s > [ 10MB ] write 1e6 units at a time... 218 MB/s > > ** Binary overwrite ** > > [ 20KB ] modify one unit at a time... 0.123 MB/s > [ 400KB ] modify 20 units at a time... 2.34 MB/s > [ 400KB ] modify 4096 units at a time... 213 MB/s > > [ 400KB ] alternate write & seek one unit... 0.0816 MB/s > [ 400KB ] alternate write & seek 1000 units... 71.4 MB/s > [ 400KB ] alternate read & write one unit... 0.0448 MB/s > [ 400KB ] alternate read & write 1000 units... 41.1 MB/s > > ** Text overwrite ** > > [ 20KB ] modify one unit at a time... 0.0723 MB/s > [ 400KB ] modify 20 units at a time... 1.36 MB/s > [ 400KB ] modify 4096 units at a time... 88.3 MB/s > > Regards > > Antoine. > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/brett%40python.org > From barry at python.org Wed Jan 28 00:04:01 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 27 Jan 2009 18:04:01 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <497F7325.7070802@v.loewis.de> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F7325.7070802@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 27, 2009, at 3:48 PM, Martin v. Löwis wrote: >>> Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think >>> it should be released earlier (else 3.0 looks fairly ridiculous). >> >> It sounds like my approval of Raymond's removal of certain >> (admittedly >> obsolete) operators from the 3.0 branch was premature. Barry at least >> thinks those should be rolled back. Others?
> I agree that not too much harm is done by removing stuff in 3.0.1 that > erroneously had been left in the 3.0 release - in particular if 3.0.1 > gets released quickly (e.g. within two months of the original > release). > > If that is an acceptable policy, then those changes would fall under > the policy. If the policy is *not* acceptable, a lot of changes to > 3.0.1 need to be rolled back (e.g. the ongoing removal of __cmp__ > fragments) I have no problem with removing things that were advertised and/or documented to be removed in 3.0 but accidentally were not. That seems like a reasonable policy to me. However, if we did not tell people that something was going to be removed, then I don't think we can really remove it in 3.0. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSX+S4nEjvBPtnXfVAQIjuQQAucsAp79ZtlcOq1GPiwDaEoYMKTEgkkNp hLgdDW85ktmFf0xHl/KAU8lcxeaiWGepefsRxsx7c5fX6UIVZPUHDvkDkf5rImx6 wg7Nin2MirLT/lXY7a8//N+5TwLqIBTLLEfAIAFvDhrQT/CuMfZej7leB7BAd7Ti puLWYYYUL+M= =pK8E -----END PGP SIGNATURE----- From janssen at parc.com Wed Jan 28 00:14:46 2009 From: janssen at parc.com (Bill Janssen) Date: Tue, 27 Jan 2009 15:14:46 PST Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: <35670.1233098086@pippin.parc.xerox.com> > - fix the _ssl bug which prevents some tests from passing (issue #4967) I see you've already got a patch for this. I'll try it out. Bill From python at rcn.com Wed Jan 28 00:16:48 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 15:16:48 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F8B29.5030206@v.loewis.de> Message-ID: [Martin] > I would fear that then 3.1 gets the same fate as 3.0.
In May, we will > all think "what piece of junk was that 3.1 release, let's put it to > history", and replace it with 3.2. By then, users will wonder if there > is ever a 3.x release that is any good. I thought the gist of Guido's idea was to label 3.0.1 as 3.1 to emphasize the magnitude of differences from 3.0. That seemed like a good idea to me. But I'm happy no matter what you want to call it. The important thing is that the bugfixes go in and the half-started removals get finished. I would like the next release (whatever it is called) to include the IO speedups which will help remove a barrier to adoption for serious use. I do hope the next release goes out as soon as possible. I use 3.0 daily and my impression is that the current version needs to be replaced as soon as possible. If it gets called 3.1, the nice side effect for me is that my itertools updates get fielded a bit sooner. But that is a somewhat unimportant consideration. I really have no opinion on what the next release gets called. Raymond From python at rcn.com Wed Jan 28 00:21:05 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 15:21:05 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> Message-ID: If something gets left in 3.0.1 and then ripped out in 3.1, I think we're doing more harm than good. Very little code has been ported to 3.0 so far. Once there is a base, all changes become more difficult. In the interests of our users, I vote for sooner than later. Also, 3.0 is a special case because it is IMO a broken release. AFAICT, it is not in any distro yet. Hopefully, no one will keep it around and it will vanish silently. Raymond ----- Original Message ----- I have no problem with removing things that were advertised and/or documented to be removed in 3.0 but accidentally were not. That seems like a reasonable policy to me.
However, if we did not tell people that something was going to be removed, then I don't think we can really remove it in 3.0. Barry From solipsis at pitrou.net Wed Jan 28 00:25:38 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 23:25:38 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> Message-ID: Raymond Hettinger rcn.com> writes: > > Also, 3.0 is a special case because it is IMO a broken release. > AFAICT, it is not in any distro yet. I have access to an Ubuntu 8.10 box and: $ apt-cache search python3.0 idle-python3.0 - An IDE for Python (v3.0) using Tkinter libpython3.0 - Shared Python runtime library (version 3.0) python3-all - Package depending on all supported Python runtime versions python3-all-dbg - Package depending on all supported Python debugging packages python3-all-dev - Package depending on all supported Python development packages python3-dbg - Debug Build of the Python Interpreter (version 3.0) python3.0 - An interactive high-level object-oriented language (version 3.0) python3.0-dbg - Debug Build of the Python Interpreter (version 3.0) python3.0-dev - Header files and a static library for Python (v3.0) python3.0-doc - Documentation for the high-level object-oriented language Python (v3.0) python3.0-examples - Examples for the Python language (v3.0) python3.0-minimal - A minimal subset of the Python language (version 3.0) But it's not installed by default. Regards Antoine. 
From daniel at stutzbachenterprises.com Wed Jan 28 00:28:53 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 27 Jan 2009 17:28:53 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 4:54 PM, Antoine Pitrou wrote: > Daniel Stutzbach stutzbachenterprises.com> writes: > > Would it be much trouble to also compare performance with Python 2.6? > > Here are the results on trunk. > Thanks, Antoine! To make comparison easier, I put together the results into a Google Spreadsheet: http://spreadsheets.google.com/pub?key=pbqSxQEo4UXwPlifXmvPHGQ Keep in mind Text IO, while it's still `open(filename, > "r")`, does not mean the same thing. That's because in Python 3, the Text IO has to convert to Unicode, correct? -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.brandl at gmx.net Wed Jan 28 00:35:48 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Wed, 28 Jan 2009 00:35:48 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F8B29.5030206@v.loewis.de> Message-ID: Raymond Hettinger schrieb: > [Martin] >> I would fear that then 3.1 gets the same fate as 3.0. In May, we will >> all think "what piece of junk was that 3.1 release, let's put it to >> history", and replace it with 3.2. By then, users will wonder if there >> is ever a 3.x release that is any good. > > I thought the gist of Guido's idea was to label 3.0.1 as 3.1 to emphasize > the magnitude of differences from 3.0. That seemed like a good idea > to me. But I'm happy no matter what you want to call it.
The important > thing is that the bugfixes go in and the half-started removals get finished. > I would like the next release (whatever it is called) to include the IO > speedups which will help remove a barrier to adoption for serious use. FWIW, I completely agree here. > I do hope the next release goes out as soon as possible. I use 3.0 daily > and my impression is that the current version needs to be replaced as soon > as possible. That's important to note: I do not use Python 3.x productively in any way, other than trying to port a bit of a library every now and then, and I expect that many others here are in the same position. In these matters, we should give more weight to what *actual users* like Raymond think. It's a great thing that we actually got 3.0 out, and didn't stall somewhere along the way, but the next step is to make sure it gets accepted and used, and doesn't get abandoned for a long time because of policies that come from the 2.x branch but might not be healthy for 3.x. Georg From benjamin at python.org Wed Jan 28 00:37:52 2009 From: benjamin at python.org (Benjamin Peterson) Date: Tue, 27 Jan 2009 17:37:52 -0600 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F7325.7070802@v.loewis.de> Message-ID: <1afaf6160901271537t5017f65u57c3ce09db18f83c@mail.gmail.com> On Tue, Jan 27, 2009 at 5:04 PM, Barry Warsaw wrote: > I have no problem with removing things that were advertised and/or > documented to be removed in 3.0 but accidentally were not. That seems like > a reasonable policy to me. However, if we did not tell people that > something was going to be removed, then I don't think we can really remove > it in 3.0. As others have said, this would technically include cmp() removal. In the 2.x docs, there are big warnings by the operator functions and a suggestion to use ABCs. We also already have a 2to3 fixer for the module. 
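For anyone following along, the removed three-way comparison builtin has a well-known one-line replacement; a sketch (the `cmp` defined here is a user-level stand-in, not the 2.x builtin itself):

```python
def cmp(a, b):
    """Three-way compare like the 2.x builtin removed in 3.0:
    -1 if a < b, 0 if a == b, 1 if a > b."""
    # bool values subtract as ints, covering all three outcomes
    return (a > b) - (a < b)

print(cmp(1, 2))  # -> -1
print(cmp(2, 2))  # -> 0
print(cmp(3, 2))  # -> 1
```

This is the idiom code being ported typically drops in wherever it relied on the builtin.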
-- Regards, Benjamin From solipsis at pitrou.net Wed Jan 28 00:44:55 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Tue, 27 Jan 2009 23:44:55 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: Daniel Stutzbach stutzbachenterprises.com> writes: > > Thanks, Antoine! To make comparison easier, I put together the results into a Google Spreadsheet: http://spreadsheets.google.com/pub?key=pbqSxQEo4UXwPlifXmvPHGQ Thanks, that's much more readable indeed. > That's because in Python 3, the Text IO has to convert to Unicode, correct? Yes, exactly. Regards Antoine. From barry at python.org Wed Jan 28 00:56:36 2009 From: barry at python.org (Barry Warsaw) Date: Tue, 27 Jan 2009 18:56:36 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> Message-ID: <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 27, 2009, at 6:21 PM, Raymond Hettinger wrote: > If something gets left in 3.0.1 and then ripped out in 3.1, I think > we're > doing more harm than good. Very little code has been ported to 3.0 > so far. Once there is a base, all changes become more difficult. > > In the interests of our users, I vote for sooner than later. > > Also, 3.0 is a special case because it is IMO a broken release. > AFAICT, it is not in any distro yet. Hopefully, no one will keep it > around > and it will vanish silently. I stand by my opinion about the right way to do this. I also think that a 3.1 release 6 months after 3.0 is perfectly fine and serves our users just as well.
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSX+fNnEjvBPtnXfVAQJO1QQAmRVH0tslNfRfpQsC+2jlJu5uljOVvuvN uE3/HFktxLUr6NPdOk+Ir1r2p4mQ5iXFlZbJvOSNckM3UYSFkeKmS/T0nVJzqx89 +23sv7UC2Qf8zJRJBEhzuePT1iAE8OybRH1Vxql9ka8FVzCrZHt2JhnRZUmHNblT Y2d92iL7eqE= =Qzdr -----END PGP SIGNATURE----- From goodger at python.org Wed Jan 28 01:04:05 2009 From: goodger at python.org (David Goodger) Date: Tue, 27 Jan 2009 19:04:05 -0500 Subject: [Python-Dev] PyCon 2009 registration is now open! Message-ID: <4335d2c40901271604j1d86cce0yb07ff8ac7e04f8f4@mail.gmail.com> Register here: http://us.pycon.org/2009/register/ Information (rates etc.): http://us.pycon.org/2009/registration/ Hotel information & reservations: http://us.pycon.org/2009/about/hotel/ Early bird registration ends February 21, so don't delay! -- David Goodger, PyCon 2009 Chair From daniel at stutzbachenterprises.com Wed Jan 28 01:04:30 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 27 Jan 2009 18:04:30 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 5:44 PM, Antoine Pitrou wrote: > Daniel Stutzbach stutzbachenterprises.com> writes: > > That's because in Python 3, the Text IO has to convert to Unicode, > correct? > > Yes, exactly. > What kind of input are you using for the Text tests? I'm kind of surprised that the conversion to Unicode results in such a dramatic slowdown, if you're feeding it plain text (characters 0x00 through 0x7f). -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From solipsis at pitrou.net Wed Jan 28 01:15:27 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 00:15:27 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: Daniel Stutzbach stutzbachenterprises.com> writes: > > What kind of input are you using for the Text tests? I'm kind of surprised that the conversion to Unicode results in such a dramatic slowdown, if you're feeding it plain text (characters 0x00 through 0x7f). It's some arbitrary text composed of 95% ASCII characters and 5% non-ASCII. On this specific example, utf8 decodes at around 250 MB/s, latin1 at almost 1 GB/s (on the same machine on which I ran the benchmarks). You can find the test here: http://svn.python.org/view/sandbox/trunk/iobench/ From daniel at stutzbachenterprises.com Wed Jan 28 01:30:08 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Tue, 27 Jan 2009 18:30:08 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: On Tue, Jan 27, 2009 at 6:15 PM, Antoine Pitrou wrote: > It's some arbitrary text composed of 95% ASCII characters and 5% non-ASCII. > On > this specific example, utf8 decodes at around 250 MB/s, latin1 at almost 1 > GB/s > (on the same machine on which I ran the benchmarks). > For the "10MB whole contents at once" test, we then have: (assuming the code does no pipelining of disk I/O with decoding) 10MB / 980MB/s to read from disk = 10 ms 10MB / 250MB/s to decode to utf8 = 40 ms 10MB / (10ms + 40ms) = 200 MB/s In practice, your results show around 90 MB/s. That's at least vaguely in the same ballpark. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed...
URL: From solipsis at pitrou.net Wed Jan 28 01:39:37 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 00:39:37 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> Message-ID: Daniel Stutzbach stutzbachenterprises.com> writes: > For the "10MB whole contents at once" test, we then have: > (assuming the code does no pipelining of disk I/O with decoding) > > 10MB / 980MB/s to read from disk = 10 ms > 10MB / 250MB/s to decode to utf8 = 40 ms > 10MB / (10ms + 40ms) = 200 MB/s > > In practice, your results show around 90 MB/s. That's at least vaguely in > the same ballpark. Yes, the remaining CPU time is spent in the IncrementalNewlineDecoder (which does universal newline translation). Antoine. From steve at holdenweb.com Wed Jan 28 01:45:22 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 27 Jan 2009 19:45:22 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: Barry Warsaw wrote: > On Jan 27, 2009, at 6:21 PM, Raymond Hettinger wrote: >> If something gets left in 3.0.1 and then ripped out in 3.1, I think we're >> doing more harm than good. Very little code has been ported to 3.0 >> so far. Once there is a base, all changes become more difficult. > >> In the interests of our users, I vote for sooner than later. > >> Also, 3.0 is a special case because it is IMO a broken release. >> AFAICT, it is not in any distro yet. Hopefully, no one will keep it >> around >> and it will vanish silently. > > I stand by my opinion about the right way to do this. I also think that > a 3.1 release 6 months after 3.0 is perfectly fine and serves our users > just as well.
+1 regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From victor.stinner at haypocalc.com Wed Jan 28 01:57:15 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 28 Jan 2009 01:57:15 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <1afaf6160901271448x2a80d172rd9b7c814e5c510c6@mail.gmail.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <1afaf6160901271448x2a80d172rd9b7c814e5c510c6@mail.gmail.com> Message-ID: <497FAD6B.7020608@haypocalc.com> Benjamin Peterson wrote: > There are also several IO bugs that should be fixed before it becomes > official like #5006. > I looked at this one, but I discovered another bug with f.tell(): it's now issue #5008. Now that this issue is closed, I will look again at #5006. See also #5016 (f.seekable() bug). Victor From matthew at matthewwilkes.co.uk Wed Jan 28 02:50:20 2009 From: matthew at matthewwilkes.co.uk (Matthew Wilkes) Date: Wed, 28 Jan 2009 01:50:20 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: <0959C095-CD9B-460A-B240-907A9B79696C@matthewwilkes.co.uk> On 27 Jan 2009, at 23:56, Barry Warsaw wrote: >> Also, 3.0 is a special case because it is IMO a broken release. >> AFAICT, it is not in any distro yet. Hopefully, no one will keep >> it around >> and it will vanish silently. > > I stand by my opinion about the right way to do this. I also think > that a 3.1 release 6 months after 3.0 is perfectly fine and serves > our users just as well.
I'm lurking here, as I usually have nothing to contribute, but here's my take on this: I'm generally a Python 2.4 user, but have recently been able to tinker in 2.6. I hope to be using 2.6 as my main language within a year. I anticipate dropping all 2.4 projects within 5 years. We have not yet dropped 2.3. I didn't know 3.0 is considered a broken release, but teething troubles are to be expected. Knowing this, I would be reluctant to use 3.0.1; it sounds like too small a change. If you put a lot of things into a minor point release you risk setting expectations about future ones. From the 2.x series I expect 2.x.{y,y+1} to be seamless, but 2.{x,x+1} to be more performant, include new features, and potentially break complex code. I personally would see a 3.1 with C-based IO support as being more sensible than a 3.0.1 with lots of changes. I wouldn't worry about 3.x being seen as a dead duck; as you say, it's not in wide use yet. We trust you guys: if there have been big fixes there should be a big version update. Broadcast what's been made better and it'll encourage us to try it. Matt From alexandre at peadrop.com Wed Jan 28 03:51:04 2009 From: alexandre at peadrop.com (Alexandre Vassalotti) Date: Tue, 27 Jan 2009 21:51:04 -0500 Subject: [Python-Dev] undesireable unpickle behavior, proposed fix In-Reply-To: <11495282-A381-4C98-85B0-F811ECD9A072@youtube.com> References: <2003160D-7A1C-4567-87B4-10E329E169E1@youtube.com> <497F55D2.1020805@v.loewis.de> <497F6316.60902@v.loewis.de> <497F710C.8080406@v.loewis.de> <11495282-A381-4C98-85B0-F811ECD9A072@youtube.com> Message-ID: On Tue, Jan 27, 2009 at 5:16 PM, Jake McGuire wrote: > Another vaguely related change would be to store string and unicode objects > in the pickler memo keyed as themselves rather than their object ids. That wouldn't be difficult to do--i.e., simply add a type check in Pickler.memoize and another in Pickler.save().
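A toy illustration of the memo-keying idea Jake is proposing (this is not the real pickle.Pickler internals, just the trade-off in miniature: value-keyed entries for strings, identity-keyed entries for everything else):

```python
# Toy memo: equal strings share one entry; other objects are tracked
# by identity, as the real pickler memo does for everything today.
class TinyMemo:
    def __init__(self):
        self._memo = {}

    def _key(self, obj):
        if isinstance(obj, str):
            return ('str', obj)    # keyed by value: hashes the string
        return ('id', id(obj))     # default: keyed by object identity

    def memoize(self, obj):
        self._memo.setdefault(self._key(obj), len(self._memo))

    def index(self, obj):
        return self._memo.get(self._key(obj))

memo = TinyMemo()
a = "".join(["sp", "am"])   # two equal but distinct string objects,
b = "".join(["spa", "m"])   # built at runtime to sidestep interning
memo.memoize(a)
shared = memo.index(b)      # the equal string finds the existing slot
distinct = a is b           # False: id()-keying would have missed it
```

The cost, as noted just below, is that every string pickled now gets hashed by value rather than by pointer.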
But I am not sure if that would be a good idea, since you would end up hashing every string pickled. And that would probably be expensive if you are pickling long strings. -- Alexandre From python at rcn.com Wed Jan 28 04:04:54 2009 From: python at rcn.com (Raymond Hettinger) Date: Tue, 27 Jan 2009 19:04:54 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de><2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> <0959C095-CD9B-460A-B240-907A9B79696C@matthewwilkes.co.uk> Message-ID: [Matthew Wilkes] > I didn't know 3.0 is considered a broken release, but teething > troubles are to be expected. Knowing this, I would be reluctant to > use 3.0.1, it sounds like too small a change. Not to worry. Many of the major language features are stable and many of the rough edges are quickly getting ironed out. Over time, anything that's slow will get optimized and all will be well. What we're discussing are subtleties of major vs minor releases. When the tp_compare change goes in, will it affect third-party C extensions enough to warrant a 3.1 name instead of 3.0.1? Are users better served by removing operator.isSequenceType() in 3.0.1 while there are still few early adopters and few converted third-party modules, or will we help them more by warning them in advance and waiting for 3.1? The nice thing about the IO speedups is that the API is already set and won't change. So, the speedup doesn't really affect whether the release gets named 3.0.1 or 3.1. The important part is that we get it out as soon as it's solid so that we don't preclude adoption by users who need fast IO.
Raymond From steve at holdenweb.com Wed Jan 28 04:32:04 2009 From: steve at holdenweb.com (Steve Holden) Date: Tue, 27 Jan 2009 22:32:04 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: Steve Holden wrote: > Barry Warsaw wrote: [...] >> I stand by my opinion about the right way to do this. I also think that >> a 3.1 release 6 months after 3.0 is perfectly fine and serves our users >> just as well. >> > +1 > I should have been more explicit. I think that stuff that was slated for removal in 3.0 should be removed as soon as possible, and a micro release is fine for that. ISTM that if we really cared about our users we would have got this right before we released 3.0. Since we clearly didn't, it behooves us to make sure that any 3.1 release isn't a repeat performance. There are changes that should clearly have been made before 3.0 saw the light of day, which are now being discussed for incorporation. If those changes were *supposed* to be made before 3.0 came out then they should be made as soon as possible. Waiting for a major release only encourages people to use them, and once they get used, further changes will be seen as introducing incompatibilities that we have promised would not occur. So it seems that the operator functions should stand not on the order of their going, but depart. While a quick 3.1 release might look like the best compromise for now, it cannot then be followed with a quick 3.2 release, and then we are in the territory Martin warned about. Quality is crucial after a poor initial release: we have to engender confidence in the user base that we are not dicking them around with ill-thought-out changes.
So on balance I think it might be better to live with the known inadequacies of 3.0, making small changes for 3.0.1 and possibly ignoring the policy that says we don't remove features in point releases (since they apparently should have been taken out of 3.0 but weren't). But this is only going to work if the quality of 3.1 is considerably higher than 3.0, making it worth the wait. I think that both 3.0 and 2.6 were rushed releases. 2.6 showed it in the inclusion (later recognizable as somewhat ill-advised so late in the day) of multiprocessing; 3.0 shows it in the very fact that this discussion has become necessary. So we face an important turning point: is 3.1 going to be serious production quality or not? Given that we have just been presented with a fabulous resource that could help improve delivered quality (I am talking about snakebite.org, of course) we might be well-advised to use the 3.1 release as a demonstration of how much it is going to improve the quality of delivered releases. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From Scott.Daniels at Acm.Org Wed Jan 28 08:05:32 2009 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Tue, 27 Jan 2009 23:05:32 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> Message-ID: Raymond Hettinger wrote: > > [Antoine Pitrou] >> Now here are some performance figures. Text I/O is done in utf-8 with >> universal >> newlines enabled: > > That's a substantial boost. > How does it compare to Py2.x equivalents? 
Comparison of three cases (including performance ratios):

                                          MB/S     MB/S      MB/S
                                          in C   in py3k   in 2.7    C/3k  2.7/3k
** Binary append **
10M  write 1e6 units at a time         1529.00   728.000  1523.000   2.10    2.09
20K  write one unit at a time             0.668    0.150     0.887   4.45    5.91
400K write 20 units at a time            12.200    2.880    15.800   4.24    5.49
400K write 4096 units at a time         722.00   346.000  1071.000   2.09    3.10
** Binary input **
10M  read whole contents at once        980.00   274.000   966.000   3.58    3.53
20K  read whole contents at once        924.00   443.000  1145.000   2.09    2.58
400K alternate read & seek 1000 units   490.000   81.200   563.000   6.03    6.93
400K alternate read & seek one unit       1.330    0.082     1.11   16.20   13.52
400K read 20 units at a time             27.200    3.440    29.200   7.91    8.49
400K read 4096 units at a time          845.00   246.000  1038.000   3.43    4.22
400K read one unit at a time              1.64     0.174     1.480   9.43    8.51
400K read whole contents at once        883.00   216.000   891.000   4.09    4.13
400K seek forward 1000 units at a time  516.00   182.000   568.000   2.84    3.12
400K seek forward one unit at a time      0.528    0.188     0.893   2.81    4.75
** Binary overwrite **
20K  modify one unit at a time            0.677    0.123     0.867   5.50    7.05
400K alternate read & write 1000 unit   276.000   41.100   153.000   6.72    3.72
400K alternate read & write one unit      0.827    0.045     0.22   18.46    4.93
400K alternate write & seek 1000 unit   173.000   71.400   151.000   2.42    2.11
400K alternate write & seek one unit      0.212    0.082     0.237   2.60    2.90
400K modify 20 units at a time           12.100    2.340    15.300   5.17    6.54
400K modify 4096 units at a time        382.00   213.000   446.000   1.79    2.09
** Text append **
10M  write 1e6 units at a time          261.00   218.000  1540.000   1.20    7.06
20K  write one unit at a time             0.983    0.081     1.33   12.08   16.34
400K write 20 units at a time            16.000    1.510    22.90   10.60   15.17
400K write 4096 units at a time         236.00   118.000  1244.000   2.00   10.54
** Text input **
10M  read whole contents at once         89.700   68.700   966.000   1.31   14.06
20K  read whole contents at once        108.000   70.500  1196.000   1.53   16.96
400K read 20 units at a time             29.200    3.800    28.400   7.68    7.47
400K read 4096 units at a time           97.400   34.900  1060.000   2.79   30.37
400K read one line at a time             71.700    3.690   207.00   19.43   56.10
400K read one unit at a time              2.280    0.218     1.41   10.46    6.47
400K read whole contents at once        112.000   81.000   841.000   1.38   10.38
400K seek forward 1000 units at a time   87.400   67.300   589.000   1.30    8.75
400K seek forward one unit at a time      0.090    0.071     0.873   1.28   12.31
** Text overwrite **
20K  modify one unit at a time            0.296    0.072     1.320   4.09   18.26
400K modify 20 units at a time            5.690    1.360    22.500   4.18   16.54
400K modify 4096 units at a time        151.000   88.300   509.000   1.71    5.76

--Scott David Daniels
Scott.Daniels at Acm.Org

From asmodai at in-nomine.org Wed Jan 28 08:22:20 2009 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Wed, 28 Jan 2009 08:22:20 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> Message-ID: <20090128072220.GO99614@nexus.in-nomine.org>

-On [20090128 00:21], Raymond Hettinger (python at rcn.com) wrote:
>Also, 3.0 is a special case because it is IMO a broken release.
>AFAICT, it is not in any distro yet. Hopefully, no one will keep it around
>and it will vanish silently.

It has been in FreeBSD's ports since December. Fairly good chance it is in pkgsrc also by now. Might even be that it is part of FreeBSD's 7.1-RELEASE. So I reckon with 'distro' you were speaking of Linux only?

-- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Earth to earth, ashes to ashes, dust to dust...
From asmodai at in-nomine.org Wed Jan 28 08:24:25 2009 From: asmodai at in-nomine.org (Jeroen Ruigrok van der Werven) Date: Wed, 28 Jan 2009 08:24:25 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: <20090128072425.GP99614@nexus.in-nomine.org> -On [20090128 00:57], Barry Warsaw (barry at python.org) wrote: >I stand by my opinion about the right way to do this. I also think >that a 3.1 release 6 months after 3.0 is perfectly fine and serves our >users just as well. When API fixes were mentioned, does that mean changes in the API which influence the C extension? If so, then I think a minor number update (3.1) is more warranted than a revision number update (3.0.1). -- Jeroen Ruigrok van der Werven / asmodai ????? ?????? ??? ?? ?????? http://www.in-nomine.org/ | http://www.rangaku.org/ | GPG: 2EAC625B Earth to earth, ashes to ashes, dust to dust... From tjreedy at udel.edu Wed Jan 28 08:47:54 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 28 Jan 2009 02:47:54 -0500 Subject: [Python-Dev] [PSF-Board] I've got a surprise for you! In-Reply-To: <497EF94E.3050302@holdenweb.com> References: <20090126233246.GA37662@wind.teleri.net> <497E9320.2030404@sun.com> <497EF94E.3050302@holdenweb.com> Message-ID: Steve Holden wrote: >> We now have zone servers in the OpenSolaris test farm, and >> I plan to add guest os servers in the next few weeks using >> ldoms (sparc) and xvm (x64). The zone servers provide whole >> root zones, which should be a good development environment >> for most projects. Check it out: >> >> http://test.opensolaris.org/testfarm Requires sign-in. >> http://www.opensolaris.org/os/community/testing/testfarm/zones/ Freely readable. 
From python at rcn.com Wed Jan 28 11:03:48 2009 From: python at rcn.com (Raymond Hettinger) Date: Wed, 28 Jan 2009 02:03:48 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> Message-ID: <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1>

[Scott David Daniels]
> Comparison of three cases (including performance ratios):
>                                          MB/S     MB/S      MB/S
>                                          in C   in py3k   in 2.7    C/3k  2.7/3k
> ** Text append **
> 10M  write 1e6 units at a time          261.00   218.000  1540.000   1.20    7.06
> 20K  write one unit at a time             0.983    0.081     1.33   12.08   16.34
> 400K write 20 units at a time            16.000    1.510    22.90   10.60   15.17
> 400K write 4096 units at a time         236.00   118.000  1244.000   2.00   10.54

Do you know why the text-appends fell off so much in the 1st and last cases?

> ** Text input **
> 10M  read whole contents at once         89.700   68.700   966.000   1.31   14.06
> 20K  read whole contents at once        108.000   70.500  1196.000   1.53   16.96
...
> 400K read one line at a time             71.700    3.690   207.00   19.43   56.10
...
> 400K read whole contents at once        112.000   81.000   841.000   1.38   10.38
> 400K seek forward 1000 units at a time   87.400   67.300   589.000   1.30    8.75
> 400K seek forward one unit at a time      0.090    0.071     0.873   1.28   12.31

Looks like most of these still have substantial falloffs in performance. Is this part still a work in progress or is this as good as it's going to get?

> ** Text overwrite **
> 20K  modify one unit at a time            0.296    0.072     1.320   4.09   18.26
> 400K modify 20 units at a time            5.690    1.360    22.500   4.18   16.54
> 400K modify 4096 units at a time        151.000   88.300   509.000   1.71    5.76

Same question on this batch.
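(The per-scenario figures being discussed come from Antoine's iobench tool in the sandbox. As a rough illustration only — a simplified sketch, not the actual iobench code, and the helper name below is invented — a scenario like "read 4096 units at a time" boils down to timing something like this:)

```python
import tempfile
import time

def read_throughput_mb_s(path, chunk_size=4096, encoding="utf-8"):
    """Read a text file chunk-by-chunk and return a rough MB/s figure."""
    start = time.perf_counter()
    chars_read = 0
    with open(path, "r", encoding=encoding) as f:
        while True:
            s = f.read(chunk_size)
            if not s:
                break
            chars_read += len(s)
    elapsed = time.perf_counter() - start
    return chars_read / elapsed / 1e6

# Build a test file of a few hundred KB, mostly ASCII with some
# non-ASCII mixed in, then measure it.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                 encoding="utf-8") as tmp:
    tmp.write("mostly ascii text with a few accents: \u00e9\u00e8\n" * 10000)
    path = tmp.name

print("%.1f MB/s" % read_throughput_mb_s(path))
```

Real iobench additionally varies the open mode, the seek pattern, and the newline settings per scenario, which is where the row labels above come from.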
Raymond

From solipsis at pitrou.net Wed Jan 28 11:55:16 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 10:55:16 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> Message-ID:

Hello,

Raymond Hettinger rcn.com> writes:
> >                                          MB/S     MB/S      MB/S
> >                                          in C   in py3k   in 2.7    C/3k  2.7/3k
> > ** Text append **
> > 10M  write 1e6 units at a time          261.00   218.000  1540.000   1.20    7.06
> > 20K  write one unit at a time             0.983    0.081     1.33   12.08   16.34
> > 400K write 20 units at a time            16.000    1.510    22.90   10.60   15.17
> > 400K write 4096 units at a time         236.00   118.000  1244.000   2.00   10.54
>
> Do you know why the text-appends fell off so much in the 1st and last cases?

When writing large chunks of text (4096, 1e6), bookkeeping costs become marginal and encoding costs dominate. 2.x has no encoding costs, which explains why it's so much faster. A quick test tells me utf-8 encoding runs at 280 MB/s on this dataset (the 400KB text file). You see that there is not much left to optimize on large writes.

> > ** Text input **
> > 10M  read whole contents at once         89.700   68.700   966.000   1.31   14.06
> > 20K  read whole contents at once        108.000   70.500  1196.000   1.53   16.96
> ...
> > 400K read one line at a time             71.700    3.690   207.00   19.43   56.10
> ...
> > 400K read whole contents at once        112.000   81.000   841.000   1.38   10.38
> > 400K seek forward 1000 units at a time   87.400   67.300   589.000   1.30    8.75
> > 400K seek forward one unit at a time      0.090    0.071     0.873   1.28   12.31
>
> Looks like most of these still have substantial falloffs in performance.
> Is this part still a work in progress or is this as good as it's going to get?

There is nothing obvious left to optimize in the read() department.
Decoding and newline translation costs dominate. Decoding has already been optimized for the most popular encodings in py3k: http://mail.python.org/pipermail/python-checkins/2009-January/077024.html Newline translation follows a fast path depending on various heuristics. I also took particular care of the "read one line at a time" scenario because it's the most likely idiom when reading a text file. I think there is hardly anything left to optimize on this one. Your eyes are welcome, though.

Note that the benchmark is run with the following default settings for text I/O: utf-8 encoding, universal newlines enabled, text containing only "\n" newlines. You can play with settings here: http://svn.python.org/view/sandbox/trunk/iobench/

Text seek() and tell(), on the other hand, are known to be slow, and could perhaps be improved. It is assumed, however, that they won't be used a lot for text files.

> > ** Text overwrite **
> > 20K  modify one unit at a time            0.296    0.072     1.320   4.09   18.26
> > 400K modify 20 units at a time            5.690    1.360    22.500   4.18   16.54
> > 400K modify 4096 units at a time        151.000   88.300   509.000   1.71    5.76
>
> Same question on this batch.

There seems to be some additional overhead in this case. Perhaps it could be improved, I'll have to take a look... But I doubt overwriting chunks of text is a common scenario.

Regards

Antoine.

From p.f.moore at gmail.com Wed Jan 28 12:19:50 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 11:19:50 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> Message-ID: <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com>

2009/1/28 Antoine Pitrou :
> When writing large chunks of text (4096, 1e6), bookkeeping costs become
> marginal and encoding costs dominate.
> 2.x has no encoding costs, which
> explains why it's so much faster.

Interesting. However, it's still "slower" in terms of perception. In 2.x, I regularly do the equivalent of

    f = open("filename", "r")
    ... read strings from f ...

Yes, I know this is byte I/O in reality, but for everything I do (Latin-1 on input and output, and for most practical purposes ASCII-only) it simply isn't relevant to me. If Python 3.x makes this substantially slower (working in a naive mode where I ignore encoding issues), claiming it's "encoding costs" doesn't make any difference - in a practical sense, I don't get any benefits and yet I pay the cost. (You can say my approach is wrong, but so what? I'll just say that 2.x is faster for me, and not migrate. Ultimately, this is about "marketing" 3.x...)

It would be helpful to limit this cost as much as possible - maybe that's simply ensuring that the default encoding for open is (in the majority of cases) a highly-optimised one whose costs *don't* dominate in the way you describe (although if you're using UTF-8, I'd guess that would be the usual default on Linux, so it looks like there's some work needed there). Hmm, I just checked and on Windows, it appears that sys.getdefaultencoding() is UTF-8. That seems odd - I would have thought the majority of Windows systems were NOT set to use UTF-8 by default...

Paul.

From victor.stinner at haypocalc.com Wed Jan 28 12:22:21 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 28 Jan 2009 12:22:21 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> Message-ID: <200901281222.21611.victor.stinner@haypocalc.com>

Le Wednesday 28 January 2009 11:55:16 Antoine Pitrou, vous avez écrit :
> 2.x has no encoding costs, which explains why it's so much faster.

Why not test io.open() or codecs.open(), which create unicode strings?
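(On 3.x the three spellings Victor mentions converge: the built-in open() is io.open(), and codecs.open() also yields str. A small runnable sketch of that relationship — the temp-file path here is a throwaway, not something from the thread:)

```python
import codecs
import io
import os
import tempfile

# Write a small UTF-8 text file with one non-ASCII char to force decoding.
with tempfile.NamedTemporaryFile("w", encoding="utf-8",
                                 delete=False) as tmp:
    tmp.write("h\u00e9llo\n")
    path = tmp.name

# On Python 3 the built-in open() is the same object as io.open();
# in 2.x only io.open() and codecs.open() produced unicode objects,
# which is why their cost was worth benchmarking separately.
assert open is io.open

with open(path, encoding="utf-8") as f1, \
        codecs.open(path, encoding="utf-8") as f2:
    assert f1.read() == f2.read() == "h\u00e9llo\n"

os.remove(path)
print("all three idioms agree on 3.x")
```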
-- Victor Stinner aka haypo http://www.haypocalc.com/blog/

From solipsis at pitrou.net Wed Jan 28 12:39:22 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 11:39:22 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> Message-ID:

Paul Moore gmail.com> writes:
>
> It would be helpful to limit this cost as much as possible - maybe
> that's simply ensuring that the default encoding for open is (in the
> majority of cases) a highly-optimised one whose costs *don't* dominate
> in the way you describe

As I pointed out, utf-8, utf-16 and latin1 decoders have already been optimized in py3k. For *pure ASCII* input, utf-8 decoding is blazingly fast (1GB/s here). The dataset for iobench isn't pure ASCII though, and that's why it's not as fast.

People are invited to test their own workloads with the io-c branch and report performance figures (and possible bugs). There are so many possibilities that the benchmark figures given by a generic tool can only be indicative.

Regards

Antoine.

From solipsis at pitrou.net Wed Jan 28 12:41:07 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 11:41:07 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <200901281222.21611.victor.stinner@haypocalc.com> Message-ID:

Victor Stinner haypocalc.com> writes:
>
> Le Wednesday 28 January 2009 11:55:16 Antoine Pitrou, vous avez écrit :
> > 2.x has no encoding costs, which explains why it's so much faster.
>
> Why not test io.open() or codecs.open(), which create unicode strings?

The goal is to test the idiomatic way of opening text files (the "one obvious way to do it", if you want).
There is no doubt that io.open() and codecs.open() in 2.x are much slower than the io-c branch. However, nobody is expecting very good performance from io.open() and codecs.open() in 2.x either. Regards Antoine. From p.f.moore at gmail.com Wed Jan 28 13:10:29 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 12:10:29 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> Message-ID: <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> 2009/1/28 Antoine Pitrou : > Paul Moore gmail.com> writes: >> >> It would be helpful to limit this cost as much as possible - maybe >> that's simply ensuring that the default encoding for open is (in the >> majority of cases) a highly-optimised one whose costs *don't* dominate >> in the way you describe > > As I pointed out, utf-8, utf-16 and latin1 decoders have already been optimized > in py3k. For *pure ASCII* input, utf-8 decoding is blazingly fast (1GB/s here). > The dataset for iobench isn't pure ASCII though, and that's why it's not as fast. Ah, thanks. Although you said your data was 95% ASCII, and you're getting decode speeds of 250MB/s. That's 75% slowdown for 5% of the data! Surely that's not right??? > People are invited to test their own workloads with the io-c branch and report > performance figures (and possible bugs). There are so many possibilities that > the benchmark figures given by a generic tool can only be indicative. At the moment, I don't have the time to download and build the branch, and in any case as I only have Visual Studio Express, I don't get the PGO optimisations, making any tests I do highly suspect. Paul. PS Can anyone comment on why Python defaults to utf-8 on Windows? That seems like a highly suspect default... 
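(Two different "defaults" are being conflated in Paul's PS — Martin untangles this further down the thread. Both can be inspected directly; the second call's result depends on the OS locale:)

```python
import locale
import sys

# Interpreter-wide default encoding, used for implicit conversions;
# hard-wired to "utf-8" everywhere on Python 3 (it was "ascii" in 2.x):
print(sys.getdefaultencoding())

# The locale ("ANSI") encoding, which is what open() actually falls
# back to for text files when no encoding= argument is given -- e.g.
# cp1252 on a typical Western-European Windows install:
print(locale.getpreferredencoding(False))
```

So seeing "utf-8" from sys.getdefaultencoding() on Windows says nothing about what encoding a plain open() will use for file I/O.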
From victor.stinner at haypocalc.com Wed Jan 28 13:14:37 2009 From: victor.stinner at haypocalc.com (Victor Stinner) Date: Wed, 28 Jan 2009 13:14:37 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <200901281222.21611.victor.stinner@haypocalc.com> Message-ID: <200901281314.37816.victor.stinner@haypocalc.com>

Le Wednesday 28 January 2009 12:41:07 Antoine Pitrou, vous avez écrit :
> > Why not test io.open() or codecs.open(), which create unicode strings?
>
> There is no doubt that io.open() and codecs.open() in 2.x are much slower
> than the io-c branch. However, nobody is expecting very good performance
> from io.open() and codecs.open() in 2.x either.

I use codecs.open() in my programs, so I'm interested in the benchmark for this function ;-) But if I understand correctly, Python (3.1?) will be faster (or much faster) at reading/writing files in unicode, and that's great news ;-)

-- Victor Stinner aka haypo http://www.haypocalc.com/blog/

From l.oluyede at gmail.com Wed Jan 28 16:46:41 2009 From: l.oluyede at gmail.com (Lawrence Oluyede) Date: Wed, 28 Jan 2009 16:46:41 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <497F7325.7070802@v.loewis.de> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: <9eebf5740901280746p2af7b15ci4d37fef40b57bf1f@mail.gmail.com>

On Wed, Jan 28, 2009 at 4:32 AM, Steve Holden wrote:
> I think that both 3.0 and 2.6 were rushed releases. 2.6 showed it in the
> inclusion (later recognizable as somewhat ill-advised so late in the
> day) of multiprocessing; 3.0 shows it in the very fact that this
> discussion has become necessary.

What about some kind of mechanism to "triage" 3rd party modules?
Something like: module gains popularity -> the core team decides it's worthy -> the module is included in the library in some kind of "contrib"/"ext" package (like the future mechanism) and for one major release stays in that package (so developers don't have to rush fixing _all_ the bugs they can while making a major release) -> after (at least) one major release the module moves up one level and is considered stable and rock solid. Meanwhile the documentation must say that the 3rd-party contributed module is not considered production-ready, though usable, until release current + 1.

I don't know if it's feasible, if it's insane or what; it's just an idea I had.

-- Lawrence, http://oluyede.org - http://twitter.com/lawrenceoluyede "It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair

From solipsis at pitrou.net Wed Jan 28 17:23:19 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 16:23:19 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> Message-ID:
If you look at how utf-8 decoding is implemented (in unicodeobject.c), it's quite obvious why it is so :-) There is a (very) fast path for chunks of pure ASCII data, and (fast but not blazingly fast) fallback for non ASCII data. Please don't think of it as a slowdown... It's still much faster than 2.x, which manages 130MB/s on the same data. Regards Antoine. From p.f.moore at gmail.com Wed Jan 28 17:54:49 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 16:54:49 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> Message-ID: <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> 2009/1/28 Antoine Pitrou : > If you look at how utf-8 decoding is implemented (in unicodeobject.c), it's > quite obvious why it is so :-) There is a (very) fast path for chunks of pure > ASCII data, and (fast but not blazingly fast) fallback for non ASCII data. Thanks for the explanation. > Please don't think of it as a slowdown... It's still much faster than 2.x, which > manages 130MB/s on the same data. Don't get me wrong - I'm hugely grateful for this work. And personally, I don't expect that I/O speed is ever likely to be a real bottleneck in the type of program I write. But I'm concerned that (much as with the whole "Python 3.0 is incompatible, and it will be hard to port to" meme) people will pick up on raw benchmark figures - no matter how much they aren't comparing like with like - and start making it sound like "Python 3.0 I/O is slower than 2.x" - which is a great disservice to the good work that's been done. I do think it's worth taking care over the default encoding, though. Quite apart from performance, getting "correct" behaviour is important. 
I can't speak for Unix, but on Windows, the following behaviour feels like a bug to me:

>echo a?b >a1
>python
Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print open("a1").read()
a?b
>>> ^Z

>\Apps\Python30\python.exe
Python 3.0 (r30:67507, Dec 3 2008, 20:14:27) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> print(open("a1").read())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\Apps\Python30\lib\io.py", line 1491, in write
    b = encoder.encode(s)
  File "D:\Apps\Python30\lib\encodings\cp850.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u0153' in
position 1: character maps to <undefined>
>>> ^Z

>chcp
Active code page: 850

Paul.

From solipsis at pitrou.net Wed Jan 28 18:04:18 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Wed, 28 Jan 2009 18:04:18 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> Message-ID: <1233162258.6489.4.camel@fsol>

Le mercredi 28 janvier 2009 à 16:54 +0000, Paul Moore a écrit :
> I do think it's worth taking care over the default encoding, though.
> Quite apart from performance, getting "correct" behaviour is
> important. I can't speak for Unix, but on Windows, the following
> behaviour feels like a bug to me: [...]

Please open a bug :)

cheers

Antoine.

From mal at egenix.com Wed Jan 28 18:51:08 2009 From: mal at egenix.com (M.-A.
Lemburg) Date: Wed, 28 Jan 2009 18:51:08 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> Message-ID: <49809B0C.4020905@egenix.com>

On 2009-01-27 22:19, Raymond Hettinger wrote:
> From: ""Martin v. Löwis""
>> Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think
>> it should be released earlier (else 3.0 looks fairly ridiculous).
>
> I think it should be released earlier and completely supplant 3.0
> before more third-party developers spend time migrating code.
> We needed 3.0 to get released so we could get the feedback
> necessary to shake it out. Now, it is time for it to fade into history
> and take advantage of the lessons learned.
>
> The principles for the 2.x series don't really apply here. In 2.x, there
> was always a useful, stable, clean release already fielded and there
> were tons of third-party apps that needed a slow rate of change.
>
> In contrast, 3.0 has a near zero installed user base (at least in terms
> of being used in production). It has very few migrated apps. It is
> not particularly clean and some of the work for it was incomplete
> when it was released.
>
> My preference is to drop 3.0 entirely (no incompatible bugfix release)
> and in early February release 3.1 as the real 3.x that migrators ought
> to aim for and that won't have incompatible bugfix releases. Then at
> PyCon, we can have a real bug day and fix-up any chips in the paint.
>
> If 3.1 goes out right away, then it doesn't matter if 3.0 looks ridiculous.
> All eyes go to the latest release. Better to get this done before more
> people download 3.0 to kick the tires.

Why don't we just mark 3.0.x as an experimental branch and keep updating/fixing things that were not sorted out for the 3.0.0 release ?! I think that's a fair approach, given that the only way to get field testing for new open-source software is to release early and often.
A 3.1 release should then be the first stable release of the 3.x series and mark the start of the usual deprecation mechanisms we have in the 2.x series. Needless to say, rushing 3.1 out now would only cause yet another experimental release... major releases do take time to stabilize.

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Source (#1, Jan 28 2009)
>>> Python/Zope Consulting and Support ... http://www.egenix.com/
>>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/
>>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/
________________________________________________________________________
::: Try our new mxODBC.Connect Python Database Interface for free ! ::::

eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/

From martin at v.loewis.de Wed Jan 28 18:55:31 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 28 Jan 2009 18:55:31 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> Message-ID: <49809C13.7070208@v.loewis.de>

> PS Can anyone comment on why Python defaults to utf-8 on Windows?

Don't panic. It doesn't, and you are misinterpreting what you are seeing.
Regards, Martin

From fuzzyman at voidspace.org.uk Wed Jan 28 18:55:39 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Wed, 28 Jan 2009 17:55:39 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <49809B0C.4020905@egenix.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> Message-ID: <49809C1B.3090805@voidspace.org.uk>

M.-A. Lemburg wrote:
> On 2009-01-27 22:19, Raymond Hettinger wrote:
>> From: ""Martin v. Löwis""
>>> Releasing 3.1 6 months after 3.0 sounds reasonable; I don't think
>>> it should be released earlier (else 3.0 looks fairly ridiculous).
>>>
>> I think it should be released earlier and completely supplant 3.0
>> before more third-party developers spend time migrating code.
>> We needed 3.0 to get released so we could get the feedback
>> necessary to shake it out. Now, it is time for it to fade into history
>> and take advantage of the lessons learned.
>>
>> The principles for the 2.x series don't really apply here. In 2.x, there
>> was always a useful, stable, clean release already fielded and there
>> were tons of third-party apps that needed a slow rate of change.
>>
>> In contrast, 3.0 has a near zero installed user base (at least in terms
>> of being used in production). It has very few migrated apps. It is
>> not particularly clean and some of the work for it was incomplete
>> when it was released.
>>
>> My preference is to drop 3.0 entirely (no incompatible bugfix release)
>> and in early February release 3.1 as the real 3.x that migrators ought
>> to aim for and that won't have incompatible bugfix releases. Then at
>> PyCon, we can have a real bug day and fix-up any chips in the paint.
>>
>> If 3.1 goes out right away, then it doesn't matter if 3.0 looks ridiculous.
>> All eyes go to the latest release. Better to get this done before more
>> people download 3.0 to kick the tires.
>> > > Why don't we just mark 3.0.x as experimental branch and keep updating/ > fixing things that were not sorted out for the 3.0.0 release ?! I think > that's a fair approach, given that the only way to get field testing > for new open-source software is to release early and often. > > A 3.1 release should then be the first stable release of the 3.x series > and mark the start of the usual deprecation mechanisms we have > in the 2.x series. Needless to say, that rushing 3.1 out now would > only cause yet another experimental release... major releases do take > time to stabilize. > > +1 I don't think we do users any favours by being cautious in removing / fixing things in the 3.0 releases. Michael Foord -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From martin at v.loewis.de Wed Jan 28 18:59:33 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 28 Jan 2009 18:59:33 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> References: <497F6E55.6090608@v.loewis.de> <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> Message-ID: <49809D05.4040005@v.loewis.de> Paul Moore wrote: > Hmm, I just checked and on Windows, it > appears that sys.getdefaultencoding() is UTF-8. That seems odd - I > would have thought the majority of Windows systems were NOT set to use > UTF-8 by default... In Python 3, sys.getdefaultencoding() is "utf-8" on all platforms, just as it was "ascii" in 2.x, on all platforms. The default encoding isn't used for I/O; check f.encoding to find out what encoding is used to read the file you are reading. 
Regards, Martin From martin at v.loewis.de Wed Jan 28 19:01:27 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 28 Jan 2009 19:01:27 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> Message-ID: <49809D77.7010703@v.loewis.de> >>>> print(open("a1").read()) > Traceback (most recent call last): > File "<stdin>", line 1, in <module> > File "D:\Apps\Python30\lib\io.py", line 1491, in write > b = encoder.encode(s) > File "D:\Apps\Python30\lib\encodings\cp850.py", line 19, in encode > return codecs.charmap_encode(input,self.errors,encoding_map)[0] > UnicodeEncodeError: 'charmap' codec can't encode character '\u0153' in > position 1: character maps to <undefined> Looks right to me. Martin From p.f.moore at gmail.com Wed Jan 28 19:17:16 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 18:17:16 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <49809D05.4040005@v.loewis.de> References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <49809D05.4040005@v.loewis.de> Message-ID: <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> 2009/1/28 "Martin v. Löwis" : > Paul Moore wrote: >> Hmm, I just checked and on Windows, it >> appears that sys.getdefaultencoding() is UTF-8. That seems odd - I >> would have thought the majority of Windows systems were NOT set to use >> UTF-8 by default...
> > In Python 3, sys.getdefaultencoding() is "utf-8" on all platforms, just > as it was "ascii" in 2.x, on all platforms. The default encoding isn't > used for I/O; check f.encoding to find out what encoding is used to > read the file you are reading. Thanks for the explanation. It might be clearer to document this a little more explicitly in the docs for open() (on the basis that people using open() are the most likely to be naive about encodings). I'll see if I can come up with an appropriate doc patch. Paul. From p.f.moore at gmail.com Wed Jan 28 19:20:08 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 18:20:08 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <49809D77.7010703@v.loewis.de> References: <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> Message-ID: <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> 2009/1/28 "Martin v. L?wis" : >>>>> print(open("a1").read()) >> Traceback (most recent call last): >> File "", line 1, in >> File "D:\Apps\Python30\lib\io.py", line 1491, in write >> b = encoder.encode(s) >> File "D:\Apps\Python30\lib\encodings\cp850.py", line 19, in encode >> return codecs.charmap_encode(input,self.errors,encoding_map)[0] >> UnicodeEncodeError: 'charmap' codec can't encode character '\u0153' in >> position 1: character maps to > > Looks right to me. I don't see why. I wrote the file from the console (cp850), read it in Python using the default encoding (which I would expect to match the console encoding), wrote it to sys.stdout (which I would expect to use the console encoding). How did the character end up not being encodable, when I've only used one encoding throughout? 
(And if my assumptions about the encodings used are wrong at some point, that's what I'm suggesting is the error). Paul. From martin at v.loewis.de Wed Jan 28 19:29:07 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 28 Jan 2009 19:29:07 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> References: <1afaf6160901271348g48d3bc72t546821ad2feeed76@mail.gmail.com> <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <49809D05.4040005@v.loewis.de> <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> Message-ID: <4980A3F3.6060005@v.loewis.de> > Thanks for the explanation. It might be clearer to document this a > little more explicitly in the docs for open() (on the basis that > people using open() are the most likely to be naive about encodings). > I'll see if I can come up with an appropriate doc patch. Notice that the determination of the specific encoding used is fairly elaborate: - if IO is to a terminal, Python tries to determine the encoding of the terminal. This is mostly relevant for Windows (which uses, by default, the "OEM code page" in the terminal). - if IO is to a file, Python tries to guess the "common" encoding for the system. On Unix, it queries the locale, and falls back to "ascii" if no locale is set. On Windows, it uses the "ANSI code page". On OSX, it uses the "system encoding". - if IO is binary, (clearly) no encoding is used. Network IO is always binary. - for file names, yet different algorithms apply. On Windows, it uses the Unicode API, so no need for an encoding. On Unix, it (again) uses the locale encoding. 
On OSX, it uses UTF-8 (just to be clear: this applies to the first argument of open(), not to the resulting file object) Regards, Martin From steven.bethard at gmail.com Wed Jan 28 19:40:22 2009 From: steven.bethard at gmail.com (Steven Bethard) Date: Wed, 28 Jan 2009 10:40:22 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <4980A3F3.6060005@v.loewis.de> References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <49809D05.4040005@v.loewis.de> <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> <4980A3F3.6060005@v.loewis.de> Message-ID: On Wed, Jan 28, 2009 at 10:29 AM, "Martin v. Löwis" wrote: > Notice that the determination of the specific encoding used is fairly > elaborate: > - if IO is to a terminal, Python tries to determine the encoding of > the terminal. This is mostly relevant for Windows (which uses, > by default, the "OEM code page" in the terminal). > - if IO is to a file, Python tries to guess the "common" encoding > for the system. On Unix, it queries the locale, and falls back > to "ascii" if no locale is set. On Windows, it uses the "ANSI > code page". On OSX, it uses the "system encoding". > - if IO is binary, (clearly) no encoding is used. Network IO is > always binary. > - for file names, yet different algorithms apply. On Windows, it > uses the Unicode API, so no need for an encoding. On Unix, it > (again) uses the locale encoding. On OSX, it uses UTF-8 > (just to be clear: this applies to the first argument of open(), > not to the resulting file object) This is a very helpful explanation. Is it in the docs somewhere, or if it isn't, could it be? Steve -- I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity.
--- Bucky Katt, Get Fuzzy From martin at v.loewis.de Wed Jan 28 19:43:03 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 28 Jan 2009 19:43:03 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> References: <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> Message-ID: <4980A737.9080509@v.loewis.de> Paul Moore wrote: > 2009/1/28 "Martin v. Löwis" : >>>>>> print(open("a1").read()) >>> Traceback (most recent call last): >>> File "<stdin>", line 1, in <module> >>> File "D:\Apps\Python30\lib\io.py", line 1491, in write >>> b = encoder.encode(s) >>> File "D:\Apps\Python30\lib\encodings\cp850.py", line 19, in encode >>> return codecs.charmap_encode(input,self.errors,encoding_map)[0] >>> UnicodeEncodeError: 'charmap' codec can't encode character '\u0153' in >>> position 1: character maps to <undefined> >> Looks right to me. > > I don't see why. I wrote the file from the console (cp850), read it in > Python using the default encoding (which I would expect to match the > console encoding), wrote it to sys.stdout (which I would expect to use > the console encoding). > > How did the character end up not being encodable, when I've only used > one encoding throughout? (And if my assumptions about the encodings > used are wrong at some point, that's what I'm suggesting is the > error). Well, first try to understand what the error *is*: py> unicodedata.name('\u0153') 'LATIN SMALL LIGATURE OE' py> unicodedata.name('£') 'POUND SIGN' py> ascii('£') "'\\xa3'" py> ascii('£'.encode('cp850').decode('cp1252')) "'\\u0153'" So when Python reads the file, it uses cp1252.
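The round trip Martin demonstrates can be reproduced on any platform by performing the two codec steps by hand (a sketch; no Windows console is needed):

```python
# '£' is byte 0x9c in cp850; read back as cp1252, 0x9c means 'œ'
# (U+0153, LATIN SMALL LIGATURE OE), which cp850 cannot encode --
# hence the UnicodeEncodeError when printing back to the console.
data = "£".encode("cp850")
assert data == b"\x9c"

misread = data.decode("cp1252")
assert misread == "\u0153"

try:
    misread.encode("cp850")
except UnicodeEncodeError as exc:
    print(exc)  # 'charmap' codec can't encode character '\u0153' ...
```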
This is sensible - just that the console uses cp850 doesn't change the fact that the "common" encoding of files on your system is cp1252. It is an unfortunate fact of Windows that the console window uses a different encoding from the rest of the system (namely, the console uses the OEM code page, and everything else uses the ANSI code page). Furthermore, U+0153 does not exist in cp850 (i.e. the terminal doesn't support œ), hence the exception. Regards, Martin From martin at v.loewis.de Wed Jan 28 19:46:41 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Wed, 28 Jan 2009 19:46:41 +0100 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <49809D05.4040005@v.loewis.de> <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> <4980A3F3.6060005@v.loewis.de> Message-ID: <4980A811.4070900@v.loewis.de> > This is a very helpful explanation. Is it in the docs somewhere, or if it > isn't, could it be? I actually don't know. Regards, Martin From p.f.moore at gmail.com Wed Jan 28 19:52:41 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 18:52:41 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <4980A737.9080509@v.loewis.de> References: <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> <4980A737.9080509@v.loewis.de> Message-ID: <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> 2009/1/28 "Martin v.
Löwis" : > Well, first try to understand what the error *is*: > > py> unicodedata.name('\u0153') > 'LATIN SMALL LIGATURE OE' > py> unicodedata.name('£') > 'POUND SIGN' > py> ascii('£') > "'\\xa3'" > py> ascii('£'.encode('cp850').decode('cp1252')) > "'\\u0153'" > > So when Python reads the file, it uses cp1252. This is sensible - just > that the console uses cp850 doesn't change the fact that the "common" > encoding of files on your system is cp1252. It is an unfortunate fact > of Windows that the console window uses a different encoding from the > rest of the system (namely, the console uses the OEM code page, and > everything else uses the ANSI code page). Ah, I see. That is not entirely obvious. The key bit of information is that the default io encoding is cp1252, not cp850. I know that in theory, I see the consequences often enough (:-)), but it isn't "instinctive" for me. And the simple "default encoding is system dependent" comment is not very helpful in terms of warning me that there could be an issue. I do think that more wording around encoding defaults would be useful - as I said, I'll think about how best it could be made into a doc patch. I suspect the best approach would be to have a section (maybe in the docs for the codecs module) explaining all the details, and then a cross-reference to that from the various places (open, io) where default encodings are mentioned. Paul. > > Furthermore, U+0153 does not exist in cp850 (i.e. the terminal doesn't support œ), hence the exception.
> > Regards, > Martin > From tjreedy at udel.edu Wed Jan 28 20:03:31 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 28 Jan 2009 14:03:31 -0500 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <9EEA3AE4EA4646519D89DFF7BB65C3A7@RaymondLaptop1> <37FB8BA2BF694F2CA28843AA86FDE6D5@RaymondLaptop1> <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <49809D05.4040005@v.loewis.de> <79990c6b0901281017t569bf594l2f66e5aee6cff4e2@mail.gmail.com> <4980A3F3.6060005@v.loewis.de> Message-ID: Steven Bethard wrote: > On Wed, Jan 28, 2009 at 10:29 AM, "Martin v. L?wis" wrote: >> Notice that the determination of the specific encoding used is fairly >> elaborate: >> - if IO is to a terminal, Python tries to determine the encoding of >> the terminal. This is mostly relevant for Windows (which uses, >> by default, the "OEM code page" in the terminal). >> - if IO is to a file, Python tries to guess the "common" encoding >> for the system. On Unix, it queries the locale, and falls back >> to "ascii" if no locale is set. On Windows, it uses the "ANSI >> code page". On OSX, it uses the "system encoding". >> - if IO is binary, (clearly) no encoding is used. Network IO is >> always binary. >> - for file names, yet different algorithms apply. On Windows, it >> uses the Unicode API, so no need for an encoding. On Unix, it >> (again) uses the locale encoding. On OSX, it uses UTF-8 >> (just to be clear: this applies to the first argument of open(), >> not to the resulting file object) > > This a very helpful explanation. Is it in the docs somewhere, or if it > isn't, could it be? Here is the current entry on encodings in the Lib ref, built-in types, file objects. file.encoding The encoding that this file uses. When strings are written to a file, they will be converted to byte strings using this encoding. 
In addition, when the file is connected to a terminal, the attribute gives the encoding that the terminal is likely to use (that information might be incorrect if the user has misconfigured the terminal). The attribute is read-only and may not be present on all file-like objects. It may also be None, in which case the file uses the system default encoding for converting strings. From exarkun at divmod.com Wed Jan 28 20:17:46 2009 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Wed, 28 Jan 2009 14:17:46 -0500 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> Message-ID: <20090128191746.24460.683659615.divmod.quotient.3337@henry.divmod.com> On Wed, 28 Jan 2009 18:52:41 +0000, Paul Moore wrote: >2009/1/28 "Martin v. Löwis" : >> Well, first try to understand what the error *is*: >> >> py> unicodedata.name('\u0153') >> 'LATIN SMALL LIGATURE OE' >> py> unicodedata.name('£') >> 'POUND SIGN' >> py> ascii('£') >> "'\\xa3'" >> py> ascii('£'.encode('cp850').decode('cp1252')) >> "'\\u0153'" >> >> So when Python reads the file, it uses cp1252. This is sensible - just >> that the console uses cp850 doesn't change the fact that the "common" >> encoding of files on your system is cp1252. It is an unfortunate fact >> of Windows that the console window uses a different encoding from the >> rest of the system (namely, the console uses the OEM code page, and >> everything else uses the ANSI code page). > >Ah, I see. That is not entirely obvious. The key bit of information is >that the default io encoding is cp1252, not cp850. I know that in >theory, I see the consequences often enough (:-)), but it isn't >"instinctive" for me. And the simple "default encoding is system >dependent" comment is not very helpful in terms of warning me that >there could be an issue. It probably didn't help that the exception raised told you that the error was in the "charmap" codec. This should have said "cp850" instead.
The fact that cp850 is implemented in terms of "charmap" isn't very interesting. The fact that the error occurred while encoding some text using "cp850" is. Jean-Paul From tjreedy at udel.edu Wed Jan 28 20:21:46 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 28 Jan 2009 14:21:46 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <49809C1B.3090805@voidspace.org.uk> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> Message-ID: Michael Foord wrote: > M.-A. Lemburg wrote: >> Why don't we just mark 3.0.x as experimental branch and keep updating/ >> fixing things that were not sorted out for the 3.0.0 release ?! I think >> that's a fair approach, given that the only way to get field testing >> for new open-source software is to release early and often. >> >> A 3.1 release should then be the first stable release of the 3.x series >> and mark the start of the usual deprecation mechanisms we have >> in the 2.x series. Needless to say, that rushing 3.1 out now would >> only cause yet another experimental release... major releases do take >> time to stabilize. >> >> > +1 > > I don't think we do users any favours by being cautious in removing / > fixing things in the 3.0 releases. I have two main reactions to 3.0. 1. It is great for my purpose -- coding algorithms. The cleaner object and text models are a mental relief for me. So it was a service to me to release it. I look forward to it becoming standard Python and have made my small contribution by helping clean up the 3.0 version of the docs. 2. It is something of a trial run that should be fixed as soon as possible. I seem to remember something from Shakespeare(?) "If it twer done, tis best it twer done quickly". Guido said something over a year ago to the effect that he did not expect 3.0 to be used as a production release, so I do not think it should be treated as one.
Label it developmental and people will not try to keep it in use for years and years in the way that, say, 2.4 still is. tjr From rhamph at gmail.com Wed Jan 28 20:42:30 2009 From: rhamph at gmail.com (Adam Olsen) Date: Wed, 28 Jan 2009 12:42:30 -0700 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> References: <79990c6b0901280319q7a9ee669w758d4d1ae765b3d3@mail.gmail.com> <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> <4980A737.9080509@v.loewis.de> <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> Message-ID: On Wed, Jan 28, 2009 at 11:52 AM, Paul Moore wrote: > Ah, I see. That is not entirely obvious. The key bit of information is > that the default io encoding is cp1252, not cp850. I know that in > theory, I see the consequences often enough (:-)), but it isn't > "instinctive" for me. And the simple "default encoding is system > dependent" comment is not very helpful in terms of warning me that > there could be an issue. > > I do think that more wording around encoding defaults would be useful > - as I said, I'll think about how best it could be made into a doc > patch. I suspect the best approach would be to have a section (maybe > in the docs for the codecs module) explaining all the details, and > then a cross-reference to that from the various places (open, io) > where default encodings are mentioned. It'd also help if the file repr gave the encoding: >>> f = open('/dev/null') >>> f <io.TextIOWrapper object at 0x...> >>> import sys >>> sys.stdout <io.TextIOWrapper object at 0x...> Of course I can check .encoding manually, but it needs to be more intuitive.
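A helper along the lines Adam suggests is easy to write in the meantime (illustrative only — `describe` is not an existing API; it just reads the attributes that text-mode file objects already carry):

```python
import sys

def describe(f):
    """One-line summary of a stream's name and encoding."""
    name = getattr(f, "name", repr(f))
    encoding = getattr(f, "encoding", None)  # None for binary streams
    return "%s (encoding=%r)" % (name, encoding)

print(describe(sys.stdout))  # e.g. "<stdout> (encoding='utf-8')"
```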
-- Adam Olsen, aka Rhamphoryncus From daniel at stutzbachenterprises.com Wed Jan 28 20:59:13 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Wed, 28 Jan 2009 13:59:13 -0600 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com> <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> <4980A737.9080509@v.loewis.de> <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> Message-ID: On Wed, Jan 28, 2009 at 1:42 PM, Adam Olsen wrote: > It'd also help if the file repr gave the encoding: > +1 -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From python at rcn.com Wed Jan 28 21:08:43 2009 From: python at rcn.com (Raymond Hettinger) Date: Wed, 28 Jan 2009 12:08:43 -0800 Subject: [Python-Dev] Python 3.0.1 (io-in-c) References: <79990c6b0901280410p68a77baakca66dce118912df3@mail.gmail.com><79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com><49809D77.7010703@v.loewis.de><79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com><4980A737.9080509@v.loewis.de><79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> Message-ID: [Adam Olsen] > It'd also help if the file repr gave the encoding: +1 from me too. That will be a big help. Raymond From steve at holdenweb.com Wed Jan 28 22:19:32 2009 From: steve at holdenweb.com (Steve Holden) Date: Wed, 28 Jan 2009 16:19:32 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> Message-ID: Terry Reedy wrote: > Michael Foord wrote: >> M.-A. 
Lemburg wrote: > >>> Why don't we just mark 3.0.x as experimental branch and keep updating/ >>> fixing things that were not sorted out for the 3.0.0 release ?! I think >>> that's a fair approach, given that the only way to get field testing >>> for new open-source software is to release early and often. >>> >>> A 3.1 release should then be the first stable release of the 3.x series >>> and mark the start of the usual deprecation mechanisms we have >>> in the 2.x series. Needless to say, that rushing 3.1 out now would >>> only cause yet another experimental release... major releases do take >>> time to stabilize. >>> >>> >> +1 >> >> I don't think we do users any favours by being cautious in removing / >> fixing things in the 3.0 releases. > > I have two main reactions to 3.0. > > 1. It is great for my purpose -- coding algorithms. > The cleaner object and text models are a mental relief for me. > So it was a service to me to release it. > I look forward to it becoming standard Python and have made my small > contribution by helping clean up the 3.0 version of the docs. > > 2. It is something of a trial run that it should be fixed as soon as > possible. I seem to remember sometning from Shakespear(?) "If it twer > done, tis best it twer done quickly". > > Guido said something over a year ago to the effect that he did not > expect 3.0 to be used as a production release, so I do not think it > should to treated as one. Label it developmental and people will not > try to keep in use for years and years in the way that, say, 2.4 still is. > It might also be a good idea to take the download link off the front page of python.org: until that happens newbies are going to keep coming along and downloading it "because it's the newest". 
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From ncoghlan at gmail.com Wed Jan 28 22:24:13 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Thu, 29 Jan 2009 07:24:13 +1000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de><497F7325.7070802@v.loewis.de> <2E3BD230-BE90-4E31-88D3-7844E5FFC3BC@python.org> Message-ID: <4980CCFD.2050605@gmail.com> Steve Holden wrote: > 2.6 showed it in the > inclusion (later recognizable as somewhat ill-advised so late in the > day) of multiprocessing; Given the longstanding fork() bugs that were fixed as a result of that inclusion, I think that ill-advised is too strong... could it have done with a little more time to bed down multiprocessing in particular? Possibly. Was it worth holding up the whole release just for that? I don't think so - we'd already fixed up the problems that the test suite and python-dev were likely to find, so the cost/benefit ratio on a delay would have been pretty poor. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From martin at v.loewis.de Wed Jan 28 22:26:18 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Wed, 28 Jan 2009 22:26:18 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> Message-ID: <4980CD7A.6000506@v.loewis.de> > It might also be a good idea to take the download link off the front > page of python.org: until that happens newbies are going to keep coming > along and downloading it "because it's the newest". It was (and probably still is) Guido's position that 3.0 *is* the version that newbies should be using. 
Regards, Martin From p.f.moore at gmail.com Thu Jan 29 00:33:59 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Wed, 28 Jan 2009 23:33:59 +0000 Subject: [Python-Dev] Python 3.0.1 (io-in-c) In-Reply-To: References: <79990c6b0901280854s17ad8f06ldf9c894b5d00e455@mail.gmail.com> <49809D77.7010703@v.loewis.de> <79990c6b0901281020ue5c609br3cd7f84ffdf44411@mail.gmail.com> <4980A737.9080509@v.loewis.de> <79990c6b0901281052q23634382y7fffa767ac32d81b@mail.gmail.com> Message-ID: <79990c6b0901281533x3918052wbb2a1e866338def4@mail.gmail.com> 2009/1/28 Raymond Hettinger : > [Adam Olsen] >> >> It'd also help if the file repr gave the encoding: > > +1 from me too. That will be a big help. Definitely. People *are* going to get confused by encoding errors - let's give them all the help we can. Paul From stephen at xemacs.org Thu Jan 29 01:59:34 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Thu, 29 Jan 2009 09:59:34 +0900 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <4980CD7A.6000506@v.loewis.de> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> Message-ID: <87d4e7dj0p.fsf@xemacs.org> "Martin v. L?wis" writes: > > It might also be a good idea to take the download link off the front > > page of python.org: until that happens newbies are going to keep coming > > along and downloading it "because it's the newest". > > It was (and probably still is) Guido's position that 3.0 *is* the > version that newbies should be using. Indeed. See Terry Reedy's post. Somebody who is looking for a platform for a production application is not going to download something "because it's the newest". Sure, those advocating other platforms will carp about Python 3.0, but hey, where is Perl 6? "The amazing thing about a dancing bear is *not* how well it dances." 
Let's not get too worried about the PR aspects; just fixing the bugs as we go along will fix that to the extent that people are not totally prejudiced anyway. I think there is definitely something to the notion that the 3.x vs. 3.0.y distinction should signal something, and I personally like MAL's suggestion that 3.0.x should be marked some sort of beta in perpetuity, or at least until 3.1 is ready to ship as stable and production-ready. (That's AIUI, MAL's intent may be somewhat different.) From tjreedy at udel.edu Thu Jan 29 04:22:46 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Wed, 28 Jan 2009 22:22:46 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <87d4e7dj0p.fsf@xemacs.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> Message-ID: Stephen J. Turnbull wrote: > "Martin v. L?wis" writes: > > > It might also be a good idea to take the download link off the front > > > page of python.org: until that happens newbies are going to keep coming > > > along and downloading it "because it's the newest". By that logic, I would suggest removing 2.6 ;-) See below. > > > > It was (and probably still is) Guido's position that 3.0 *is* the > > version that newbies should be using. > > Indeed. See Terry Reedy's post. When people ask on c.l.p, I recommend either 3.0 for the relative cleanliness or 2.5 (until now, at least) for the 3rd-party add-on availability (that will gradually improve for both 2.6 and more slowly, for 3.x). I expect that some newbies would find 2.6 a somewhat confusing mix of old and new. 
tjr From steve at holdenweb.com Thu Jan 29 04:38:31 2009 From: steve at holdenweb.com (Steve Holden) Date: Wed, 28 Jan 2009 22:38:31 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> Message-ID: <498124B7.1010204@holdenweb.com> Terry Reedy wrote: > Stephen J. Turnbull wrote: >> "Martin v. Löwis" writes: >> > > It might also be a good idea to take the download link off the front >> > > page of python.org: until that happens newbies are going to keep >> coming >> > > along and downloading it "because it's the newest". > > By that logic, I would suggest removing 2.6 ;-) > See below. > >> > > It was (and probably still is) Guido's position that 3.0 *is* the >> > version that newbies should be using. >> >> Indeed. See Terry Reedy's post. > > When people ask on c.l.p, I recommend either 3.0 for the relative > cleanliness or 2.5 (until now, at least) for the 3rd-party add-on > availability (that will gradually improve for both 2.6 and more slowly, > for 3.x). I expect that some newbies would find 2.6 a somewhat > confusing mix of old and new. > Fair point. At least we both agree that the current site doesn't best serve the punters. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From mal at egenix.com Thu Jan 29 09:56:52 2009 From: mal at egenix.com (M.-A. Lemburg) Date: Thu, 29 Jan 2009 09:56:52 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <87d4e7dj0p.fsf@xemacs.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> Message-ID: <49816F54.20501@egenix.com> On 2009-01-29 01:59, Stephen J. Turnbull wrote: > I think there is definitely something to the notion that the 3.x > vs. 3.0.y distinction should signal something, and I personally like > MAL's suggestion that 3.0.x should be marked some sort of beta in > perpetuity, or at least until 3.1 is ready to ship as stable and > production-ready. (That's AIUI, MAL's intent may be somewhat > different.) That's basically it, yes. I don't think that marking 3.0 as experimental is bad in any way, as long as we're clear about it.
Having lots of incompatible changes in a patch level release will start to get users worrying about the stability of the 3.x branch anyway, so a heads-up message and clear perspective for the 3.1 release is a lot better than dumping 3.0 altogether or not providing such a perspective at all. That said, we should stick to the statement already made for 3.0 (too early as it now appears), i.e. that the same development and release processes will apply to the 3.x branch as we have for 2.x - starting with 3.1. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 29 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From ajp at eutechnyx.com Thu Jan 29 11:38:26 2009 From: ajp at eutechnyx.com (Dr Andrew Perella) Date: Thu, 29 Jan 2009 10:38:26 -0000 Subject: [Python-Dev] python breakpoint opcode Message-ID: <014f01c981fd$b7bb3c60$2731b520$@com> Hi, I was thinking of adding a breakpoint opcode to python to enable less invasive debugging. I came across posts from 1999 by Vladimir Marangozov and Christian Tismer discussing this issue but the links to the code are all out of date. Did anything come of this? Is this a good approach to take? - if so why was this never incorporated? Cheers, Andrew Dr. Andrew Perella Chief Software Architect Eutechnyx Limited. Metro Centre East Business Park, Waterside Drive, Gateshead, Tyne & Wear NE11 9HU UK Co.Reg.No.
2172322 T +44 (0) 191 460 6060 F +44 (0) 191 460 2266 E ajp at eutechnyx.com W www.eutechnyx.com This e-mail is confidential and may be privileged. It may be read, copied and used only by the intended recipient. No communication sent by e-mail to or from Eutechnyx is intended to give rise to contractual or other legal liability, apart from liability which cannot be excluded under English law. This email has been scanned for all known viruses by the Email Protection Agency. http://www.epagency.net www.eutechnyx.com Eutechnyx Limited. Registered in England No: 2172322 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jcea at jcea.es Thu Jan 29 13:38:29 2009 From: jcea at jcea.es (Jesus Cea) Date: Thu, 29 Jan 2009 13:38:29 +0100 Subject: [Python-Dev] mlockall() in Python? In-Reply-To: References: Message-ID: <4981A345.5030004@jcea.es> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Evgeni Golov wrote: > I'd like to write a small daemon in Python, which should never be > swapped out (on Linux, this daemon will be Linux specific, so no need > in a platform-independent solution). > > In C I'd do: > #include <sys/mman.h> > mlockall(MCL_FUTURE); > //do stuff here > munlockall(); > > Is there anything similar in Python? I would like things like this added to core python, but since you are restricting yourself to linux, you can use a (trivial) ctypes wrapper. - -- Jesus Cea Avion _/_/ _/_/_/ _/_/_/ jcea at jcea.es - http://www.jcea.es/ _/_/ _/_/ _/_/ _/_/ _/_/ jabber / xmpp:jcea at jabber.org _/_/ _/_/ _/_/_/_/_/ .
_/_/ _/_/ _/_/ _/_/ _/_/ "Things are not so easy" _/_/ _/_/ _/_/ _/_/ _/_/ _/_/ "My name is Dump, Core Dump" _/_/_/ _/_/_/ _/_/ _/_/ "El amor es poner tu felicidad en la felicidad de otro" - Leibniz -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.8 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iQCVAwUBSYGjQZlgi5GaxT1NAQIUMAP/Tl7SWFgVkeeEdRHbkrtlOX4eERbfny7A xBkUVO72lPB1XnRxZT0+Vo2ggYh/6IHN6SQriEZZPe9Wwn3cZzirjjAqpdvb70TJ 1BezGtLKsoDp4cf6QqDwfITecMaGjfaNhKvvSvPFzaKlpbjsdQjyGCI0dOvxzY5J 6BUxE2yYJdc= =N0dO -----END PGP SIGNATURE----- From facundobatista at gmail.com Thu Jan 29 13:50:14 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Thu, 29 Jan 2009 10:50:14 -0200 Subject: [Python-Dev] Examples in Py2 or Py3 Message-ID: Hi! In the Python Argentina mail list there's already people passing examples and asking help about Python 3. This introduces the problem that some examples are in Py2 and others are in Py3. Sometimes this is not explicit, and gets confusing. I'm trying to avoid this confusion when preparing my own examples. So far, I use (py3) as a prefix for any example block, like: (Py3k) >>> (some example) (some result) Is there any recommended way to avoid confusion in these cases? (I'm thinking about changing the prompt in my Python installation, to something like ">2>>" and ">3>>", to be explicit about it... but I wanted to know if there's another better way) Thanks. -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From orsenthil at gmail.com Thu Jan 29 13:57:24 2009 From: orsenthil at gmail.com (Senthil Kumaran) Date: Thu, 29 Jan 2009 18:27:24 +0530 Subject: [Python-Dev] Examples in Py2 or Py3 In-Reply-To: References: Message-ID: <7c42eba10901290457y4f7ce3d2g79e84ebb58870f1f@mail.gmail.com> > Facundo Batista wrote: > Hi! > > In the Python Argentina mail list there's already people passing > examples and asking help about Python 3. 
For complete snippets: #!/usr/bin/env python3.0 vs #!/usr/bin/env python2.6 And for blocks of code # this is for python 3.0 # this is for python 2.6 I know it is very rudimentary, but I have marked snippets with these identifications. -- Senthil From facundobatista at gmail.com Thu Jan 29 14:00:02 2009 From: facundobatista at gmail.com (Facundo Batista) Date: Thu, 29 Jan 2009 11:00:02 -0200 Subject: [Python-Dev] Examples in Py2 or Py3 In-Reply-To: <7c42eba10901290457y4f7ce3d2g79e84ebb58870f1f@mail.gmail.com> References: <7c42eba10901290457y4f7ce3d2g79e84ebb58870f1f@mail.gmail.com> Message-ID: 2009/1/29 Senthil Kumaran : > And for blocks of code > > # this is for python 3.0 > # this is for python 2.6 Too much work, ;) Seriously, most probably people will forget to add that after the third example... -- . Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/ From ben at redfrontdoor.org Thu Jan 29 15:12:02 2009 From: ben at redfrontdoor.org (Ben North) Date: Thu, 29 Jan 2009 14:12:02 +0000 Subject: [Python-Dev] Partial function application 'from the right' Message-ID: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Hi, I find 'functools.partial' useful, but occasionally I'm unable to use it because it lacks a 'from the right' version. E.g., to create a function which splits a string on commas, you can't say # Won't work when called: split_comma = partial(str.split, sep = ',') and to create a 'log to base 10' function, you can't say # Won't work when called: log_10 = partial(math.log, base = 10.0) because str.split and math.log don't take keyword arguments. PEP-309, which introduces functools.partial, mentions For completeness, another object that appends partial arguments after those supplied in the function call (maybe called rightcurry) has been suggested. 'Completeness' by itself doesn't seem to have been a compelling reason to introduce this feature, but the above cases show that it would be of real use.
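For illustration, the behaviour being asked for can be sketched in a few lines of pure Python (the name partial_right and the exact argument-merging rules here are hypothetical, not taken from the patch discussed below):

```python
import itertools

def partial_right(func, *bound_args, **bound_kwargs):
    # Bound positional arguments are appended *after* the call-time
    # arguments -- the mirror image of functools.partial, which
    # prepends them.  Call-time keyword arguments win over bound ones.
    def wrapper(*args, **kwargs):
        merged = dict(bound_kwargs)
        merged.update(kwargs)
        return func(*(args + bound_args), **merged)
    return wrapper

split_comma = partial_right(str.split, ',')
drop = partial_right(itertools.islice, None, None)
```

With this sketch, split_comma('a,b,c') returns ['a', 'b', 'c'], and list(drop(range(10), 5)) returns [5, 6, 7, 8, 9].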
I've created a patch which adds a 'partial_right' function. The two examples above: >>> import functools, math >>> split_comma = functools.partial_right(str.split, ',') >>> split_comma('a,b,c') ['a', 'b', 'c'] >>> log_10 = functools.partial_right(math.log, 10.0) >>> log_10(100.0) 2.0 and a general illustrative one: >>> def all_args(*args): return args ... >>> functools.partial_right(all_args, 1, 2)(3, 4) (3, 4, 1, 2) I was prompted to look at this by a recent message on python-dev: Xavier Morel , Thu, 22 Jan 2009 14:44:41 +0100: > [...] drop(iterable, n) has to be written islice(iterable, n, None) > (and of course the naming isn't ideal), and you can't really use > functools.partial since the iterator is the first argument (unless > there's a way to partially apply only the tail args without kwargs). Xavier's case becomes: >>> import functools, itertools >>> drop = functools.partial_right(itertools.islice, None, None) >>> list(drop(range(10), 5)) [5, 6, 7, 8, 9] The patch adds a 'from_right' member to partial objects, which can be True for the new from-the-right behaviour, or False for the existing from-the-left behaviour. It's quite small, only c.40 lines, plus a c.70-line change to test_functools.py. I imagine a documentation patch would be c.20 lines. Would there be any interest in this? Ben. From aahz at pythoncraft.com Thu Jan 29 15:20:21 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 29 Jan 2009 06:20:21 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> Message-ID: <20090129142021.GA8996@panix.com> On Tue, Jan 27, 2009, Raymond Hettinger wrote: > > It is becoming the norm in 3.x for functions to return iterators, > generators, or views whereever possible.
> > I had a thought that pprint() ought to be taught to print iterators: > > pprint(enumerate(seq)) > pprint(map(somefunc, somedata)) > pprint(permutations(elements)) > pprint(mydict.items()) Along the lines of what others have said: pprint() cannot consume an unknown iterator. Therefore, you can pretty up the existing output slightly or special-case certain known iterators. There might also be an API change to pprint() that allowed it to consume iterators. The reason I'm chiming in is that I would welcome a PEP that created a __pprint__ method as an alternative to special-casing. I think that it would be generically useful for user-created objects, plus once you've added this feature other people can easily do some of the grunt work of extending this through the Python core. (Actually, unless someone objects, I don't think a PEP is required, but it would be good for the usual reasons that PEPs are written, to provide a central place documenting the addition.) This can also be done for Python 2.7, too. -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From fuzzyman at voidspace.org.uk Thu Jan 29 15:22:47 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 29 Jan 2009 14:22:47 +0000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <20090129142021.GA8996@panix.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> Message-ID: <4981BBB7.50502@voidspace.org.uk> Aahz wrote: > On Tue, Jan 27, 2009, Raymond Hettinger wrote: > >> It is becoming the norm in 3.x for functions to return iterators, >> generators, or views whereever possible. 
>> >> I had a thought that pprint() ought to be taught to print iterators: >> >> pprint(enumerate(seq)) >> pprint(map(somefunc, somedata)) >> pprint(permutations(elements)) >> pprint(mydict.items()) >> > > Along the lines of what others have said: pprint() cannot consume an > unknown iterator. Therefore, you can pretty up the existing output > slightly or special-case certain known iterators. There might also be an > API change to pprint() that allowed it to consume iterators. > > The reason I'm chiming in is that I would welcome a PEP that created a > __pprint__ method as an alternative to special-casing. I think that it > would be generically useful for user-created objects, plus once you've > added this feature other people can easily do some of the grunt work of > extending this through the Python core. (Actually, unless someone > objects, I don't think a PEP is required, but it would be good for the > usual reasons that PEPs are written, to provide a central place > documenting the addition.) > > This can also be done for Python 2.7, too. > Don't we have a pretty-print API - and isn't it spelled __str__ ? Michael Foord -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From aahz at pythoncraft.com Thu Jan 29 17:06:18 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 29 Jan 2009 08:06:18 -0800 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4981BBB7.50502@voidspace.org.uk> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> Message-ID: <20090129160617.GA21013@panix.com> On Thu, Jan 29, 2009, Michael Foord wrote: > Aahz wrote: >> On Tue, Jan 27, 2009, Raymond Hettinger wrote: >> >>> It is becoming the norm in 3.x for functions to return iterators, >>> generators, or views whereever possible. 
>>> >>> I had a thought that pprint() ought to be taught to print iterators: >>> >>> pprint(enumerate(seq)) >>> pprint(map(somefunc, somedata)) >>> pprint(permutations(elements)) >>> pprint(mydict.items()) >>> >> >> The reason I'm chiming in is that I would welcome a PEP that created a >> __pprint__ method as an alternative to special-casing. I think that it >> would be generically useful for user-created objects, plus once you've >> added this feature other people can easily do some of the grunt work of >> extending this through the Python core. (Actually, unless someone >> objects, I don't think a PEP is required, but it would be good for the >> usual reasons that PEPs are written, to provide a central place >> documenting the addition.) > > Don't we have a pretty-print API - and isn't it spelled __str__ ? In theory, yes. In practice, we wouldn't be having this discussion if that really worked. But it probably would make sense to see how far using __str__ can take us -- AFAICT enumobject.c doesn't define __str__ (although I may be missing something, I don't know Python at the C level very well). -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From rdmurray at bitdance.com Thu Jan 29 17:17:50 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Thu, 29 Jan 2009 11:17:50 -0500 (EST) Subject: [Python-Dev] Examples in Py2 or Py3 In-Reply-To: References: Message-ID: On Thu, 29 Jan 2009 at 10:50, Facundo Batista wrote: > This introduces the problem that some examples are in Py2 and others > are in Py3. Sometimes this is not explicit, and gets confusing. I'm > trying to avoid this confusion when preparing my own examples. 
So far, > I use (py3) as a prefix for any example block, like: > > (Py3k) >>>> (some example) > (some result) > > Is there any recommended way to avoid confusion in these cases? (I'm > thinking about changing the prompt in my Python installation, to > something like ">2>>" and ">3>>", to be explicit about it... but I > wanted to know if there's another better way) My suggestion would be to run the examples in the interpreter shell to validate them before posting, and just cut and paste the banner along with the example: Python 2.6.1 (r261:67515, Jan 7 2009, 17:09:13) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> print "hello world" hello world Python 3.0 (r30:67503, Dec 18 2008, 19:09:30) [GCC 4.3.2] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> print("hello world") hello world A bit noisier, but not much more work than cutting and pasting the example without the banner :) --RDM From phd at phd.pp.ru Thu Jan 29 17:23:17 2009 From: phd at phd.pp.ru (Oleg Broytmann) Date: Thu, 29 Jan 2009 19:23:17 +0300 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <20090129160617.GA21013@panix.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <20090129160617.GA21013@panix.com> Message-ID: <20090129162317.GC27314@phd.pp.ru> On Thu, Jan 29, 2009 at 08:06:18AM -0800, Aahz wrote: > On Thu, Jan 29, 2009, Michael Foord wrote: > > Don't we have a pretty-print API - and isn't it spelled __str__ ? > > In theory, yes. In practice, we wouldn't be having this discussion if > that really worked. But it probably would make sense to see how far > using __str__ can take us -- AFAICT enumobject.c doesn't define __str__ > (although I may be missing something, I don't know Python at the C level > very well). Container objects (tuples/lists/dicts/sets) don't define __str__. Is __pprint__ an attempt to redefine __str__? Oleg. 
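To make the idea concrete, the kind of hook being discussed might be dispatched roughly like this (entirely hypothetical code -- no __pprint__ protocol exists; the name and lookup rules are invented for illustration):

```python
from pprint import pformat

def pretty(obj):
    # Hypothetical dispatch: prefer a per-type __pprint__ hook when one
    # is defined, and fall back to pprint's pformat otherwise.
    hook = getattr(type(obj), '__pprint__', None)
    if hook is not None:
        return hook(obj)
    return pformat(obj)

class Enumeration:
    """Toy stand-in for an iterator that knows how to describe itself."""
    def __init__(self, seq):
        self.seq = list(seq)
    def __pprint__(self):
        return '\n'.join('%d: %r' % pair for pair in enumerate(self.seq))
```

pretty(Enumeration('ab')) would then produce "0: 'a'" and "1: 'b'" on separate lines, while plain containers keep their usual pformat output.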
-- Oleg Broytmann http://phd.pp.ru/ phd at phd.pp.ru Programmers don't die, they just GOSUB without RETURN. From theller at ctypes.org Thu Jan 29 17:27:35 2009 From: theller at ctypes.org (Thomas Heller) Date: Thu, 29 Jan 2009 17:27:35 +0100 Subject: [Python-Dev] Include C++ code in the ctypes test suite? Message-ID: I'm currently working on a patch that adds the __thiscall calling convention to ctypes. This calling convention is used on Windows for calling member functions from C++ classes. The idea is to eventually allow ctypes to wrap C++ classes. To test this functionality it is required to add some C++ source code to the ctypes private test module _ctypes_test.pyd/_ctypes_test.so. Is it appropriate to add C++ source files to the Python repository, or would that create too much trouble on some platforms? -- Thanks, Thomas From benjamin at python.org Thu Jan 29 17:30:21 2009 From: benjamin at python.org (Benjamin Peterson) Date: Thu, 29 Jan 2009 10:30:21 -0600 Subject: [Python-Dev] Include C++ code in the ctypes test suite? In-Reply-To: References: Message-ID: <1afaf6160901290830mb862666od7aa59d5d9941bec@mail.gmail.com> On Thu, Jan 29, 2009 at 10:27 AM, Thomas Heller wrote: > Is it appropriate to add C++ source files to the Python repository, > or would that create too much trouble on some platforms? I don't see a problem with that as long as platforms without C++ compilers aren't affected in the build process. 
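On the test-suite side, one sketch of how the Python-level tests could degrade gracefully when the C++ helper is not built (the module name _ctypes_cpp_test is invented here for illustration):

```python
import unittest

try:
    import _ctypes_cpp_test        # hypothetical C++-backed helper module
except ImportError:
    _ctypes_cpp_test = None

class ThiscallTests(unittest.TestCase):
    def test_member_function_call(self):
        if _ctypes_cpp_test is None:
            self.skipTest('C++ test extension was not built')
        # ...exercise the __thiscall wrappers against the helper here...
```

Run under unittest, the test simply reports as skipped on builds where the optional extension is missing, instead of failing.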
-- Regards, Benjamin From p.f.moore at gmail.com Thu Jan 29 17:39:53 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Thu, 29 Jan 2009 16:39:53 +0000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <20090129162317.GC27314@phd.pp.ru> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <20090129160617.GA21013@panix.com> <20090129162317.GC27314@phd.pp.ru> Message-ID: <79990c6b0901290839t7080e6b0h81206bd978d85cf0@mail.gmail.com> 2009/1/29 Oleg Broytmann : > On Thu, Jan 29, 2009 at 08:06:18AM -0800, Aahz wrote: >> On Thu, Jan 29, 2009, Michael Foord wrote: >> > Don't we have a pretty-print API - and isn't it spelled __str__ ? >> >> In theory, yes. In practice, we wouldn't be having this discussion if >> that really worked. But it probably would make sense to see how far >> using __str__ can take us -- AFAICT enumobject.c doesn't define __str__ >> (although I may be missing something, I don't know Python at the C level >> very well). > > Container objects (tuples/lists/dicts/sets) don't define __str__. > Is __pprint__ an attempt to redefine __str__? Anyone feel like raising the topic of generic functions again? :-) More practically, the undocumented simplegeneric decorator in pkgutil could be used: Python 2.6.1 (r261:67517, Dec 4 2008, 16:51:00) [MSC v.1500 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> from pkgutil import simplegeneric >>> @simplegeneric ... def f(obj): ... return "Object of type %s" % type(obj) ... >>> def str_f(s): ... return "String: " + s ... >>> f.register(str, str_f) >>> f("Test") 'String: Test' >>> f(1) "Object of type <type 'int'>" >>> To me, that seems better than inventing yet another special method. Paul. From lists at cheimes.de Thu Jan 29 17:46:04 2009 From: lists at cheimes.de (Christian Heimes) Date: Thu, 29 Jan 2009 17:46:04 +0100 Subject: [Python-Dev] Include C++ code in the ctypes test suite?
In-Reply-To: References: Message-ID: <4981DD4C.5040507@cheimes.de> Thomas Heller schrieb: > To test this functionality it is required to add some C++ source code to the > ctypes private test module _ctypes_test.pyd/_ctypes_test.so. > > Is it appropriate to add C++ source files to the Python repository, > or would that create too much trouble on some platforms? How about creating an additional test library ctypes_test_cpp.cpp? This way we can still run the ctypes tests on a platform without a C++ compiler. Of course you could add some preprocessor magic but that's just another way to ask for trouble. A second test library should make it easier. Christian From solipsis at pitrou.net Thu Jan 29 17:50:32 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 29 Jan 2009 16:50:32 +0000 (UTC) Subject: [Python-Dev] Include C++ code in the ctypes test suite? References: Message-ID: Thomas Heller <theller at ctypes.org> writes: > > To test this functionality it is required to add some C++ source code to the > ctypes private test module _ctypes_test.pyd/_ctypes_test.so. Perhaps you should create a separate test module (_ctypes_pp_test?) so that platforms without a properly configured C++ compiler can still run the other tests. (I also suppose configure can detect the presence of a C++ compiler...) Regards Antoine. From barry at python.org Thu Jan 29 17:59:22 2009 From: barry at python.org (Barry Warsaw) Date: Thu, 29 Jan 2009 11:59:22 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <20090129113130.GA2490@amk.local> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> Message-ID: <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 6:31 AM, A.M.
Kuchling wrote: > If we intend for 3.0 to be a 'beta release', or to weaken the 'no > features in micro releases' rule, then fine; but we have to be *really > clear about it*. Are we? (The 3.0 release page calls it > production-ready.) I think it sets bad precedence to downgrade our confidence in the release. Again, my position is that it's better to stick to the same development processes we've always used, fix the most egregious problems in 3.0.1 with no API changes, but spend most of our energy on a 3.1 release in 6 months. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYHga3EjvBPtnXfVAQLQxQP+Ipu3J0Ogvj0kW4txTgu8SJ4Hr6q7ll7i uyASnNQdB0WV3My1VsymMb5VlIWJtuvwY4DxYR1fqLHOQY6CloFqmmIkeMpZKt/K qXqNI1OvyLfoqg6QqXI+A4UFnUwlv7bSFHqZUu8wVn4De/kQqVfFUgjxBCoNe0lj 0au4xGdjjYo= =qOne -----END PGP SIGNATURE----- From alexander.belopolsky at gmail.com Thu Jan 29 17:59:39 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 29 Jan 2009 11:59:39 -0500 Subject: [Python-Dev] Include C++ code in the ctypes test suite? In-Reply-To: References: Message-ID: On Thu, Jan 29, 2009 at 11:50 AM, Antoine Pitrou wrote: .. > (I also suppose configure can detect the presence of a C++ compiler...) > This test is already there: $ ./configure ... checking for g++... g++ configure: WARNING: By default, distutils will build C++ extension modules with "g++". If this is not intended, then set CXX on the configure command line. ... From tjreedy at udel.edu Thu Jan 29 18:50:49 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 29 Jan 2009 12:50:49 -0500 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: Ben North wrote: > I find 'functools.partial' useful, but occasionally I'm unable to use it > because it lacks a 'from the right' version. ... 
> Would there be any interest in this? I think so. Post your patch to the tracker. Even if eventually rejected, it will be there for people to use. From solipsis at pitrou.net Thu Jan 29 18:58:44 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 29 Jan 2009 17:58:44 +0000 (UTC) Subject: [Python-Dev] Partial function application 'from the right' References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: Hello, Ben North <ben at redfrontdoor.org> writes: > > I find 'functools.partial' useful, but occasionally I'm unable to use it > because it lacks a 'from the right' version. E.g., to create a function > which splits a string on commas, you can't say > > # Won't work when called: > split_comma = partial(str.split, sep = ',') In py3k, we could also use "..." (the Ellipsis object) to denote places where an argument is missing, so that: split_comma = partial(str.split, ..., ',') would do what you want. Regards Antoine. From guido at python.org Thu Jan 29 19:13:33 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 29 Jan 2009 10:13:33 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: > On Jan 29, 2009, at 6:31 AM, A.M. Kuchling wrote: >> If we intend for 3.0 to be a 'beta release', or to weaken the 'no >> features in micro releases' rule, then fine; but we have to be *really >> clear about it*. Are we? (The 3.0 release page calls it >> production-ready.)
> Again, my position is that it's better to stick to the same development > processes we've always used, fix the most egregious problems in 3.0.1 with > no API changes, but spend most of our energy on a 3.1 release in 6 months. I'd like to find a middle ground. We can all agree that the users of 3.0 are a small minority compared to the 2.x users. Therefore I think we can bend the rules more than we have done for the recent 2.x releases. Those rules weren't always there (anyone remember the addition of bool, True and False to 2.2.1?). The rules were introduced for the benefit of our most conservative users -- people who introduce Python in an enterprise and don't want to find that they are forced to upgrade in six months. Frankly, I don't really believe the users for whom those rules were created are using 3.0 yet. Instead, I expect there to be two types of users: people in the educational business who don't have a lot of bridges to burn and are eager to use the new features; and developers of serious Python software (e.g. Twisted) who are trying to figure out how to port their code to 3.0. The first group isn't affected by the changes we're considering here (e.g. removing cmp or some obscure functions from the operator module). The latter group *may* be affected, simply because they may have some pre-3.0 code using old features that (by accident) still works under 3.0. On the one hand I understand that those folks want a stable target. On the other hand I think they would prefer to find out sooner rather than later they're using stuff they shouldn't be using any more. It's a delicate balance for sure, and I certainly don't want to open the floodgates here, or rebrand 3.1 as 3.0.1 or anything like that. But I really don't believe that the strictest interpretation of "no new features" will benefit us for 3.0.1. Perhaps we should decide when to go back to a more strict interpretation of the rules based on the uptake of Python 3 compared to Python 2. 
I don't believe that we risk influencing that uptake by bending the rules; the uptake will depend on the availability of ported 3rd party packages and some performance gains. (I don't know enough about the C reimplementation of io.py to tell whether it could be folded into 3.0 or will have to wait for 3.1.) Finally, to those who claim that 2.6 is a mess because multiprocessing wasn't perfectly stable at introduction: that's never been the standard we've used for totally *new* features. It's always been okay to add slightly immature features at a major release, as long as (a) they don't break anything else, and (b) we can fix things in the next release while maintaining backward compatibility. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From brett at python.org Thu Jan 29 19:28:59 2009 From: brett at python.org (Brett Cannon) Date: Thu, 29 Jan 2009 10:28:59 -0800 Subject: [Python-Dev] python breakpoint opcode In-Reply-To: <014f01c981fd$b7bb3c60$2731b520$@com> References: <014f01c981fd$b7bb3c60$2731b520$@com> Message-ID: On Thu, Jan 29, 2009 at 02:38, Dr Andrew Perella wrote: > Hi, > > I was thinking of adding a breakpoint opcode to python to enable less > invasive debugging. > > I came across posts from 1999 by Vladimir Marangozov and Christian Tismer > discussing this issue but the links to the code are all out of date. > > Did anything come of this? There is nothing currently in Python for this, but I was not around for the discussion back then. -Brett From steve at holdenweb.com Thu Jan 29 19:57:43 2009 From: steve at holdenweb.com (Steve Holden) Date: Thu, 29 Jan 2009 13:57:43 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: <4981FC27.3010208@holdenweb.com> Guido van Rossum wrote: [...] 
> > Finally, to those who claim that 2.6 is a mess because multiprocessing > wasn't perfectly stable at introduction: that's never been the > standard we've used for totally *new* features. It's always been okay > to add slightly immature features at a major release, as long as (a) > they don't break anything else, and (b) we can fix things in the next > release while maintaining backward compatibility. > There's a large distance between saying its introduction was ill-advised and that 2.6 is a mess. I certainly never intimated such a thing (I said it was "a rushed release"). Did anyone? Of course we can fix it. Of course 2.6 is great. regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/
From guido at python.org Thu Jan 29 20:17:04 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 29 Jan 2009 11:17:04 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <4981FC27.3010208@holdenweb.com> References: <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> <4981FC27.3010208@holdenweb.com> Message-ID: On Thu, Jan 29, 2009 at 10:57 AM, Steve Holden wrote: > Guido van Rossum wrote: > [...] >> >> Finally, to those who claim that 2.6 is a mess because multiprocessing >> wasn't perfectly stable at introduction: that's never been the >> standard we've used for totally *new* features. It's always been okay >> to add slightly immature features at a major release, as long as (a) >> they don't break anything else, and (b) we can fix things in the next >> release while maintaining backward compatibility. >> > There's a large distance between saying its introduction was ill-advised > and that 2.6 is a mess. I certainly never intimated such a thing (I said > it was "a rushed release"). Did anyone? I don't think that 2.6 as a whole counts as a rushed release, despite the inclusion of multiprocessing. And I don't think it was ill-advised either. > Of course we can fix it. Of course 2.6 is great.
-- --Guido van Rossum (home page: http://www.python.org/~guido/) From martin at v.loewis.de Thu Jan 29 20:26:17 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 29 Jan 2009 20:26:17 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: <498202D9.30300@v.loewis.de> > I think it sets bad precedence to downgrade our confidence in the > release. Again, my position is that it's better to stick to the same > development processes we've always used, fix the most egregious problems > in 3.0.1 with no API changes, but spend most of our energy on a 3.1 > release in 6 months. +1 Martin From martin at v.loewis.de Thu Jan 29 20:32:05 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Thu, 29 Jan 2009 20:32:05 +0100 Subject: [Python-Dev] Include C++ code in the ctypes test suite? In-Reply-To: References: Message-ID: <49820435.3040501@v.loewis.de> > Is it appropriate to add C++ source files to the Python repository, > or would that create too much trouble on some platforms? I think there will be massive portability problems, which only fade after one or two years, until this actually works everywhere. So failure of this to work shouldn't break the Python build, and, preferably, the build process should suggest to the user what might have happened when it failed.
Regards, Martin From Scott.Daniels at Acm.Org Thu Jan 29 21:00:15 2009 From: Scott.Daniels at Acm.Org (Scott David Daniels) Date: Thu, 29 Jan 2009 12:00:15 -0800 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: Antoine Pitrou wrote: > ... > In py3k, we could also use "..." (the Ellipsis object) to denote > places where an argument is missing, so that: > split_comma = partial(str.split, ..., ',') > would do what you want. Thus preventing any use of partial when an argument could be the Ellipsis instance. --Scott David Daniels Scott.Daniels at Acm.Org From solipsis at pitrou.net Thu Jan 29 21:04:03 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 29 Jan 2009 20:04:03 +0000 (UTC) Subject: [Python-Dev] Partial function application 'from the right' References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: Scott David Daniels Acm.Org> writes: > > Antoine Pitrou wrote: > > ... > > In py3k, we could also use "..." (the Ellipsis object) to denote > > places where an argument is missing, so that: > > split_comma = partial(str.split, ..., ',') > > would do what you want. > > Thus preventing any use of partial when an argument could be > the Ellipsis instance. Obviously, it is the drawback :) But Ellipsis is hardly used anywhere, and it reads good in this very use case. From Chris.Barker at noaa.gov Thu Jan 29 21:39:31 2009 From: Chris.Barker at noaa.gov (Christopher Barker) Date: Thu, 29 Jan 2009 12:39:31 -0800 Subject: [Python-Dev] Universal newlines, and the gzip module. Message-ID: <49821403.1030603@noaa.gov> Hi all, Over on the matplotlib mailing list, we ran into a problem with trying to use Universal newlines with gzip. In virtually all of my code that reads text files, I use the 'U' flag to open files, it really helps not having to deal with newline issues.
Yes, they are fewer now that the Macintosh uses \n, but they can still be a pain. Anyway, we added such support to some matplotlib methods, and found that gzip file reading broke. We were passing the flags through into either file() or gzip.open(), and passing 'U' into gzip.open() turns out to be fatal. 1) It would be nice if the gzip module (and the zip lib module) supported Universal newlines -- you could read a compressed text file with "wrong" newlines, and have them handled properly. However, that may be hard to do, so at least: 2) Passing a 'U' flag in to gzip.open shouldn't break it. I took a look at the Python SVN (2.5.4 and 2.6.1) for the gzip lib. I see this: # guarantee the file is opened in binary mode on platforms # that care about that sort of thing if mode and 'b' not in mode: mode += 'b' if fileobj is None: fileobj = self.myfileobj = __builtin__.open(filename, mode or 'rb') this is going to break for 'U' -- you'll get 'rUb'. I tested file(filename, 'rUb'), and it looks like it does universal newline translation. So: * Either gzip should be a bit smarter, and remove the 'U' flag (that's what we did in the MPL code), or force 'rb' or 'wb'. * Or: file opening should be a bit smarter -- what does 'rUb' mean? A file can't be both Binary and Universal Text. Should it raise an exception? Somehow I think it would be better to ignore the 'U', but maybe that's only because of the issue I happen to be looking at now. That latter seems a better idea -- this issue could certainly come up in other places than the gzip module, but maybe it would break a bunch of code -- who knows? I haven't touched py3 yet, so I have no idea if this issue is different there. -Chris -- Christopher Barker, Ph.D.
Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chris.Barker at noaa.gov From alexander.belopolsky at gmail.com Thu Jan 29 22:44:09 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Thu, 29 Jan 2009 16:44:09 -0500 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: This discussion probably belongs to python-ideas, but since we already have this thread, I'll reply here instead of opening a new thread there. Ellipsis was introduced into python to serve the needs of the numeric python community. If you think of numpy multiarrays as functions taking ndim number of arguments, then ellipsis is used to denote any number of missing arguments and : is used to denote a single missing argument. By this analogy, partial(f, ..., *args) is right_partial with '...' standing for any number of missing arguments. If you want to specify exactly one missing argument, you would want to write partial(f, :, *args), which is not a valid syntax even in Py3. If one is willing to use [] instead of () with partial, it is possible to implement partial[f, ..., *args] and partial[f, x, :, z] already in Py2, but I would rather see : allowed in the argument lists or some other short syntax for missing arguments. If such syntax is introduced, the need for partial may even go away with partial(str.split, :, ',') spelled simply as str.split(:, ','). On Thu, Jan 29, 2009 at 3:04 PM, Antoine Pitrou wrote: > Scott David Daniels Acm.Org> writes: >> >> Antoine Pitrou wrote: >> > ... >> > In py3k, we could also use "..." (the Ellipsis object) to denote >> > places where an argument is missing, so that: >> > split_comma = partial(str.split, ..., ',') >> > would do what you want.
>> >> Thus preventing any use of partial when an argument could be an >> the Ellipsis instance. > > Obviously, it is the drawback :) But Ellipsis is hardly used anywhere, and it > reads good in this very use case. > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: http://mail.python.org/mailman/options/python-dev/alexander.belopolsky%40gmail.com > From python at rcn.com Thu Jan 29 22:51:14 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 13:51:14 -0800 Subject: [Python-Dev] Python 3.0.1 References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org><20090129113130.GA2490@amk.local><7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: From: "Guido van Rossum" > On the one hand I understand that those folks want a stable target. On > the other hand I think they would prefer to find out sooner rather > than later they're using stuff they shouldn't be using any more. It's > a delicate balance for sure, and I certainly don't want to open the > floodgates here, or rebrand 3.1 as 3.0.1 or anything like that. But I > really don't believe that the strictest interpretation of "no new > features" will benefit us for 3.0.1. Perhaps we should decide when to > go back to a more strict interpretation of the rules based on the > uptake of Python 3 compared to Python 2. That seems like a smart choice to me. Make the fixups as early as possible, before there has been significant uptake. Am reminded of a cautionary tale from The Art of Unix Programming http://www.faqs.org/docs/artu/ch15s04.html#id2986550 : """ No discussion of make(1) would be complete without an acknowledgement that it includes one of the worst design botches in the history of Unix. 
The use of tab characters as a required leader for command lines associated with a production means that the interpretation of a makefile can change drastically on the basis of invisible differences in whitespace. "Why the tab in column 1? Yacc was new, Lex was brand new. I hadn't tried either, so I figured this would be a good excuse to learn. After getting myself snarled up with my first stab at Lex, I just did something simple with the pattern newline-tab. It worked, it stayed. And then a few weeks later I had a user population of about a dozen, most of them friends, and I didn't want to screw up my embedded base. The rest, sadly, is history." -- Stuart Feldman """ Raymond From fuzzyman at voidspace.org.uk Thu Jan 29 22:54:28 2009 From: fuzzyman at voidspace.org.uk (Michael Foord) Date: Thu, 29 Jan 2009 21:54:28 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org><20090129113130.GA2490@amk.local><7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: <49822594.2000900@voidspace.org.uk> Raymond Hettinger wrote: > From: "Guido van Rossum" >> On the one hand I understand that those folks want a stable target. On >> the other hand I think they would prefer to find out sooner rather >> than later they're using stuff they shouldn't be using any more. It's >> a delicate balance for sure, and I certainly don't want to open the >> floodgates here, or rebrand 3.1 as 3.0.1 or anything like that. But I >> really don't believe that the strictest interpretation of "no new >> features" will benefit us for 3.0.1. Perhaps we should decide when to >> go back to a more strict interpretation of the rules based on the >> uptake of Python 3 compared to Python 2. > > That seems like a smart choice to me. Make the fixups as early as > possible, > before there has been significant uptake. 
> > Am reminded of a cautionary tale from The Art of Unix Programming > http://www.faqs.org/docs/artu/ch15s04.html#id2986550 : > > """ > > No discussion of make(1) would be complete without an acknowledgement > that it includes one of the worst design botches in the history of > Unix. The use of tab characters as a required leader for command lines > associated with a production means that the interpretation of a > makefile can change drastically on the basis of invisible differences > in whitespace. > > > "Why the tab in column 1? Yacc was new, Lex was brand new. I hadn't > tried either, so I figured this would be a good excuse to learn. After > getting myself snarled up with my first stab at Lex, I just did > something simple with the pattern newline-tab. It worked, it stayed. > And then a few weeks later I had a user population of about a dozen, > most of them friends, and I didn't want to screw up my embedded base. > The rest, sadly, is history." -- Stuart Feldman > > """ > I suspect that the use of significant whitespace is too deeply ingrained in Python for us to change it now - even in Python 3. ;-) Michael > > Raymond > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/fuzzyman%40voidspace.org.uk > -- http://www.ironpythoninaction.com/ http://www.voidspace.org.uk/blog From ncoghlan at gmail.com Thu Jan 29 22:54:35 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 07:54:35 +1000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4981BBB7.50502@voidspace.org.uk> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> Message-ID: <4982259B.4070808@gmail.com> Michael Foord wrote: > Don't we have a pretty-print API - and isn't it spelled __str__ ? 
For the "reiterable" cases like dictionary views (where the object is not consumed), an appropriate __str__ or __repr__ should be written. Whether that is something as simple as ".items()" for an items view, or something more complicated that more directly shows the content of the view, I'm not sure. For the standard iterators like enumerate and reversed, I would suggest that they be modified to use a repr of the form: "reversed(<sequence>)" "enumerate(<iterable>)" "iter(<iterable>)" "iter(<callable>, <sentinel>)" While those obviously won't show how much of the iterable has been consumed, neither do the current representations. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From leif.walsh at gmail.com Thu Jan 29 22:58:25 2009 From: leif.walsh at gmail.com (Leif Walsh) Date: Thu, 29 Jan 2009 16:58:25 -0500 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: On Thu, Jan 29, 2009 at 9:12 AM, Ben North wrote: > I find 'functools.partial' useful, but occasionally I'm unable to use it > because it lacks a 'from the right' version. E.g., to create a function > which splits a string on commas, you can't say First of all, many functions like this are easy to handle yourself. Example: >>> def split_comma(s): ...     return str.split(s, ',') That said, it seems to me that if we're going to add to functools.partial, we should go all the way and allow keyword arguments (or a dict of them, if it's otherwise too hard to implement). Otherwise, in another few {days, weeks, months} we'll see another thread like this clamoring for a keyword-sensitive functools.partial. Come to think of it, I would imagine the next iteration would ask for a way to curry arbitrary positional arguments, and I can't come up with a simple and beautiful way to do that off the top of my head.
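On the keyword point, it is worth noting that functools.partial can already pre-bind keyword arguments whenever the underlying callable is a pure-Python function; a minimal sketch (the greet function here is invented purely for illustration):

```python
from functools import partial

def greet(greeting, name, punctuation='!'):
    # Invented example function: pure Python, so keyword binding works.
    return '%s, %s%s' % (greeting, name, punctuation)

# Bind the first positional argument and override a keyword default.
question = partial(greet, 'Hello', punctuation='?')
print(question('world'))  # Hello, world?
```

The limitation being discussed only bites when the target is implemented in C and rejects keyword arguments outright.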
Maybe this is an argument for keeping functools.partial the way it is and forcing developers to write their own currying functions. -- Cheers, Leif From solipsis at pitrou.net Thu Jan 29 23:04:41 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 29 Jan 2009 22:04:41 +0000 (UTC) Subject: [Python-Dev] Partial function application 'from the right' References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: Alexander Belopolsky gmail.com> writes: > > By this analogy, partial(f, ..., *args) is right_partial with '...' > standing for any number of missing arguments. I you want to specify > exactly one missing argument, you would want to write partial(f, :, > *args), which is not a valid syntax even in Py3. Yes, of course, but... the meaning which numpy attributes to Ellipsis does not have to be the same in other libraries. Otherwise this meaning would have been embedded in the interpreter itself, while it hasn't. The point of using Ellipsis in this case is not to be numpy-friendly, but rather to exploit the fact that it is a very rarely used object, and that it has an alternate spelling which suits very well (visually speaking) the purpose being discussed. Regards Antoine. From aahz at pythoncraft.com Thu Jan 29 23:09:51 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 29 Jan 2009 14:09:51 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <49809B0C.4020905@egenix.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> Message-ID: <20090129220951.GA17786@panix.com> On Wed, Jan 28, 2009, M.-A. Lemburg wrote: > > Why don't we just mark 3.0.x as experimental branch and keep updating/ > fixing things that were not sorted out for the 3.0.0 release ?! I > think that's a fair approach, given that the only way to get field > testing for new open-source software is to release early and often. 
> > A 3.1 release should then be the first stable release of the 3.x > series and mark the start of the usual deprecation mechanisms we have > in the 2.x series. Needless to say, that rushing 3.1 out now would > only cause yet another experimental release... major releases do take > time to stabilize. Speaking as the original author of PEP6 (Bug Fix Releases), this sounds like a reasonable middle ground. I certainly advocate that nobody consider Python 3.0 for production software, and enshrining that into the dev process should work well. At the same time, I think each individual change that doesn't clearly fall into the PEP6 process of being a bugfix needs to be vetted beyond what's permitted for not-yet-released versions. The problem is that the obvious candidate for doing the vetting is the Release Manager, and Barry doesn't like this approach. The vetting does need to be handled by a core committer IMO -- MAL, are you volunteering? Anyone else? Barry, are you actively opposed to marking 3.0.x as experimental, or do you just dislike it? (I.e. are you -1 or -0?) -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From ncoghlan at gmail.com Thu Jan 29 23:12:14 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 08:12:14 +1000 Subject: [Python-Dev] Merging to the 3.0 maintenance branch In-Reply-To: <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> Message-ID: <498229BE.4060408@gmail.com> Benjamin Peterson wrote: > On Wed, Jan 28, 2009 at 10:37 PM, brett. cannon > wrote: >> Author: brett.cannon >> Date: Thu Jan 29 05:37:06 2009 >> New Revision: 69093 >> >> Log: >> Backport r69092 by hand since svnmerge keeps saying there is a conflict on '.'. 
> Just do "svn resolved ." There are potential problems with doing it that way [1]. The safer option is to do: svn revert . svnmerge merge -M -F <revision> Perhaps we should add a "maintmerge" script (along with "maintmerge.bat" batch file) to the root development directory that automates this: #!/bin/sh svnmerge merge -r $1 svn revert . svnmerge merge -M -F $1 (Note that my shell scripting is a little rusty and I haven't actually executed that example...) Then the advice will just be to use svnmerge directly most of the time, and maintmerge when merging a revision that was itself created with svnmerge. Cheers, Nick. [1] How to clobber svnmerge's revision tracking 101: http://mail.python.org/pipermail/python-dev/2008-December/084644.html -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From ncoghlan at gmail.com Thu Jan 29 23:19:08 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 08:19:08 +1000 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: <49822B5C.1000405@gmail.com> Leif Walsh wrote: > That said, it seems to me that if we're going to add to > functools.partial, we should go all the way and allow keyword > arguments (or a dict of them, if it's otherwise too hard to > implement). Otherwise, in another few {days, weeks, months} we'll see > another thread like this clamoring for a keyword-sensitive > functools.partial. functools.partial *does* support keyword arguments - it's just that some functions and methods written in C (such as string methods) *don't*, so partial's keyword support doesn't help. A functools.rpartial would go some way towards addressing that.
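For concreteness, the rpartial being discussed can be sketched in a few lines of pure Python (the name and this implementation are illustrative only; nothing like it existed in functools at the time):

```python
import math

def rpartial(func, *bound):
    # Bind the given arguments to the right-hand end of the positional
    # argument list, mirroring what functools.partial does on the left.
    def wrapper(*args, **kwargs):
        return func(*(args + bound), **kwargs)
    return wrapper

# The two examples from the start of the thread:
split_comma = rpartial(str.split, ',')
print(split_comma('a,b,c'))   # ['a', 'b', 'c']

log_10 = rpartial(math.log, 10.0)
print(log_10(100.0))          # 2.0
```

A real functools version would presumably be implemented in C for speed, but the calling behaviour would match this sketch.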
Using the standalone Ellipsis to denote missing arguments would probably start to miss the whole point of functools.partial: the only reason for its existence is that it is *faster than the equivalent Python function*. If partial starts messing about looking for missing arguments and then slotting them in, then it is likely to slow down to the point where you would be better off skipping it and writing a dedicated function that adds the extra arguments. Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From solipsis at pitrou.net Thu Jan 29 23:30:27 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Thu, 29 Jan 2009 22:30:27 +0000 (UTC) Subject: [Python-Dev] Partial function application 'from the right' References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> <49822B5C.1000405@gmail.com> Message-ID: Nick Coghlan gmail.com> writes: > > If partial starts messing about looking for missing arguments and then > slotting them in, then it is likely to slow down to the point where you > would be better off skipping it and writing a dedicated function that > adds the extra arguments. Looking for missing arguments is very cheap, just raw pointer compares (Ellipsis is a singleton). In comparison, the cost of executing a dedicated Python function would be overwhelming. From steve at pearwood.info Thu Jan 29 23:42:15 2009 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 30 Jan 2009 09:42:15 +1100 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4981BBB7.50502@voidspace.org.uk> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> Message-ID: <498230C7.2040403@pearwood.info> Michael Foord wrote: > Don't we have a pretty-print API - and isn't it spelled __str__ ? Not really. If it were as simple as calling str(obj), there would be no need for the pprint module. 
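The difference is easy to see on any nested container (the data here is arbitrary):

```python
from pprint import pformat

data = {'labels': ['alpha', 'beta', 'gamma'],
        'matrix': [[1, 2], [3, 4]]}

print(str(data))                # everything on one long line
print(pformat(data, width=30))  # pprint wraps and indents to fit the width
```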
In any case, it seems that the pprint module actually calls repr() on objects other than dicts, tuples and lists. I'm concerned about the number of special methods exploding, but I've also come across times where I needed more than two string representations of an object. Sometimes I solved this by adding a pprint() method, other times I used other names, and it would be nice if there was a standard way of spelling it. So I'm +0.5 on Aahz's suggestion of __pprint__. In my ideal world, __pprint__ should allow (but not require) extra arguments, so that one can do something like the following: pprint(binarytree) # sensible defaults pprint(binarytree, order='preorder') -- Steven From brett at python.org Thu Jan 29 23:45:55 2009 From: brett at python.org (Brett Cannon) Date: Thu, 29 Jan 2009 14:45:55 -0800 Subject: [Python-Dev] Universal newlines, and the gzip module. In-Reply-To: <49821403.1030603@noaa.gov> References: <49821403.1030603@noaa.gov> Message-ID: On Thu, Jan 29, 2009 at 12:39, Christopher Barker wrote: > Hi all, > > Over on the matplotlib mailing list, we ran into a problem with trying to > use Universal newlines with gzip. In virtually all of my code that reads > text files, I use the 'U' flag to open files, it really helps not having to > deal with newline issues. Yes, they are fewer now that the Macintosh uses > \n, but they can still be a pain. > > Anyway, we added such support to some matplotlib methods, and found that > gzip file reading broken We were passing the flags though into either file() > or gzip.open(), and passing 'U' into gzip.open() turns out to be fatal. > > 1) It would be nice if the gzip module (and the zip lib module) supported > Universal newlines -- you could read a compressed text file with "wrong" > newlines, and have them handled properly. However, that may be hard to do, > so at least: > > 2) Passing a 'U' flag in to gzip.open shouldn't break it. > > I took a look at the Python SVN (2.5.4 and 2.6.1) for the gzip lib. 
I see > this: > > > # guarantee the file is opened in binary mode on platforms > # that care about that sort of thing > if mode and 'b' not in mode: > mode += 'b' > if fileobj is None: > fileobj = self.myfileobj = __builtin__.open(filename, mode or > 'rb') > > this is going to break for 'U' == you'll get 'rUb'. I tested file(filename, > 'rUb'), and it looks like it does universal newline translation. > > So: > > * Either gzip should be a bit smarter, and remove the 'U' flag (that's what > we did in the MPL code), or force 'rb' or 'wb'. > > * Or: file opening should be a bit smarter -- what does 'rUb' mean? a file > can't be both Binary and Universal Text. Should it raise an exception? > Somehow I think it would be better to ignore the 'U', but maybe that's only > because of the issue I happen to be looking at now. > > > That later seems a better idea -- this issue could certainly come up in > other places than the gzip module, but maybe it would break a bunch of code > -- who knows? I think it should be raising an exception as 'rUb' is an invalid value for the argument. -Brett From python at rcn.com Fri Jan 30 00:00:56 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 15:00:56 -0800 Subject: [Python-Dev] Broken Test -- test_distutils Message-ID: In the past couple of days, test_distutils started failing. 
It looks like a pure python error and may have been introduced by guilherme.polo's checkins: File "c:\py27\lib\distutils\tests\test_sdist.py", line 119, in test_make_distribution spawn('tar --help') File "c:\py27\lib\distutils\spawn.py", line 37, in spawn _spawn_nt(cmd, search_path, dry_run=dry_run) File "c:\py27\lib\distutils\spawn.py", line 70, in _spawn_nt cmd = _nt_quote_args(cmd) File "c:\py27\lib\distutils\spawn.py", line 61, in _nt_quote_args args[i] = '"%s"' % args[i] TypeError: 'str' object does not support item assignment 1 test failed: test_distutils Raymond From ziade.tarek at gmail.com Fri Jan 30 00:05:28 2009 From: ziade.tarek at gmail.com (=?ISO-8859-1?Q?Tarek_Ziad=E9?=) Date: Fri, 30 Jan 2009 00:05:28 +0100 Subject: [Python-Dev] Broken Test -- test_distutils In-Reply-To: References: Message-ID: <94bdd2610901291505w1e89bb8dv154499869d982287@mail.gmail.com> On Fri, Jan 30, 2009 at 12:00 AM, Raymond Hettinger wrote: > In the past couple of days, test_distutils started failing. It looks like a > pure python error and may have been introduced by guilherme.polo's checkins: > That's me. I'll fix this problem right now. > > File "c:\py27\lib\distutils\tests\test_sdist.py", line 119, in > test_make_distribution > spawn('tar --help') > File "c:\py27\lib\distutils\spawn.py", line 37, in spawn > _spawn_nt(cmd, search_path, dry_run=dry_run) > File "c:\py27\lib\distutils\spawn.py", line 70, in _spawn_nt > cmd = _nt_quote_args(cmd) > File "c:\py27\lib\distutils\spawn.py", line 61, in _nt_quote_args > args[i] = '"%s"' % args[i] > TypeError: 'str' object does not support item assignment > > 1 test failed: > test_distutils > > > > Raymond > > > > > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/ziade.tarek%40gmail.com > -- Tarek Ziadé
| Association AfPy | www.afpy.org Blog FR | http://programmation-python.org Blog EN | http://tarekziade.wordpress.com/ From ggpolo at gmail.com Fri Jan 30 00:07:45 2009 From: ggpolo at gmail.com (Guilherme Polo) Date: Thu, 29 Jan 2009 21:07:45 -0200 Subject: [Python-Dev] Broken Test -- test_distutils In-Reply-To: References: Message-ID: On Thu, Jan 29, 2009 at 9:00 PM, Raymond Hettinger wrote: > In the past couple of days, test_distutils started failing. It looks like a > pure python error and may have been introduced by guilherme.polo's checkins: > > > File "c:\py27\lib\distutils\tests\test_sdist.py", line 119, in > test_make_distribution > spawn('tar --help') > File "c:\py27\lib\distutils\spawn.py", line 37, in spawn > _spawn_nt(cmd, search_path, dry_run=dry_run) > File "c:\py27\lib\distutils\spawn.py", line 70, in _spawn_nt > cmd = _nt_quote_args(cmd) > File "c:\py27\lib\distutils\spawn.py", line 61, in _nt_quote_args > args[i] = '"%s"' % args[i] > TypeError: 'str' object does not support item assignment > > 1 test failed: > test_distutils > > How did my commits introduce that error? > > Raymond > -- -- Guilherme H. Polo Goncalves From robert.kern at gmail.com Fri Jan 30 00:13:41 2009 From: robert.kern at gmail.com (Robert Kern) Date: Thu, 29 Jan 2009 17:13:41 -0600 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <20090129142021.GA8996@panix.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> Message-ID: On 2009-01-29 08:20, Aahz wrote: > The reason I'm chiming in is that I would welcome a PEP that created a > __pprint__ method as an alternative to special-casing. I think that it > would be generically useful for user-created objects, plus once you've > added this feature other people can easily do some of the grunt work of > extending this through the Python core. (Actually, unless someone
(Actually, unless someone > objects, I don't think a PEP is required, but it would be good for the > usual reasons that PEPs are written, to provide a central place > documenting the addition.) I think it's worth looking at Armin Ronacher's pretty.py for a starting point. http://dev.pocoo.org/hg/sandbox/file/tip/pretty I've been using it as my default displayhook under IPython for a few weeks now. It uses a combination of a function registry and a __pretty__ special method to find the right pretty printer. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco From python at rcn.com Fri Jan 30 00:08:16 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 15:08:16 -0800 Subject: [Python-Dev] Broken Test -- test_distutils References: <94bdd2610901291505w1e89bb8dv154499869d982287@mail.gmail.com> Message-ID: <888A5B85FC644EF89EAC8A70BBDD9C1D@RaymondLaptop1> [Tarek Ziad?] > That's me. I'll fix this problem right now. Thanks. I appreciate it. Raymond From daniel at stutzbachenterprises.com Fri Jan 30 00:21:11 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Thu, 29 Jan 2009 17:21:11 -0600 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: On Thu, Jan 29, 2009 at 4:04 PM, Antoine Pitrou wrote: > Alexander Belopolsky gmail.com> writes: > > By this analogy, partial(f, ..., *args) is right_partial with '...' > > standing for any number of missing arguments. I you want to specify > > exactly one missing argument, you would want to write partial(f, :, > > *args), which is not a valid syntax even in Py3. > > Yes, of course, but... the meaning which numpy attributes to Ellipsis does > not > have to be the same in other libraries. 
Otherwise this meaning would have > been > embedded in the interpreter itself, while it hasn't. > The meaning which numpy attributes to Ellipsis is also the meaning that mathematical notation has attached to Ellipsis for a very long time. See: http://en.wikipedia.org/wiki/Ellipsis#In_mathematical_notation -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.klaas at gmail.com Fri Jan 30 00:24:38 2009 From: mike.klaas at gmail.com (Mike Klaas) Date: Thu, 29 Jan 2009 15:24:38 -0800 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> On 29-Jan-09, at 3:21 PM, Daniel Stutzbach wrote: > On Thu, Jan 29, 2009 at 4:04 PM, Antoine Pitrou > wrote: > Alexander Belopolsky gmail.com> writes: > > By this analogy, partial(f, ..., *args) is right_partial with '...' > > standing for any number of missing arguments. I you want to specify > > exactly one missing argument, you would want to write partial(f, :, > > *args), which is not a valid syntax even in Py3. > > Yes, of course, but... the meaning which numpy attributes to > Ellipsis does not > have to be the same in other libraries. Otherwise this meaning would > have been > embedded in the interpreter itself, while it hasn't. > > The meaning which numpy attributes to Ellipsis is also the meaning > that mathematical notation has attached to Ellipsis for a very long > time. And yet, python isn't confined to mathematical notation. *, ** are both overloaded for use in argument lists to no-one's peril, AFAICT. 
-Mike From python at rcn.com Fri Jan 30 00:27:03 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 15:27:03 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: [Aahz] > At the same time, I think each individual > change that doesn't clearly fall into the PEP6 process of being a bugfix > needs to be vetted beyond what's permitted for not-yet-released versions. To get the ball rolling, I have a candidate for discussion. Very late in the 3.0 process (after feature freeze), the bsddb code was ripped out (good riddance). This had the unfortunate side-effect of crippling shelves which now fall back to using dumbdbm. I'm somewhat working on an alternate dbm based on sqlite3: http://code.activestate.com/recipes/576638/ It is a pure python module and probably will not be used directly, but shelves will see an immediate benefit (especially for large shelves) in terms of speed and space. On the one hand, it is an API change or new feature because people can (if they choose) access the dbm directly. OTOH, it is basically a performance fix for shelves whose API won't change at all. The part that is visible and incompatible is that 3.0.1 shelves won't be readable by 3.0.0. > The problem is that the obvious candidate for doing the vetting is the > Release Manager, and Barry doesn't like this approach. The vetting does > need to be handled by a core committer IMO -- MAL, are you volunteering? > Anyone else? It should be someone who is using 3.0 regularly (ideally someone who is working on fixing it). IMO, people who aren't exercising it don't really have a feel for the problems or the cost/benefits of the fixes. > Barry, are you actively opposed to marking 3.0.x as experimental, or do > you just dislike it? (I.e. are you -1 or -0?) My preference is to *not* mark it as experimental. 
Instead, I prefer doing what it takes to make the 3.0.x series viable. Raymond From collinw at gmail.com Fri Jan 30 00:29:47 2009 From: collinw at gmail.com (Collin Winter) Date: Thu, 29 Jan 2009 15:29:47 -0800 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> Message-ID: <43aa6ff70901291529v3a014d3bo73b63229b8697cf1@mail.gmail.com> On Thu, Jan 29, 2009 at 6:12 AM, Ben North wrote: > Hi, > > I find 'functools.partial' useful, but occasionally I'm unable to use it > because it lacks a 'from the right' version. E.g., to create a function > which splits a string on commas, you can't say > > # Won't work when called: > split_comma = partial(str.split, sep = ',') [snip] > I've created a patch which adds a 'partial_right' function. The two > examples above: > > >>> import functools, math > > >>> split_comma = functools.partial_right(str.split, ',') > >>> split_comma('a,b,c') > ['a', 'b', 'c'] > > >>> log_10 = functools.partial_right(math.log, 10.0) > >>> log_10(100.0) > 2.0 Can you point to real code that this makes more readable? Collin From daniel at stutzbachenterprises.com Fri Jan 30 00:38:29 2009 From: daniel at stutzbachenterprises.com (Daniel Stutzbach) Date: Thu, 29 Jan 2009 17:38:29 -0600 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> Message-ID: On Thu, Jan 29, 2009 at 5:24 PM, Mike Klaas wrote: > And yet, python isn't confined to mathematical notation. *, ** are both > overloaded for use in argument lists to no-one's peril, AFAICT. 
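For concreteness, the behavior being proposed is easy to pin down in pure Python. The following is only a sketch of the semantics in Ben's examples, not the actual patch, and `partial_right` is a hypothetical name that does not exist in functools:

```python
import math

def partial_right(func, *bound_args, **bound_kwargs):
    """Bind *bound_args* to the right-hand end of the positional
    argument list (hypothetical helper; functools has no partial_right)."""
    def wrapper(*args, **kwargs):
        merged = dict(bound_kwargs)
        merged.update(kwargs)          # call-site keywords win
        # append the pre-bound arguments after the call-site ones
        return func(*(args + bound_args), **merged)
    return wrapper

split_comma = partial_right(str.split, ',')
print(split_comma('a,b,c'))   # ['a', 'b', 'c']

log_10 = partial_right(math.log, 10.0)
print(log_10(100.0))          # 2.0
```

Whatever the eventual spelling (a partial_right function, or an Ellipsis-style placeholder), the underlying mechanics are just "append the bound arguments to the incoming ones".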
> Certainly, but there is no danger of confusing them for multiplication in context, whereas: split_comma = partial(str.split, ..., ',') to me looks like "make ',' the last argument" rather than "make ',' the second argument". -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC From andrew-pythondev at puzzling.org Fri Jan 30 00:38:36 2009 From: andrew-pythondev at puzzling.org (Andrew Bennetts) Date: Fri, 30 Jan 2009 10:38:36 +1100 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> Message-ID: <20090129233836.GA9550@steerpike.home.puzzling.org> Mike Klaas wrote: > On 29-Jan-09, at 3:21 PM, Daniel Stutzbach wrote: [...] >> The meaning which numpy attributes to Ellipsis is also the meaning >> that mathematical notation has attached to Ellipsis for a very long >> time. > > And yet, python isn't confined to mathematical notation. *, ** are both > overloaded for use in argument lists to no-one's peril, AFAICT. With the crucial difference that * and ** are purely syntax, but Ellipsis is an object. -Andrew. From guido at python.org Fri Jan 30 00:40:41 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 29 Jan 2009 15:40:41 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: On Thu, Jan 29, 2009 at 3:27 PM, Raymond Hettinger wrote: > To get the ball rolling, I have a candidate for discussion. > > Very late in the 3.0 process (after feature freeze), the bsddb code was > ripped out (good riddance). This had the unfortunate side-effect of > crippling shelves which now fall back to using dumbdbm.
> > I'm somewhat working on an alternate dbm based on sqlite3: > http://code.activestate.com/recipes/576638/ > It is a pure python module and probably will not be used directly, but shelves > will see an immediate benefit (especially for large shelves) in terms of speed > and space. > > On the one hand, it is an API change or new feature because people can > (if they choose) access the dbm directly. OTOH, it is basically a > performance fix for shelves whose API won't change at all. The part that is visible > and incompatible is that 3.0.1 shelves won't be readable by 3.0.0. That is too much for 3.0.1. It could affect external file formats which strikes me as a bad idea. Sounds like a good candidate for 3.1, which we should be expecting in 4-6 months I hope. Also you could try to find shelve users (are there any?) and recommend they install this as a 3rd party package, with the expectation it'll be built into 3.1. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Fri Jan 30 01:43:37 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 16:43:37 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: >> On the one hand, it is an API change or new feature because people can >> (if they choose) access the dbm directly. OTOH, it is basically a >> performance fix for shelves whose API won't change at all. The part that is visible >> and incompatible is that 3.0.1 shelves won't be readable by 3.0.0. > > That is too much for 3.0.1. It could affect external file formats > which strikes me as a bad idea. We should have insisted that bsddb not be taken out until a replacement was put in. The process was broken with the RM insisting on feature freeze early in the game but letting tools like bsddb get ripped out near the end.
IMO, it was foolish to do one without the other. After the second alpha was out, there was resistance to any additions or to revisiting any of the early changes -- that was probably a mistake -- now we're deferring the fix for another 4-6 months and 3.0.x will never have it (at least right out of the box, as shipped). > Also you could try to find shelve users (are there > any?) I'm a big fan of shelves and have always used them extensively. Not sure if anyone else cares about them though. > recommend they install this as a 3rd party package, with the > expectation it'll be built into 3.1. Will do. That was my original plan since the day bsddb got ripped out. Raymond From python at rcn.com Fri Jan 30 01:58:59 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 16:58:59 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: A couple additional thoughts FWIW: * whichdb() selects from multiple file formats, so 3.0.1 would still be able to read 3.0.0 files. It is the 2.x shelves that won't be readable at all under any scenario. * If you're thinking that shelves have very few users and that 3.0.0 has had few adopters, doesn't that mitigate the effects of making a better format available in 3.0.1? Wouldn't this be the time to do it? * The file format itself is not new or unreadable by 3.0.0. It is just a plain sqlite3 file. What is new is the ability of shelves to call sqlite. To me, that is something a little different than changing a pickle protocol or somesuch.
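For anyone who wants the shape of the idea without reading the recipe, here is a stripped-down sketch. The class name and details below are hypothetical illustrations, not the recipe's actual code; the real module has to implement the full dbm interface, error handling, and transaction discipline:

```python
import sqlite3

class SQLiteShelfBackend:
    """Toy dbm-style mapping over a single sqlite3 table (sketch only)."""

    def __init__(self, filename):
        self._conn = sqlite3.connect(filename)
        self._conn.execute(
            'CREATE TABLE IF NOT EXISTS shelf (key BLOB UNIQUE, value BLOB)')

    def __getitem__(self, key):
        row = self._conn.execute(
            'SELECT value FROM shelf WHERE key = ?', (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def __setitem__(self, key, value):
        with self._conn:  # commit (or roll back) the statement
            self._conn.execute(
                'REPLACE INTO shelf (key, value) VALUES (?, ?)', (key, value))

    def __delitem__(self, key):
        with self._conn:
            cur = self._conn.execute('DELETE FROM shelf WHERE key = ?', (key,))
        if cur.rowcount == 0:
            raise KeyError(key)

    def __len__(self):
        return self._conn.execute('SELECT COUNT(*) FROM shelf').fetchone()[0]

    def keys(self):
        return [row[0] for row in self._conn.execute('SELECT key FROM shelf')]

    def close(self):
        self._conn.close()

db = SQLiteShelfBackend(':memory:')
db[b'key'] = b'pickled value would go here'
print(db[b'key'])   # b'pickled value would go here'
print(len(db))      # 1
```

A shelve layered on top of this would simply pickle each value into the value column, and since sqlite3 files carry their own "SQLite format 3" header, whichdb()-style sniffing can still tell the formats apart.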
Raymond From solipsis at pitrou.net Fri Jan 30 02:04:36 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Jan 2009 01:04:36 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: Raymond Hettinger rcn.com> writes: > > * If you're thinking that shelves have very few users and that > 3.0.0 has had few adopters, doesn't that mitigate the effects > of making a better format available in 3.0.1? Wouldn't this > be the time to do it? There was already another proposal for an sqlite-based dbm module, you may want to synchronize with it: http://bugs.python.org/issue3783 As I see it, the problem with introducing it in 3.0.1 is that we would be rushing in a new piece of code without much review or polish. Also, there are only two release blockers left for 3.0.1, so we might just finish those and release, then concentrate on 3.1. Regards Antoine. From python at rcn.com Fri Jan 30 02:15:54 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 17:15:54 -0800 Subject: [Python-Dev] pprint(iterator) References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> Message-ID: <6ACB41E9156B42DD87B138113FCBDFD7@RaymondLaptop1> > Along the lines of what others have said: pprint() cannot consume an > unknown iterator. Perhaps so. It's nice to have printing be free of side-effects (other than the actual printing). I've been working with 3.0 daily for several months (on a book project) and mostly think it's great. But sooner or later, we're going to have to address the issue about iterator reprs at the interactive prompt. This is a separate and more general issue than pprint(). My experience so far is that the shift to more things being unviewable at the prompt is a bit frustrating and makes the language more opaque.
If that has been a source of irritation to me, then it will likely be more acutely felt by people who are starting out and are using the interactive prompt to explore the language. I don't know the right answer here (perhaps an alternate sys.displayhook). Just wanted to provide some early feedback based on my experiences heavily exercising 3.0. Raymond P.S. My other experience with 3.0 is that my most frequent error has changed. It used to be that the number one reason for my getting a syntax error was leaving off a colon. Now, my number one reason is omitting parens in a print() function call. I thought I would get used to it quickly, but it still comes up several times a day. From solipsis at pitrou.net Fri Jan 30 02:19:54 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Jan 2009 01:19:54 +0000 (UTC) Subject: [Python-Dev] pprint(iterator) References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <6ACB41E9156B42DD87B138113FCBDFD7@RaymondLaptop1> Message-ID: Raymond Hettinger rcn.com> writes: > > P.S. My other experience with 3.0 is that my most frequent error has > changed. It used to be that the number one reason for my getting a syntax > error was leaving off a colon. Now, my number one reason is > omitting parens in a print() function call. I thought I would get used to > it quickly, but it still comes up several times a day. I find myself with the reverse problem. When I code with 2.x, I often put parens around the argument list of a print statement. From stephen at xemacs.org Fri Jan 30 02:34:28 2009 From: stephen at xemacs.org (Stephen J.
Turnbull) Date: Fri, 30 Jan 2009 10:34:28 +0900 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <873af1efvf.fsf@xemacs.org> Raymond Hettinger writes: > My preference is to *not* mark it as experimental. Don't take the word "experimental" too seriously. It's clearly an exaggeration given the current state of 3.0.x. What is meant is an explicit announcement that the stability rules chosen in response to the bool-True-False brouhaha will be relaxed for the 3.0.x series *only*. > Instead, I prefer doing what it takes to make the 3.0.x series viable. That's not an "instead", that's two independent choices. The point is that most of the people who are voicing concerns fear precisely that policy. I think that the important question is "can the 3.0.x series be made 'viable' in less than the time frame for 3.1?" If not, I really have to think it's DOA from the point of view of folks who consider 3.0.0 non-viable. I think that's what Barry and Martin are saying. Guido is saying something different. AIUI, he's saying that explicitly introducing controlled instability into 3.0.x of the form "this is what the extremely stable non-buggy inherited-from-3.0 core of 3.1 is going to look like" will be a great service to those who consider 3.0.0 non-viable. The key point is that new features in 3.1 are generally going to be considered less reliable than those inherited from 3.0, and thus a debugged 3.0, even if the implementations have been unstable, provides a way for the very demanding to determine what that set is, and to test how it behaves in their applications. I think it's worth a try, after consultation with some of the major developers who are the ostensible beneficiaries. 
But if tried, I think it's important to mark 3.0.x as "not yet stable" even if the instability is carefully defined and controlled. From martin at v.loewis.de Fri Jan 30 03:27:03 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 30 Jan 2009 03:27:03 +0100 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <498229BE.4060408@gmail.com> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> Message-ID: <49826577.4030808@v.loewis.de> > There are potential problems with doing it that way [1]. The safer > option is to do: > > svn revert . > svnmerge merge -M -F I still don't see the potential problem. If you do svnmerge, svn commit, all is fine, right? The problem *only* arises if you do svnmerge, svn up, svn commit - and clearly, you shouldn't do that. If, on commit, you get a conflict, you should revert all your changes, svn up, and start all over with the merge. Regards, Martin From aahz at pythoncraft.com Fri Jan 30 03:33:28 2009 From: aahz at pythoncraft.com (Aahz) Date: Thu, 29 Jan 2009 18:33:28 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <20090130023328.GA17511@panix.com> On Fri, Jan 30, 2009, Antoine Pitrou wrote: > Raymond Hettinger rcn.com> writes: >> >> * If you're thinking that shelves have very few users and that >> 3.0.0 has had few adopters, doesn't that mitigate the effects >> of making a better format available in 3.0.1? Wouldn't this >> be the time to do it? 
> There was already another proposal for an sqlite-based dbm module, you may > want to synchronize with it: > http://bugs.python.org/issue3783 > > As I see it, the problem with introducing it in 3.0.1 is that we would > be rushing in a new piece of code without much review or polish. Also, > there are only two release blockers left for 3.0.1, so we might just > finish those and release, then concentrate on 3.1. There's absolutely no reason not to have a 3.0.2 before 3.1 comes out. You're probably right that what Raymond wants to do is best not done for 3.0.1 -- but once we've agreed in principle that 3.0.x isn't a true production release of Python for PEP6 purposes, we can do "release early, release often". -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From martin at v.loewis.de Fri Jan 30 03:51:44 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 30 Jan 2009 03:51:44 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <873af1efvf.fsf@xemacs.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <873af1efvf.fsf@xemacs.org> Message-ID: <49826B40.3010708@v.loewis.de> > Don't take the word "experimental" too seriously. It's clearly an > exaggeration given the current state of 3.0.x. What is meant is an > explicit announcement that the stability rules chosen in response to > the bool-True-False brouhaha will be relaxed for the 3.0.x series > *only*. The name for that shouldn't be "experimental", though. I don't think it needs any name at all. It would be sufficient to report, in the release announcement, that some stuff got removed in an incompatible way. This is also different from bool-True-False, which was an addition, not a removal.
> I think that the important question is "can the 3.0.x series be made > 'viable' in less than the time frame for 3.1?" If not, I really have > to think it's DOA from the point of view of folks who consider 3.0.0 > non-viable. I think that's what Barry and Martin are saying. DOA == dead on arrival? I don't think Python 3.0 is dead. Instead, I think it is fairly buggy, but those bugs can be fixed. Removal of stuff is *not* a bug fix, of course. The *real* bugs in 3.0 are stuff like "IDLE doesn't work", "bdist_wininst doesn't work", etc. I personally can agree with removal of stuff (despite it not being a bug fix). However, more importantly, I want to support respective authority. If the release manager sets a policy on what is and what is not acceptable for a bug fix release, every committer should implement this policy (or at least not actively break it). With the removals in the code, I do think it is important to release 3.0.1 quickly, like, say, next week. > The key point is that new features in 3.1 are generally going to be > considered less reliable than those inherited from 3.0, and thus a > debugged 3.0, even if the implementations have been unstable, provides > a way for the very demanding to determine what that set is, and to > test how it behaves in their applications. That is fairly abstract. What specific bugs in Python 3.0 are you talking about? Regards, Martin From brett at python.org Fri Jan 30 03:52:00 2009 From: brett at python.org (Brett Cannon) Date: Thu, 29 Jan 2009 18:52:00 -0800 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <49826577.4030808@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> Message-ID: On Thu, Jan 29, 2009 at 18:27, "Martin v. Löwis" wrote: >> There are potential problems with doing it that way [1].
The safer >> option is to do: >> >> svn revert . >> svnmerge merge -M -F > > I still don't see the potential problem. If you do svnmerge, svn commit, > all is fine, right? The problem *only* arises if you do svnmerge, > svn up, svn commit - and clearly, you shouldn't do that. If, on commit, > you get a conflict, you should revert all your changes, svn up, and > start all over with the merge. I did do that and I still got conflicts. -Brett From martin at v.loewis.de Fri Jan 30 04:03:54 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 30 Jan 2009 04:03:54 +0100 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> Message-ID: <49826E1A.6080809@v.loewis.de> Brett Cannon wrote: > On Thu, Jan 29, 2009 at 18:27, "Martin v. Löwis" wrote: >>> There are potential problems with doing it that way [1]. The safer >>> option is to do: >>> >>> svn revert . >>> svnmerge merge -M -F >> I still don't see the potential problem. If you do svnmerge, svn commit, >> all is fine, right? The problem *only* arises if you do svnmerge, >> svn up, svn commit - and clearly, you shouldn't do that. If, on commit, >> you get a conflict, you should revert all your changes, svn up, and >> start all over with the merge. > > I did do that and I still got conflicts. What is "that"? "svn revert -R" (plus rm for all added files), "svn up", "svnmerge", "svn revert ."? What conflicts?
Martin From brett at python.org Fri Jan 30 04:06:13 2009 From: brett at python.org (Brett Cannon) Date: Thu, 29 Jan 2009 19:06:13 -0800 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <49826E1A.6080809@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> Message-ID: On Thu, Jan 29, 2009 at 19:03, "Martin v. Löwis" wrote: > Brett Cannon wrote: >> On Thu, Jan 29, 2009 at 18:27, "Martin v. Löwis" wrote: >>>> There are potential problems with doing it that way [1]. The safer >>>> option is to do: >>>> >>>> svn revert . >>>> svnmerge merge -M -F >>> I still don't see the potential problem. If you do svnmerge, svn commit, >>> all is fine, right? The problem *only* arises if you do svnmerge, >>> svn up, svn commit - and clearly, you shouldn't do that. If, on commit, >>> you get a conflict, you should revert all your changes, svn up, and >>> start all over with the merge. >> >> I did do that and I still got conflicts. > > What is "that"? "svn revert -R" (plus rm for all added files), > "svn up", "svnmerge", "svn revert ."? svn up svnmerge ... conflicts svn revert -R . svn up svnmerge ... same conflicts > > What conflicts? Some metadata on '.'. -Brett From rdmurray at bitdance.com Fri Jan 30 04:08:50 2009 From: rdmurray at bitdance.com (rdmurray at bitdance.com) Date: Thu, 29 Jan 2009 22:08:50 -0500 (EST) Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: On Thu, 29 Jan 2009 at 16:43, Raymond Hettinger wrote: >On Thu, 29 Jan 2009 at 15:40, Guido van Rossum wrote: >> Also you could try to find shelve users (are there >> any?) > > I'm a big fan of shelves and have always used them extensively.
> Not sure if anyone else cares about them though. I use them. Not in any released products at the moment, though, and I haven't migrated the shelve-using code to 3.0 yet. So I'd be in favor of adding sqlite3 support as soon as practical. --RDM From martin at v.loewis.de Fri Jan 30 04:09:17 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Fri, 30 Jan 2009 04:09:17 +0100 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> Message-ID: <49826F5D.7000009@v.loewis.de> > svn up > svnmerge > ... conflicts > svn revert -R . > svn up > svnmerge > ... same conflicts Ah. In the 3.0 branch, always do "svn revert ." after svnmerge. It's ok (Nick says it isn't exactly ok, but I don't understand why) Martin From rrr at ronadam.com Fri Jan 30 04:23:55 2009 From: rrr at ronadam.com (Ron Adam) Date: Thu, 29 Jan 2009 21:23:55 -0600 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <498230C7.2040403@pearwood.info> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> Message-ID: <498272CB.5040503@ronadam.com> Steven D'Aprano wrote: > Michael Foord wrote: > >> Don't we have a pretty-print API - and isn't it spelled __str__ ? > > Not really. If it were as simple as calling str(obj), there would be no > need for the pprint module. I agree. And when I want to use pprint, there are usually additional output formatting requirements I need that isn't a "one size fits all" type of problem. In any case, it seems that the pprint module > actually calls repr() on objects other than dicts, tuples and lists. 
> I'm concerned about the number of special methods exploding, but I've > also come across times where I needed more than two string > representations of an object. Sometimes I solved this by adding a > pprint() method, other times I used other names, and it would be nice if > there was a standard way of spelling it. So I'm +0.5 on Aahz's > suggestion of __pprint__. I'm -0.5 on adding __pprint__ for the above reasons. > In my ideal world, __pprint__ should allow (but not require) extra > arguments, so that one can do something like the following: > > pprint(binarytree) # sensible defaults > pprint(binarytree, order='preorder') It seems to me pprint is one of those functions where output format specifiers and keywords make sense because you are trying to fit the data output of a wide variety of types to a particular output need. It's not reasonably possible for each type to predict what that output need is. Some of the options that sound useful might be: abbreviated form short form long complete detail form tree form column align form right or left margins and alignment options Think of it as how 'dir' is used for examining the contents of a disk drive where different output styles are useful at different times. Looking at it this way, instead of a __pprint__ method, an optional __pprint_style__ attribute could specify a default output style that the pprint function would fall back to. Maybe for iterators, it's not the data produced but rather the current state of use that is more useful? For example for partially consumed iterators it might be useful to express how many items have been taken, and how many are left to take when that info is available. (?) The idea is that pretty printing is usually used to check the status or state of something. Or at least that is how I use it.
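Putting a few of the ideas from this thread together (a registry plus a special-method hook in the spirit of pretty.py, an alternate sys.displayhook, and state reporting for iterators), here is a sketch of how they might combine. Every name below, including __pretty__ and the registry, is hypothetical rather than stdlib:

```python
import builtins
import sys
from types import GeneratorType

_registry = {}  # type -> formatter (hypothetical, in the spirit of pretty.py)

def register(tp):
    """Register a formatter function for type *tp* via decoration."""
    def deco(fn):
        _registry[tp] = fn
        return fn
    return deco

def pretty(obj):
    # Resolution order: explicit registry entry, then a __pretty__ hook,
    # then plain repr() -- never consuming iterators along the way.
    for tp in type(obj).__mro__:
        if tp in _registry:
            return _registry[tp](obj)
    hook = getattr(obj, '__pretty__', None)
    if hook is not None:
        return hook()
    return repr(obj)

@register(GeneratorType)
def _pretty_generator(g):
    # Report state without taking any items from the generator.
    state = 'exhausted' if g.gi_frame is None else 'not exhausted'
    return '<generator: %s>' % state

class Tree:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)
    def __pretty__(self):
        return '%s(%d children)' % (self.label, len(self.children))

def displayhook(value):
    if value is None:
        return
    print(pretty(value))
    builtins._ = value

# sys.displayhook = displayhook  # opt in at the interactive prompt
print(pretty(Tree('root', [Tree('a'), Tree('b')])))  # root(2 children)
```

At the prompt, installing the hook is a single assignment, and a partially consumed iterator would then display its state instead of silently emptying itself.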
Ron From brett at python.org Fri Jan 30 04:59:39 2009 From: brett at python.org (Brett Cannon) Date: Thu, 29 Jan 2009 19:59:39 -0800 Subject: [Python-Dev] 3.0.1/3.1.0 summary Message-ID: This is my attempt to summarize what everyone has been saying so we can get this resolved. From what I can tell, most people like the idea of doing a 3.0.1 release ASAP (like "in a week or so" fast) with the stuff that should have been removed from 3.0.0 in the first place removed. People also seem to support doing a 3.1 release April/May where new stuff (e.g. io in C, new shelve back-end for sqlite3) is introduced to the rest of the world. This timeline has the benefit of allowing us to do an alpha release at PyCon and puts us at a six month release cycle which does not portray 3.0 or 3.1 as rushed releases. The sticky points I see are: 1. Barry, who is the release manager for 3.0.1, does not like the idea of the cruft that is being proposed removed from 3.0.1.
Personally I say we continue to peer pressure him as I think a new major release is not like our typical minor release, but I am not about to force Barry to go against what he thinks is reasonable unless I am willing to step up as release manager (and I am not since I simply don't have the time to learn the process fast enough along with just a lack of time with other Python stuff). 2. Do we label 3.0.x as experimental? I say no since it isn't experimental; we basically had some bugs slip through that happen to be compatibility problems that were overlooked. I for one never viewed 3.0.x as experimental, just not the best we could necessarily do without more input from the community and our own experience with 3.x in general. Let's see if we can get these two points squared away so we can get 3.0.1 in whatever state it is meant to be in out the door quickly. -Brett From stephen at xemacs.org Fri Jan 30 05:08:52 2009 From: stephen at xemacs.org (Stephen J. Turnbull) Date: Fri, 30 Jan 2009 13:08:52 +0900 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <49826B40.3010708@v.loewis.de> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <873af1efvf.fsf@xemacs.org> <49826B40.3010708@v.loewis.de> Message-ID: <87tz7hcu5n.fsf@xemacs.org> "Martin v. L?wis" writes: > > Don't take the word "experimental" too seriously. What is meant > > is an explicit announcement that the stability rules will be > > relaxed for the 3.0.x series *only*. > > The name for that shouldn't be "experimental", though. I don't think > it needs any name at all. That's what I meant. I'm sure that whoever wrote the word "experimental" in the first place regrets it, because it doesn't reflect what they meant. > > I think that the important question is "can the 3.0.x series be made > > 'viable' in less than the time frame for 3.1?" 
If not, I really have > > to think it's DOA from the point of view of folks who consider 3.0.0 > > non-viable. I think that's what Barry and Martin are saying. > > DOA == dead on arrival? I don't think Python 3.0 is dead. I'm sorry, DOA was poor word choice, especially in this context. I meant that people who currently consider 3.0 non-viable are more likely to focus on the branch that will become 3.1 unless a "viable" 3.0.x will arrive *very* quickly. > That is fairly abstract. What specific bugs in Python 3.0 are you > talking about? I'm not talking about specific bugs; I'm perfectly happy with 3.0 for my purposes, and I think it very unlikely that any of the possibly destabilizing changes that have been proposed for 3.0.1 will affect me adversely. Rather, I'm trying to disentangle some of the unfortunate word choices that have been made (and I apologize for making one of my own!), and find common ground so that a policy can be set more quickly. IMO it's likely that there's really no audience for a 3.0.x series that conforms to the rules used for 2.x from 2.2.1 or so on. That is, there are people who really don't care because 3.0 is already a better platform for their application whether there are minor changes or not, and there are people who do care about stability but they're not going to use 3.0.x whether it adheres to the previous rules strictly or not. There are very few who will use 3.0.x if and only if it adheres strictly.
From guido at python.org Fri Jan 30 05:11:02 2009 From: guido at python.org (Guido van Rossum) Date: Thu, 29 Jan 2009 20:11:02 -0800 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: On Thu, Jan 29, 2009 at 4:58 PM, Raymond Hettinger wrote: > A couple additional thoughts FWIW: > > * whichdb() selects from multiple file formats, so 3.0.1 would still > be able to read 3.0.0 files. It is the 2.x shelves that won't be > readable at all under any scenario. > > * If you're thinking that shelves have very few users and that > 3.0.0 has had few adopters, doesn't that mitigate the effects > of making a better format available in 3.0.1? Wouldn't this > be the time to do it? > > * The file format itself is not new or unreadable by 3.0.0. > It is just a plain sqlite3 file. What is new is the ability > of shelves to call sqlite. To me, that is something a little > different than changing a pickle protocol or somesuch. Sorry, not convinced. This is a change of a different scale than removing stuff that should've been removed. I understand you'd like to see your baby released. But I think it's better to have it tried and tested by others *outside* the core distro first. dbm is not broken in 3.0, just slow. Well so be it, io.py is too and that's a lot more serious. I also note that on some systems at least ndbm and/or gdbm are supported. -- --Guido van Rossum (home page: http://www.python.org/~guido/) From python at rcn.com Fri Jan 30 05:25:53 2009 From: python at rcn.com (Raymond Hettinger) Date: Thu, 29 Jan 2009 20:25:53 -0800 Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: [Guido van Rossum] > Sorry, not convinced. No worries.
Py3.1 is not far off. Just so I'm clear. Are you thinking that 3.0.x will never have fast shelves, or are you thinking 3.0.2 or 3.0.3 after some external deployment and battle-testing for the module? Raymond From tjreedy at udel.edu Fri Jan 30 05:27:14 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 29 Jan 2009 23:27:14 -0500 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <498272CB.5040503@ronadam.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> Message-ID: Ron Adam wrote: > > > Steven D'Aprano wrote: >> Michael Foord wrote: >> >>> Don't we have a pretty-print API - and isn't it spelled __str__ ? >> >> Not really. If it were as simple as calling str(obj), there would be >> no need for the pprint module. > > I agree. And when I want to use pprint, there are usually additional > output formatting requirements I need that isn't a "one size fits all" > type of problem. Like others, I am wary of over-expanding the list of special methods. Perhaps format strings could have a fourth conversion specifier, !p (pretty) in addition to !s, !r, and !a. From tjreedy at udel.edu Fri Jan 30 05:44:16 2009 From: tjreedy at udel.edu (Terry Reedy) Date: Thu, 29 Jan 2009 23:44:16 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: References: Message-ID: Brett Cannon wrote: > This is my attempt to summarize what everyone has been saying so we > can get this resolved. > > From what I can tell, most people like the idea of doing a 3.0.1 > release ASAP (like "in a week or so" fast) with the stuff that should > have been removed from 3.0.0 in the first place removed. > > People also seem to support doing a 3.1 release April/May where new > stuff (e.g. io in C, new shelve back-end for sqlite3) is introduced to > the rest of the world.
This timeline has the benefit of allowing us to > do an alpha release at PyCon and puts us at a six month release cycle > which does not portray 3.0 or 3.1 as rushed releases. > > The sticky points I see are: > > 1. Barry, who is the release manager for 3.0.1, does not like the idea > of the cruft that is being proposed removed from 3.0.1. Personally I > say we continue to peer pressure him as I think a new major release is > not like our typical minor release, but I am not about to force Barry > to go against what he thinks is reasonable unless I am willing to step > up as release manager (and I am not since I simply don't have the time > to learn the process fast enough along with just a lack of time with > other Python stuff). While I prefer cruft removal now, I will, for the same reason, accept and use whatever Barry delivers. > 2. Do we label 3.0.x as experimental? I say no since it isn't > experimental; we basically had some bugs slip through that happen to > be compatibility problems that were overlooked. I for one never viewed > 3.0.x as experimental, just not the best we could necessarily do > without more input from the community and our own experience with 3.x > in general. It is normal for true x.0 releases to be slightly flakey. Experienced users typically wait for x.1 (or SP1) releases for building production systems. I understand that 'normal' is below Python's usual high standards, but it is not a tragedy ;-). > Let's see if we can get these two points squared away so we can get > 3.0.1 in whatever state it is meant to be in out the door quickly.
+1 Terry From nnorwitz at gmail.com Fri Jan 30 06:12:48 2009 From: nnorwitz at gmail.com (Neal Norwitz) Date: Thu, 29 Jan 2009 21:12:48 -0800 Subject: [Python-Dev] python breakpoint opcode In-Reply-To: <014f01c981fd$b7bb3c60$2731b520$@com> References: <014f01c981fd$b7bb3c60$2731b520$@com> Message-ID: On Thu, Jan 29, 2009 at 2:38 AM, Dr Andrew Perella wrote: > Hi, > > I was thinking of adding a breakpoint opcode to python to enable less > invasive debugging. > > I came across posts from 1999 by Vladimir Marangozov and Christian Tismer > discussing this issue but the links to the code are all out of date. Can you provide the links? n From martin at v.loewis.de Fri Jan 30 06:53:57 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 30 Jan 2009 06:53:57 +0100 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: References: Message-ID: <498295F5.2050607@v.loewis.de> > 1. Barry, who is the release manager for 3.0.1, does not like the idea > of the cruft that is being proposed removed from 3.0.1. I don't think he actually said that (in fact, I think he said the opposite). It would be good if he clarified, though. Regards, Martin From martin at v.loewis.de Fri Jan 30 06:56:24 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Fri, 30 Jan 2009 06:56:24 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <49829688.3090200@v.loewis.de> > Just so I'm clear. Are you thinking that 3.0.x will never have > fast shelves As Guido said, shelves are *already* fast in 3.0, if you are using the right operating system. 
Regards, Martin From eric at trueblade.com Fri Jan 30 07:58:35 2009 From: eric at trueblade.com (Eric Smith) Date: Fri, 30 Jan 2009 01:58:35 -0500 Subject: [Python-Dev] pprint(iterator) In-Reply-To: References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> Message-ID: <4982A51B.5020801@trueblade.com> Terry Reedy wrote: > Ron Adam wrote: >> >> >> Steven D'Aprano wrote: >>> Michael Foord wrote: >>> >>>> Don't we have a pretty-print API - and isn't it spelled __str__ ? >>> >>> Not really. If it were as simple as calling str(obj), there would be >>> no need for the pprint module. >> >> I agree. And when I want to use pprint, there are usually additional >> output formatting requirements I need that isn't a "one size fits all" >> type of problem. I don't see how you can have a standard interface (like __pprint__), and have additional, per-object formatting parameters. But that's beside the point, I don't like __pprint__ in any event. Too special. > Like others, I am wary of over-expanding the list of special methods. > Perhap format strings could have a fourth conversion specifier, !p > (pretty) in addition to !s, !r, and !a. What would format() do with "!p"? With "!s", it calls str(o), with "!r", it calls repr(o). "!p" could call o.__pprint__(), but that's the special method you're trying to avoid! (I don't recall if I added "!a", and a machine that would know isn't available to me just now.) Eric. 
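[Editor's note: Eric's question about what format() would do with "!p" can be explored today without any new special method. The string.Formatter class already routes conversions through its convert_field() hook, so a "!p" that simply delegates to pprint.pformat() is easy to prototype. This is only an illustrative sketch: the PrettyFormatter name and the width choice are invented here, not part of any proposal in this thread.]

```python
# Sketch of a '!p' conversion built on string.Formatter.convert_field(),
# using pprint.pformat() to do the actual pretty-printing.
import pprint
import string

class PrettyFormatter(string.Formatter):
    """Formatter whose hypothetical '!p' conversion pretty-prints the value."""

    def convert_field(self, value, conversion):
        if conversion == 'p':
            # Hand the value to pprint instead of str()/repr(); a narrow
            # width forces wrapping so the effect is visible.
            return pprint.pformat(value, width=30)
        # Fall back to the stock conversions ('s', 'r', 'a', or none).
        return super().convert_field(value, conversion)

f = PrettyFormatter()
data = {'first': list(range(8)), 'second': list(range(8, 16))}
print(f.format("{0!p}", data))  # wraps across lines instead of one long repr
```

Note that the built-in str.format() would still reject "!p"; only an explicit Formatter instance gains the extra conversion. This also sidesteps the recursion question raised for containers, since pformat() already handles nested structures itself.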
From solipsis at pitrou.net Fri Jan 30 11:40:10 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Jan 2009 10:40:10 +0000 (UTC) Subject: [Python-Dev] Python 3.0.1 References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <20090130023328.GA17511@panix.com> Message-ID: Aahz pythoncraft.com> writes: > > There's absolutely no reason not to have a 3.0.2 before 3.1 comes out. > You're probably right that what Raymond wants to do is best not done for > 3.0.1 -- but once we've agreed in principle that 3.0.x isn't a true > production release of Python for PEP6 purposes, we can do "release early, > release often". It's a possibility. To be honest, I didn't envision us releasing a 3.0.2 rather than focusing on 3.1 (which, as others said, can be released in a few months if we keep the amount of changes under control). But then it's only a matter of naming. We can continue the 3.0.x series and incorporate in them whatever was initially planned for 3.1 (including the IO-in-C branch, the dbm.sqlite module, etc.), and release 3.1 only when the whole thing is "good enough". Regards Antoine. From steve at pearwood.info Fri Jan 30 12:04:40 2009 From: steve at pearwood.info (Steven D'Aprano) Date: Fri, 30 Jan 2009 22:04:40 +1100 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4982A51B.5020801@trueblade.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> <4982A51B.5020801@trueblade.com> Message-ID: <4982DEC8.90900@pearwood.info> Eric Smith wrote: > Terry Reedy wrote: >> Ron Adam wrote: >>> >>> >>> Steven D'Aprano wrote: >>>> Michael Foord wrote: >>>> >>>>> Don't we have a pretty-print API - and isn't it spelled __str__ ? >>>> >>>> Not really.
If it were as simple as calling str(obj), there would be >>>> no need for the pprint module. >>> >>> I agree. And when I want to use pprint, there are usually additional >>> output formatting requirements I need that isn't a "one size fits >>> all" type of problem. > > I don't see how you can have a standard interface (like __pprint__), and > have additional, per-object formatting parameters. I don't see how you can't. Other standard methods take variable arguments: __init__, __new__, __call__ come to mind. > But that's beside the > point, I don't like __pprint__ in any event. Too special. I'm not sure what you mean by "too special". It's no more special than any other special method. Do you mean the use-case is not common enough? I would find this useful. Whether enough people would find it useful enough to add yet another special method is an open question. -- Steven From ajp at eutechnyx.com Fri Jan 30 11:45:27 2009 From: ajp at eutechnyx.com (Dr Andrew Perella) Date: Fri, 30 Jan 2009 10:45:27 -0000 Subject: [Python-Dev] python breakpoint opcode In-Reply-To: References: <014f01c981fd$b7bb3c60$2731b520$@com> Message-ID: <004e01c982c7$dcf84250$96e8c6f0$@com> Hi Neal, The last post in the thread was: http://mail.python.org/pipermail/python-dev/1999-August/000793.html referencing a download at http://sirac.inrialpes.fr/~marangoz/python/lineno/ Cheers, Andrew This e-mail is confidential and may be privileged. It may be read, copied and used only by the intended recipient. No communication sent by e-mail to or from Eutechnyx is intended to give rise to contractual or other legal liability, apart from liability which cannot be excluded under English law. This email has been scanned for all known viruses by the Email Protection Agency. http://www.epagency.net www.eutechnyx.com Eutechnyx Limited. Registered in England No: 2172322 From mal at egenix.com Fri Jan 30 12:24:00 2009 From: mal at egenix.com (M.-A. 
Lemburg) Date: Fri, 30 Jan 2009 12:24:00 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <20090130023328.GA17511@panix.com> Message-ID: <4982E350.9050206@egenix.com> On 2009-01-30 11:40, Antoine Pitrou wrote: > Aahz pythoncraft.com> writes: >> There's absolutely no reason not to have a 3.0.2 before 3.1 comes out. >> You're probably right that what Raymond wants to is best not done for >> 3.0.1 -- but once we've agreed in principle that 3.0.x isn't a true >> production release of Python for PEP6 purposes, we can do "release early, >> release often". > > It's a possibility. To be honest, I didn't envision us releasing a 3.0.2 rather > than focusing on 3.1 (which, as others said, can be released in a few months if > we keep the amount of changes under control). > > But then it's only a matter of naming. We can continue the 3.0.x series and > incorporate in them whatever was initially planned for 3.1 (including the > IO-in-C branch, the dbm.sqlite module, etc.), and release 3.1 only when the > whole thing is "good enough". That would be my preference. -- Marc-Andre Lemburg eGenix.com Professional Python Services directly from the Source (#1, Jan 30 2009) >>> Python/Zope Consulting and Support ... http://www.egenix.com/ >>> mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ >>> mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ ________________________________________________________________________ ::: Try our new mxODBC.Connect Python Database Interface for free ! :::: eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 D-40764 Langenfeld, Germany. CEO Dipl.-Math. 
Marc-Andre Lemburg Registered at Amtsgericht Duesseldorf: HRB 46611 http://www.egenix.com/company/contact/ From steve at holdenweb.com Fri Jan 30 13:03:03 2009 From: steve at holdenweb.com (Steve Holden) Date: Fri, 30 Jan 2009 07:03:03 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: Antoine Pitrou wrote: > Raymond Hettinger rcn.com> writes: >> * If you're thinking that shelves have very few users and that >> 3.0.0 has had few adopters, doesn't that mitigate the effects >> of making a better format available in 3.0.1? Wouldn't this >> be the time to do it? > > There was already another proposal for an sqlite-based dbm module, you may > want to synchronize with it: > http://bugs.python.org/issue3783 > > As I see it, the problem with introducing it in 3.0.1 is that we would be > rushing in a new piece of code without much review or polish. Again > Also, there are > only two release blockers left for 3.0.1, so we might just finish those and > release, then concentrate on 3.1. > Seems to me that every deviation from the policy introduced as a result for the True/False debacle leads to complications and problems. There's no point having a policy instigated for good reasons if we can ignore those reasons on a whim. So to my mind, ignoring the policy *is* effectively declaring 3.0 to be, well, if not a dead parrot then at least a rushed release. Most consistently missing from this picture has been effective communications (in both directions) with the user base. Consequently nobody knows whether specific features are in serious use, and nobody knows whether 3.0 is intended to be a stable base for production software or not. Ignoring users, and acting as though we know what they are doing and what they want, is not going to lead to better acceptance of future releases. 
regards Steve -- Steve Holden +1 571 484 6266 +1 800 494 3119 Holden Web LLC http://www.holdenweb.com/ From p.f.moore at gmail.com Fri Jan 30 13:02:24 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Jan 2009 12:02:24 +0000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4982DEC8.90900@pearwood.info> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> <4982A51B.5020801@trueblade.com> <4982DEC8.90900@pearwood.info> Message-ID: <79990c6b0901300402i5d93aa3br57a2bbf5ad57a52b@mail.gmail.com> 2009/1/30 Steven D'Aprano : >> But that's beside the >> >> point, I don't like __pprint__ in any event. Too special. > > I'm not sure what you mean by "too special". It's no more special than any > other special method. Do you mean the use-case is not common enough? I would > find this useful. Whether enough people would find it useful enough to add > yet another special method is an open question. In my view, the issue is that as a special method, *either* it has to be included on all core types (too intrusive for something as non-critical as pprint) *or* pprint has to hard-code the behaviour for core types and still fall back to the special method for non-core types (ugly and a maintenance problem keeping the type tests up to date). Some sort of registry of type-specific implementation functions (whether you call it a generic function or just put together a custom implementation for pprint alone) is more flexible, and less intrusive. It also allows end users to customise the behaviour, even for core types. In all honesty, I think pkgutil.simplegeneric should be documented, exposed, and moved to a library of its own[1]. It's precisely what is needed for this type of situation, which does come up fairly often. 
I don't think ABCs do what's needed here (although maybe I'm missing something - if so, I'd be interested in knowing what). I'd be willing to look at creating a patch, if the consensus was that this was an appropriate approach and there was a reasonable chance of it being accepted (assuming my code wasn't rubbish :-)) Paul. [1] Note - I have no opinion on the quality of the code, I haven't reviewed it, I am assuming it's OK on the basis that it has been present and in use internally in the pkgutil module for some time now. From ncoghlan at gmail.com Fri Jan 30 13:21:02 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 22:21:02 +1000 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <49826F5D.7000009@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> Message-ID: <4982F0AE.20308@gmail.com> Martin v. Löwis wrote: >> svn up >> svnmerge >> ... conflicts >> svn revert -R . >> svn up >> svnmerge >> ... same conflicts > > Ah. In the 3.0 branch, always do "svn revert ." after svnmerge. > It's ok (Nick says it isn't exactly ok, but I don't understand why) Doing "svn revert ." before making the commit will lose the metadata changes that svnmerge uses for its bookkeeping (i.e. if this practice is used regularly, the tool will completely lose track of which revisions have already been merged). That won't bother those of us that are only backporting cherry-picked revisions, but is rather inconvenient for anyone checking for revisions that haven't been backported yet, but haven't been explicitly blocked either. Doing "svn resolved ."
assumes that you did everything else correctly, and even then I don't see how svnmerge could both backport the py3k changes to the metadata and make its own changes and still get the metadata to a sane state. The consequence of getting this approach wrong is that the merge state of the 3.0 maintenance branch can be clobbered completely (losing track both of which revisions have been backported and which have been blocked). Doing both "svn revert ." and "svnmerge merge -M -F " clears out the conflicted metadata and then correctly updates the metadata for the revisions that have been backported. It will always update the svnmerge metadata correctly, regardless of the relative order of the svnmerge and svn update operations. Given the choice of a method which will always do the right thing, over one which always does the wrong thing and another one which only does the right thing if I did two other things in the right order and will completely trash the bookkeeping if I get it wrong, I prefer the option which is guaranteed to be correct (even if it happens to be a little slower as svnmerge recreates the needed metadata updates). If there's something wrong with my understanding of either svn properties or the operation of svnmerge that means the quicker approaches aren't as broken as I think they are, then I'd be happy to adopt one of them (since they *are* faster than my current approach). But until someone pokes a hole in my logic, I'll stick with the slower-but-always-correct methodology (and continue advocating that approach to everyone else doing updates that affect all four branches). Cheers, Nick. 
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From phil at riverbankcomputing.com Fri Jan 30 13:21:39 2009 From: phil at riverbankcomputing.com (Phil Thompson) Date: Fri, 30 Jan 2009 12:21:39 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <8044d27e31e3f09373f63f87d77bedd5@localhost> On Fri, 30 Jan 2009 07:03:03 -0500, Steve Holden wrote: > Antoine Pitrou wrote: >> Raymond Hettinger rcn.com> writes: >>> * If you're thinking that shelves have very few users and that >>> 3.0.0 has had few adopters, doesn't that mitigate the effects >>> of making a better format available in 3.0.1? Wouldn't this >>> be the time to do it? >> >> There was already another proposal for an sqlite-based dbm module, you >> may >> want to synchronize with it: >> http://bugs.python.org/issue3783 >> >> As I see it, the problem with introducing it in 3.0.1 is that we would be >> rushing in a new piece of code without much review or polish. > > Again > >> Also, there are >> only two release blockers left for 3.0.1, so we might just finish those >> and >> release, then concentrate on 3.1. >> > Seems to me that every deviation from the policy introduced as a result > for the True/False debacle leads to complications and problems. There's > no point having a policy instigated for good reasons if we can ignore > those reasons on a whim. > > So to my mind, ignoring the policy *is* effectively declaring 3.0 to be, > well, if not a dead parrot then at least a rushed release. > > Most consistently missing from this picture has been effective > communications (in both directions) with the user base. 
Consequently > nobody knows whether specific features are in serious use, and nobody > knows whether 3.0 is intended to be a stable base for production > software or not. Ignoring users, and acting as though we know what they > are doing and what they want, is not going to lead to better acceptance > of future releases. My 2 cents as a user... I wouldn't consider v3.0.n (where n is small) for use in production. v3.1 however implies (to me at least) a level of quality where I would be disappointed if it wasn't production ready. Therefore I would suggest the main purpose of any v3.0.1 release is to make sure that v3.1 is up to scratch. Phil From eric at trueblade.com Fri Jan 30 13:33:19 2009 From: eric at trueblade.com (Eric Smith) Date: Fri, 30 Jan 2009 07:33:19 -0500 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <4982DEC8.90900@pearwood.info> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> <4982A51B.5020801@trueblade.com> <4982DEC8.90900@pearwood.info> Message-ID: <4982F38F.8030908@trueblade.com> Steven D'Aprano wrote: > Eric Smith wrote: >> Terry Reedy wrote: >>> Ron Adam wrote: >>>> >>>> >>>> Steven D'Aprano wrote: >>>>> Michael Foord wrote: >>>>> >>>>>> Don't we have a pretty-print API - and isn't it spelled __str__ ? >>>>> >>>>> Not really. If it were as simple as calling str(obj), there would >>>>> be no need for the pprint module. >>>> >>>> I agree. And when I want to use pprint, there are usually >>>> additional output formatting requirements I need that isn't a "one >>>> size fits all" type of problem. >> >> I don't see how you can have a standard interface (like __pprint__), >> and have additional, per-object formatting parameters. > > I don't see how you can't. Other standard methods take variable > arguments: __init__, __new__, __call__ come to mind. 
Those are different, since they're called on known specific objects. Having params to a generic __pprint__ method would be more like having params to __str__ or __repr__. If you know enough about the object to know which parameters to pass to its pretty-print function, then just call a normal method on the object to do the pprint'ing. But, for example, assuming pprint for a list is recursive (as it is for repr), how would you pass the arguments around? > > But that's beside the >> point, I don't like __pprint__ in any event. Too special. > > I'm not sure what you mean by "too special". It's no more special than > any other special method. Do you mean the use-case is not common enough? > I would find this useful. Whether enough people would find it useful > enough to add yet another special method is an open question. Bad choice of words on my part. I meant "too special case" for such machinery. That is, the use case isn't common enough. From ncoghlan at gmail.com Fri Jan 30 13:38:13 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 22:38:13 +1000 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <49826577.4030808@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> Message-ID: <4982F4B5.7090001@gmail.com> Martin v. Löwis wrote: >> There are potential problems with doing it that way [1]. The safer >> option is to do: >> >> svn revert . >> svnmerge merge -M -F > > I still don't see the potential problem. If you do svnmerge, svn commit, > all is fine, right? Sort of.
svnmerge still gets confused by the fact that the revision being backported already has changes to the svnmerge metadata, so you have to either revert it (which is always wrong), or flag it as resolved (I believe that svnmerge actually does get that case right, but I haven't checked it extensively - since if it does get it right, I don't understand why it leaves the conflict in place instead of automatically marking it as resolved). Regardless, the consequences of forgetting that you did the svn up after the merge instead of before (e.g. if it took some time to get the backported version working, or if something interrupted you between the initial backport/update and the final test and commit step) are fairly hard to clean up, so I prefer the safe approach (despite the extra minute or two it takes for svnmerge to recalculate the metadata changes). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From solipsis at pitrou.net Fri Jan 30 13:38:15 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Fri, 30 Jan 2009 12:38:15 +0000 (UTC) Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> <4982F0AE.20308@gmail.com> Message-ID: Nick Coghlan gmail.com> writes: > > Doing "svn resolved ." assumes that you did everything else correctly, > and even then I don't see how svnmerge could both backport the py3k > changes to the metadata and make its own changes and still get the > metadata to a sane state. The metadata are discriminated by source merge URL. That is, the py3k metadata are of the form "/python/trunk:" while the release30-maint metadata are of the form "/python/branches/py3k:". 
(*) I guess that's what allows svn to not shoot itself in the foot when merging. I did "svn resolved ." again yesterday and it doesn't seem to have borked anything. (*) (try "svn propget svnmerge-integrated" or "svn propget svnmerge-blocked") Regards Antoine. From ncoghlan at gmail.com Fri Jan 30 14:02:19 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Fri, 30 Jan 2009 23:02:19 +1000 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> <4982F0AE.20308@gmail.com> Message-ID: <4982FA5B.7000507@gmail.com> Antoine Pitrou wrote: > Nick Coghlan gmail.com> writes: >> Doing "svn resolved ." assumes that you did everything else correctly, >> and even then I don't see how svnmerge could both backport the py3k >> changes to the metadata and make its own changes and still get the >> metadata to a sane state. > > The metadata are discriminated by source merge URL. That is, the py3k metadata > are of the form "/python/trunk:" while the release30-maint > metadata are of the form "/python/branches/py3k:". (*) > I guess that's what allows svn to not shoot itself in the foot when merging. Ah, thanks - that's the piece I was missing regarding why the svn resolved trick works (I have used that approach before and checked it as you did - as Martin has pointed out, the only time it definitely goes wrong is if you do an update *after* doing the local merge and the update included other backports). 
So I'll chalk the fact that svnmerge emits that false alarm up to a deficiency in the tool and only use the "regenerate the metadata" approach when I suspect I may have done the merge+update in the wrong order (since it's a harmless thing to do - it just wastes a couple of minutes relative to the svn resolved approach). Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From skip at pobox.com Fri Jan 30 14:04:27 2009 From: skip at pobox.com (skip at pobox.com) Date: Fri, 30 Jan 2009 07:04:27 -0600 Subject: [Python-Dev] Universal newlines, and the gzip module. In-Reply-To: <49821403.1030603@noaa.gov> References: <49821403.1030603@noaa.gov> Message-ID: <18818.64219.137713.112225@montanaro.dyndns.org> Christopher> 1) It would be nice if the gzip module (and the zip lib Christopher> module) supported Universal newlines -- you could read a Christopher> compressed text file with "wrong" newlines, and have Christopher> them handled properly. However, that may be hard to do, Christopher> so at least: Christopher> 2) Passing a 'U' flag in to gzip.open shouldn't break it. I agree with Brett that 'U' is meaningless on the compressed file itself. You want it applied to the contents of the compressed file though, is that right? That makes sense to me. It probably belongs in a separate argument though. Skip From p.f.moore at gmail.com Fri Jan 30 14:28:15 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Jan 2009 13:28:15 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> 2009/1/30 Steve Holden : > Most consistently missing from this picture has been effective > communications (in both directions) with the user base. 
Serious question: does anybody know how to get better communication from the user base? My impression is that it's pretty hard to find out who is actually using 3.0, and get any feedback from them. I suppose a general query on clp might get some feedback, but otherwise, what? I've not seen any significant amount of blog activity on 3.0. As a small contribution, my position is as follows: I use Python mostly for one-off scripts, both at home and at work. I also use Python for a suite of database monitoring tools, as well as using some applications written in Python (Mercurial and MoinMoin, in particular). Ignore the applications, they aren't moving to 3.0 in the short term (based on comments from the application teams). For my own use, the key modules I need are cx_Oracle and pywin32. cx_Oracle was available for 3.0 very quickly (and apparently the port wasn't too hard, which is good feedback!). pywin32 is just now available in preview form. My production box is still using 2.5, and I will probably migrate to 2.6 in due course - but I'll probably leave 3.0 for the foreseeable future (I may rethink if MoinMoin becomes available on 3.0 sooner rather than later). For my desktop PC, I'm using 2.6 but as I do a fair bit of experimenting with modules, I'm taking it slowly (I'd like to see 2.6 binaries for a few more packages, really). I have 3.0 installed, but not as default, so frankly it doesn't get used unless I'm deliberately trying it out. Based on the recent threads, I'm thinking I really should make 3.0 the default just to get a better feel for it. The io-in-C changes would probably help push me to doing so (performance isn't really an issue for me, but I find I'm irrationally swayed by the "3.0 io is slow, but it's getting fixed soon by the io-in-C rewrite" messages I've been seeing - I have no idea if that's a general impression, or just a result of my following python-dev, though). 
It would make no difference to me, personally, whether *any* of the changes being discussed were released in 3.0.1 or 3.1 (except insofar as I'd like to see them sooner rather than later). So, in summary, for practical purposes I use 2.6. I probably could use 3.0 for a significant proportion of my needs, but the impressions I've been getting make me cautious. I'm using Windows, and although I *can* build a lot of stuff myself, I really don't want to be bothered, so I rely on bdist_wininst installers being available, which is an additional constraint. Paul. From p.f.moore at gmail.com Fri Jan 30 15:51:12 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Fri, 30 Jan 2009 14:51:12 +0000 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <49830FB1.3060306@livinglogic.de> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> <4982A51B.5020801@trueblade.com> <4982DEC8.90900@pearwood.info> <79990c6b0901300402i5d93aa3br57a2bbf5ad57a52b@mail.gmail.com> <49830FB1.3060306@livinglogic.de> Message-ID: <79990c6b0901300651m4e27bdd9i7c7b2484ab0353bd@mail.gmail.com> 2009/1/30 Walter Dörwald : > Paul Moore wrote: > >> [...] >> In all honesty, I think pkgutil.simplegeneric should be documented, >> exposed, and moved to a library of its own[1]. > > http://pypi.python.org/pypi/simplegeneric Thanks, I was aware of that. I assume that the barrier to getting this into the stdlib will be higher than to simply exposing an implementation already available in the stdlib. To be honest, all I would like is for these regular "let's have another special method" discussions to become unnecessary... Paul.
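For readers following the thread: the generic-function mechanism under discussion can be sketched in a dozen lines. The helper below is an illustrative single-dispatch implementation in the spirit of pkgutil.simplegeneric; the names and exact behaviour are a sketch, not the stdlib code.

```python
# A minimal single-dispatch "generic function" in the spirit of
# pkgutil.simplegeneric (illustrative names, not the stdlib API).
def simple_generic(default):
    registry = {}

    def dispatcher(obj, *args, **kwargs):
        # Walk the MRO so subclasses fall back to their base class's handler.
        for cls in type(obj).__mro__:
            if cls in registry:
                return registry[cls](obj, *args, **kwargs)
        return default(obj, *args, **kwargs)

    def register(cls):
        def decorator(func):
            registry[cls] = func
            return func
        return decorator

    dispatcher.register = register
    return dispatcher


@simple_generic
def pformat(obj):
    return repr(obj)


@pformat.register(dict)
def _pformat_dict(obj):
    return "{...%d items...}" % len(obj)
```

Third-party types could then hook pformat via pformat.register without any new special method, which is the shape of the mechanism these recurring "let's have another special method" threads keep reinventing.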
From walter at livinglogic.de Fri Jan 30 15:33:21 2009 From: walter at livinglogic.de (=?ISO-8859-1?Q?Walter_D=F6rwald?=) Date: Fri, 30 Jan 2009 15:33:21 +0100 Subject: [Python-Dev] pprint(iterator) In-Reply-To: <79990c6b0901300402i5d93aa3br57a2bbf5ad57a52b@mail.gmail.com> References: <306DE7912FCF48AD9E1F77481B3617BC@RaymondLaptop1> <20090129142021.GA8996@panix.com> <4981BBB7.50502@voidspace.org.uk> <498230C7.2040403@pearwood.info> <498272CB.5040503@ronadam.com> <4982A51B.5020801@trueblade.com> <4982DEC8.90900@pearwood.info> <79990c6b0901300402i5d93aa3br57a2bbf5ad57a52b@mail.gmail.com> Message-ID: <49830FB1.3060306@livinglogic.de> Paul Moore wrote: > [...] > In all honesty, I think pkgutil.simplegeneric should be documented, > exposed, and moved to a library of its own[1]. http://pypi.python.org/pypi/simplegeneric > [...] Servus, Walter From barry at python.org Fri Jan 30 16:16:53 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:16:53 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: <16513B54-BBD7-4640-AD40-EE8B8B6FCE78@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 1:13 PM, Guido van Rossum wrote: > I'd like to find a middle ground. We can all agree that the users of > 3.0 are a small minority compared to the 2.x users. Therefore I think > we can bend the rules more than we have done for the recent 2.x > releases. Those rules weren't always there (anyone remember the > addition of bool, True and False to 2.2.1?). The rules were introduced > for the benefit of our most conservative users -- people who introduce > Python in an enterprise and don't want to find that they are forced to > upgrade in six months. 
Removing stuff that should have been removed is fine, and I'm even okay with bending the "should have been" definition. > Frankly, I don't really believe the users for whom those rules were > created are using 3.0 yet. Instead, I expect there to be two types of > users: people in the educational business who don't have a lot of > bridges to burn and are eager to use the new features; and developers > of serious Python software (e.g. Twisted) who are trying to figure out > how to port their code to 3.0. The first group isn't affected by the > changes we're considering here (e.g. removing cmp or some obscure > functions from the operator module). The latter group *may* be > affected, simply because they may have some pre-3.0 code using old > features that (by accident) still works under 3.0. I mostly agree. I'm also concerned about downstream consumers that may be distributing 3.0 and will have a different schedule for doing their upgrades. What I really want to avoid is people having to do stuff like the ugliness to work around the 2.2.1 additions:

    try:
        True
    except NameError:
        True = 1
        False = 0

Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMZ5nEjvBPtnXfVAQJZyAP/dAbxc37a3HPfZ6SYH29OxfsyWeist6yk 0jli2WVDiLnc9iYmLky3Bj/B7aijZpq2X2/UOS/F6akOYJhLKfjYckiXzcjBmBIK Ypy3uGrw1wRFxz4ZrJGGzBjxvzSkbYj8ijkGsPqm95FDalq2YOXtrRbOft861dyy 4i2APtZ40AA= =s7U3 -----END PGP SIGNATURE----- From scav at blueyonder.co.uk Fri Jan 30 15:57:04 2009 From: scav at blueyonder.co.uk (scav at blueyonder.co.uk) Date: Fri, 30 Jan 2009 14:57:04 -0000 (GMT) Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: Message-ID: <31113.84.19.238.82.1233327424.VFkUQmFaS098Sh0W.squirrel@84.19.238.82> Hi all, > On Thu, Jan 29, 2009 at 6:12 AM, Ben North wrote: >> Hi, >> >> I find 'functools.partial' useful, but occasionally I'm unable to use it >> because it lacks a 'from the right' version.
> -1 For me, the main objection to a partial that places its stored positional arguments from the right is that you don't know which positions those arguments will actually occupy until the partial is called. Who *really* thinks that would be a neat feature? There's probably a reason why Haskell doesn't do this... Peter Harris From barry at python.org Fri Jan 30 16:24:15 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:24:15 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <20090129220951.GA17786@panix.com> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 5:09 PM, Aahz wrote: > The problem is that the obvious candidate for doing the vetting is the > Release Manager, and Barry doesn't like this approach. The vetting > does > need to be handled by a core committer IMO -- MAL, are you > volunteering? > Anyone else? > > Barry, are you actively opposed to marking 3.0.x as experimental, or > do > you just dislike it? (I.e. are you -1 or -0?) I'm opposed to marking 3.0 experimental, so I guess -1 there. It's the first model year of a redesigned nameplate, but it's still got four wheels, a good motor and it turns mostly in the direction you point it. :) No release is ever what everyone wants. There has never been a release where I haven't wanted to add or change something after the fact (see my recent 2.6 unicode grumblings). Perhaps frustratingly, but usually correctly, the community is very resistant to making such feature or API changes after a release is made. That's just life; we deal with it, work around it and work harder towards the next major release. If that's too burdensome, then maybe it's really the 18 month development cycle that needs to be re-examined.
All that aside, I will support whatever community consensus or BDFL pronouncement is made here. Don't be surprised if when you ask me though I'm more conservative than you want. You can always appeal to a higher authority (python-dev or Guido). So don't worry, I'll continue to RM the 3.0 series! Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMbn3EjvBPtnXfVAQLsUAP+J3WPGMNgGPSWrawJa8Yp+1RBTIt2vOif rgV+5xyOQqOKnuDntZPAv1R2SqrTCHv8abyLP4pBaoklqtymIDgikiOLJkI2tHij MT+gfPu4Xb7F35HAXE/6vhel124nr8JG15fXBQdEWqiozNZl9GaXEqKZY8tdhgkC 4VDdY6KEwL0= =kvOy -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 16:28:28 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:28:28 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com><497F6E55.6090608@v.loewis.de><49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <01897A99-0135-4CAC-AD16-A255EC3EDD15@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 6:27 PM, Raymond Hettinger wrote: >> The problem is that the obvious candidate for doing the vetting is >> the >> Release Manager, and Barry doesn't like this approach. The vetting >> does >> need to be handled by a core committer IMO -- MAL, are you >> volunteering? >> Anyone else? > > It should be someone who is using 3.0 regularly (ideally someone who > is working on fixing it). IMO, people who aren't exercising it > don't really > have a feel for the problems or the cost/benefits of the fixes. That's not the right way to look at it. I'm using 2.6 heavily these days, does that mean I get to decide what goes in it or not? No. Everyone here, whether they are using 2.6 or not should weigh in, with of course one BDFL to rule them all. Same goes for 3.0. This is a community effort and I feel strongly that we should work toward reaching consensus (that seems to be an American theme these days). 
Make your case, we'll listen to the pros and cons, decide as a community and then move on. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMcnHEjvBPtnXfVAQK+aQQApR5McrCOiYUf6RiNvmrDKmTShMde4iWt Rh9x3wY3EVQskcgdpd+05VSfceVCKJJlqbR1NdMDtnuzM8aD56qQyAxYHhqYyxkh 0adHg1ZmYt/95K0/WE3DM8NoBUPxUFIb4nyeprGBsYola9BUQNc//VSRSIyXf0U6 p3xwN8oQS/c= =KKeq -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 16:28:54 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:28:54 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 6:40 PM, Guido van Rossum wrote: > On Thu, Jan 29, 2009 at 3:27 PM, Raymond Hettinger > wrote: >> To get the ball rolling, I have a candidate for discussion. >> >> Very late in the 3.0 process (after feature freeze), the bsddb code >> was >> ripped out (good riddance). This had the unfortunate side-effect of >> crippling shelves which now fall back to using dumbdbm. >> >> I'm somewhat working on an alternate dbm based on sqlite3: >> http://code.activestate.com/recipes/576638/ >> It is a pure python module and probably will not be used directly, >> but shelves >> will see an immediate benefit (especially for large shelves) in >> terms of speed >> and space. >> >> On the one hand, it is an API change or new feature because people >> can >> (if they choose) access the dbm directly. OTOH, it is basically a >> performance fix for shelves whose API won't change at all. The >> part that is visible >> and incompatible is that 3.0.1 shelves won't be readable by 3.0.0. > > That is too much for 3.0.1. It could affect external file formats > which strikes me as a bad idea. > > Sounds like a good candidate for 3.1, which we should be expecting in > 4-6 months I hope. 
Also you could try find shelve users (are there > any?) and recommend they install this as a 3rd party package, with the > expectation it'll be built into 3.1. I concur. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMctnEjvBPtnXfVAQKC3QP/bVCQ6KTI5Kd1H/y2Qp85pkLiC8JAH7ap 8vJ2xPjZde4oe6tz5WRziUparpM5FMA4Cz0fuMg4C7vtt6ZLIG27OKVuXx9i4atG zrtnEfs129Xouq4se6UFiIaIj1KNiNWbZa4cOkSlQFUq37Ww/B25JlrtGnreZB4v 13r8lRzTNOU= =8Fo7 -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 16:33:09 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:33:09 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> Message-ID: <38A60A2A-7B03-4EF3-AB41-C9B462665226@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 7:43 PM, Raymond Hettinger wrote: > We should have insisted that bsddb not be taken out until a > replacement > was put in. The process was broken with the RM insisting on feature > freeze early in the game but letting tools like bsddb get ripped-out > near the end. IMO, it was foolish to do one without the other. Very good arguments were made for ripping bsddb out. Guido agreed. A replacement would have delayed 3.0 even more than it originally was, and the replacement would not have been battle tested. It's possible, maybe even likely, that the replacement would have been found inadequate later on and then we'd be saddled with a different mistake. Given that it's easy to make 3rd party packages work, I firmly believe this was the right decision. With a proven, solid, popular replacement available for several months, it will be easy to pull that into the 3.1 release. 
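Since shelve only needs a mutable string-keyed mapping, the sqlite3-backed dbm idea is easy to prototype today as a third-party stopgap. The sketch below is a toy stand-in, not the recipe linked earlier in the thread; the class name and schema are invented for illustration (using today's spelling of the ABC import).

```python
# Toy dict-on-sqlite3 mapping, illustrating the idea behind an
# sqlite3-backed dbm for shelve. Class name and schema are made up.
import sqlite3
from collections.abc import MutableMapping


class SQLiteShelfBackend(MutableMapping):
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS store (key TEXT PRIMARY KEY, value BLOB)")

    def __setitem__(self, key, value):
        # REPLACE handles both insert and update in one statement.
        self.conn.execute(
            "REPLACE INTO store (key, value) VALUES (?, ?)", (key, value))

    def __getitem__(self, key):
        row = self.conn.execute(
            "SELECT value FROM store WHERE key = ?", (key,)).fetchone()
        if row is None:
            raise KeyError(key)
        return row[0]

    def __delitem__(self, key):
        cur = self.conn.execute("DELETE FROM store WHERE key = ?", (key,))
        if cur.rowcount == 0:
            raise KeyError(key)

    def __iter__(self):
        return (row[0] for row in self.conn.execute("SELECT key FROM store"))

    def __len__(self):
        return self.conn.execute("SELECT COUNT(*) FROM store").fetchone()[0]

    def close(self):
        self.conn.commit()
        self.conn.close()
```

Something of this shape should be roughly what shelve needs to wrap for pickled storage without bsddb, which is why the change is invisible at the shelve API level and only shows up as an on-disk format difference.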
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMdtXEjvBPtnXfVAQK+FAQAlNL26s4ekva/3jpnATfZfXtAkHa+Wqdo f9luB8gkLk3Dk0qXyjm6AisFCMh+Zgu8g+OgrWS3DO6yR+/SlfjVcPbq0kr8nP+L +EXXisuZofeHuxp0JZ3ePoL94ALbv35norx1yHqiKnEMEvUbCfdNWb4sGE2kM5ZE snfeFattlIg= =RQ7t -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 16:40:42 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 10:40:42 -0500 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <873af1efvf.fsf@xemacs.org> References: <1afaf6160901271222i2e2d9525i883367789219f96d@mail.gmail.com> <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <873af1efvf.fsf@xemacs.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 8:34 PM, Stephen J. Turnbull wrote: > I think that the important question is "can the 3.0.x series be made > 'viable' in less than the time frame for 3.1?" If not, I really have > to think it's DOA from the point of view of folks who consider 3.0.0 > non-viable. I think that's what Barry and Martin are saying. Of course, the definition of "viable" is the key thing here. I'm not picking on Raymond, but what is not viable for him will be perfectly viable for other people. We have to be very careful not to view our little group of insiders as the sole universe of Python users (3.0 or otherwise). > Guido is saying something different. AIUI, he's saying that > explicitly > introducing controlled instability into 3.0.x of the form "this is > what the extremely stable non-buggy inherited-from-3.0 core of 3.1 is > going to look like" will be a great service to those who consider > 3.0.0 non-viable. 
> > The key point is that new features in 3.1 are generally going to be > considered less reliable than those inherited from 3.0, and thus a > debugged 3.0, even if the implementations have been unstable, provides > a way for the very demanding to determine what that set is, and to > test how it behaves in their applications. I'm not sure I agree with that last paragraph. We have a pretty good track record of introducing stable new features in dot-x releases, so there's no reason to believe that the same won't work for 3.x. > I think it's worth a try, after consultation with some of the major > developers who are the ostensible beneficiaries. But if tried, I > think it's important to mark 3.0.x as "not yet stable" even if the > instability is carefully defined and controlled. It all depends on where that instability lies. If 3.0 crashed every time you raised an exception due to some core design flaw, then yeah, we'd have a problem. The fact that a bundled module doesn't do what you want it to does not scream instability to me. The should-have-been-removed features don't either. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMfenEjvBPtnXfVAQLIhwP+JVFJWXRoQ5Fz65vrmmGo+8w7ZspjVCWP 9a+yrAh1aGHf0w4vQAirRuBGZNWvl4e5F/Pd4DoWdFVPPKuEhyOiavPAP90ViThy yKHHoEBv6cloUIRXrKendJGzA7L5bDVN0CoQjcPh499mpDxvq7aGgru2lYdD7iT0 KuB21maqMTc= =dWTA -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 17:02:19 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 11:02:19 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: References: Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 29, 2009, at 10:59 PM, Brett Cannon wrote: > 1. Barry, who is the release manager for 3.0.1, does not like the idea > of the cruft that is being proposed removed from 3.0.1.
Personally I say we continue to peer pressure him as I think a new major release is not like our typical minor release, but I am not about to force Barry to go against what he thinks is reasonable unless I am willing to step up as release manager (and I am not since I simply don't have the time to learn the process fast enough along with just a lack of time with other Python stuff). I followed up in a different thread, but just FTR here. I'll continue to RM 3.0. I'll follow the community consensus on specific issues, but if there isn't a clear one and I have to decide, I'll likely take the more conservative path. Appealing to python-dev and Guido is (as always :) allowed. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMkjHEjvBPtnXfVAQK/fgP/T4uWwU41k1OEgS6ngXlZvUao3dVh0Hni f+iyeo+cyvWggp6ks1NLoJ+BOH/lpwIybwtuLqUI/FcajctdlOUaTyw2CE2jPjgD SMJID5oj1e/7vpB3Dk26RCIB+trZ6GTg1lC4OjRVn0vrKK/QVRg6dYD2YKcW0Seh fF++3EHxhW0= =TMO+ -----END PGP SIGNATURE----- From barry at python.org Fri Jan 30 17:03:17 2009 From: barry at python.org (Barry Warsaw) Date: Fri, 30 Jan 2009 11:03:17 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: <498295F5.2050607@v.loewis.de> References: <498295F5.2050607@v.loewis.de> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 30, 2009, at 12:53 AM, Martin v. Löwis wrote: >> 1. Barry, who is the release manager for 3.0.1, does not like the >> idea >> of the cruft that is being proposed removed from 3.0.1. > > I don't think he actually said that (in fact, I think he said the > opposite). It would be good if he clarified, though. To clarify: cruft that should have been removed in 3.0 is fine to remove for 3.0.1, for some definition of "should have been".
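For concreteness, the most common piece of such cruft is the 2.x cmp-style sorting idiom, which 3.0 drops in favour of key functions. The migration looks roughly like the sketch below; the wrapper is written inline here for self-containment and is only illustrative (a variant of it later shipped as functools.cmp_to_key).

```python
# Porting 2.x cmp-style sorts to 3.x key functions (illustrative sketch).
words = ["python", "is", "neat"]

# 3.x style: a key function replaces the 2.x comparison function.
by_length = sorted(words, key=len)

def cmp_to_key(cmp_func):
    """Wrap an old-style comparison function as a key class."""
    class K:
        def __init__(self, obj):
            self.obj = obj
        def __lt__(self, other):
            # list.sort and sorted only need "<" to order elements.
            return cmp_func(self.obj, other.obj) < 0
    return K

def compare_len(a, b):
    # Old-style cmp result: negative, zero, or positive.
    return (len(a) > len(b)) - (len(a) < len(b))

also_by_length = sorted(words, key=cmp_to_key(compare_len))
```

Code that only passed comparison functions to sorted() or list.sort() ports mechanically this way, which is one reason leaving a half-working cmp in 3.0 mostly helps code that should be migrating anyway.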
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYMkxXEjvBPtnXfVAQIqtgP+Mra/z5nLY5SU56cw0JjgBwCVY1N3060K TSG90E4R+JpCsXRD7sjf2UfSAzKAGKz6gYja3hnt5awzhnCJMacgN0tvXNaAmuYi b7Qb6N4oV3izDGZPl3x0EO3DGimov2Nq8hCsEZbYnNd3U62MwRlzpW+FJbFJlZHO VR1jiVWX8Ig= =p0VE -----END PGP SIGNATURE----- From nde at comp.leeds.ac.uk Fri Jan 30 17:01:19 2009 From: nde at comp.leeds.ac.uk (Nick Efford) Date: Fri, 30 Jan 2009 16:01:19 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: Message-ID: <4983244F.3060801@comp.leeds.ac.uk> > Paul Moore wrote: > > Serious question: does anybody know how to get better communication > from the user base? My impression is that it's pretty hard to find out > who is actually using 3.0, and get any feedback from them. I suppose a > general query on clp might get some feedback, but otherwise, what? > I've not seen any significant amount of blog activity on 3.0. I teach programming in a CS dept. at a UK university. We've been teaching Python in one context or another for 5 years now, and are currently in our second year of teaching it as the primary programming language. We have to make decisions on software versions for the coming academic year during the summer months. This means that we've had to be content this year with Python 2.5. We'd love to switch to 3.0 as soon as possible (i.e., Oct 2009), as it is a significantly cleaner language for our purposes. However, we make extensive use of third-party libraries and frameworks such as Pygame, wxPython, etc, to increase the motivation levels of students. The 3.0-readiness of these libraries and frameworks is inevitably going to be a factor in the decision we make this summer. 
Nick From dickinsm at gmail.com Fri Jan 30 17:50:36 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Fri, 30 Jan 2009 16:50:36 +0000 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: References: <498295F5.2050607@v.loewis.de> Message-ID: <5c6f2a5d0901300850q7b261e47k347811f7b183718b@mail.gmail.com> On Fri, Jan 30, 2009 at 4:03 PM, Barry Warsaw wrote: > To clarify: cruft that should have been removed 3.0 is fine to remove for > 3.0.1, for some definition of "should have been". Just to double check, can I take this as a green light to continue with the cmp removal (http://bugs.python.org/issue1717) for 3.0.1? Mark From status at bugs.python.org Fri Jan 30 18:06:48 2009 From: status at bugs.python.org (Python tracker) Date: Fri, 30 Jan 2009 18:06:48 +0100 (CET) Subject: [Python-Dev] Summary of Python tracker Issues Message-ID: <20090130170648.2AB007859E@psf.upfronthosting.co.za> ACTIVITY SUMMARY (01/23/09 - 01/30/09) Python tracker at http://bugs.python.org/ To view or respond to any of the issues listed below, click on the issue number. Do NOT respond to this message. 2352 open (+54) / 14582 closed (+20) / 16934 total (+74) Open issues with patches: 788 Average duration of open issues: 697 days. Median duration of open issues: 6 days. 
Open Issues Breakdown
   open  2328 (+53)
pending    24 ( +1)

Issues Created Or Reopened (74)
_______________________________

urrlib2/httplib doesn't reset file position between requests  01/23/09
       http://bugs.python.org/issue5038  created  matejcik
Adjust reference-counting note  01/24/09
       http://bugs.python.org/issue5039  created  tjreedy
Bug of CGIXMLRPCRequestHandler  01/24/09
       http://bugs.python.org/issue5040  created  WayneHuang
Memory leak in imp.find_module  01/24/09
CLOSED http://bugs.python.org/issue5041  created  ocean-city
       patch, easy
Structure sub-subclass does not initialize with base class posit  01/24/09
       http://bugs.python.org/issue5042  created  jaraco
get_msvcr() returns None rather than []  01/24/09
       http://bugs.python.org/issue5043  created  lkcl
name not found in generator in eval()  01/24/09
       http://bugs.python.org/issue5044  created  fjhpy
imaplib should remove length of literal strings  01/24/09
       http://bugs.python.org/issue5045  created  bmoore
native win32 and wine mingw+msys build of python2.7  01/24/09
CLOSED http://bugs.python.org/issue5046  created  lkcl
       patch
Remove Monterey support from configure.in  01/24/09
       http://bugs.python.org/issue5047  created  skip.montanaro
       patch
Extending itertools.combinations  01/25/09
CLOSED http://bugs.python.org/issue5048  created  konryd
ctypes unwilling to allow pickling wide character  01/25/09
       http://bugs.python.org/issue5049  created  jaraco
unicode(C) invokes C.__unicode__ when __unicode__ is defined  01/25/09
CLOSED http://bugs.python.org/issue5050  created  livibetter
test_update2 in test_os.py invalid due to os.environ.clear() fol  01/25/09
       http://bugs.python.org/issue5051  created  lkcl
Mark distutils to stay compatible with 2.3  01/25/09
       http://bugs.python.org/issue5052  reopened  tarek
http.client.HTTPMessage.getallmatchingheaders() always returns [  01/25/09
       http://bugs.python.org/issue5053  created  mwatkins
       patch
CGIHTTPRequestHandler.run_cgi() HTTP_ACCEPT improperly parsed  01/25/09
       http://bugs.python.org/issue5054  created  mwatkins
Distutils-SIG page needs to be updated  01/25/09
CLOSED http://bugs.python.org/issue5055  created  akitada
PyAPI assumes OS can access shared data in loadable modules (win  01/25/09
CLOSED http://bugs.python.org/issue5056  created  lkcl
       patch
Unicode-width dependent optimization leads to non-portable pyc f  01/25/09
       http://bugs.python.org/issue5057  created  pitrou
stop pgen.exe from generating CRLF-ended files and causing mayhe  01/25/09
CLOSED http://bugs.python.org/issue5058  created  lkcl
Policy.DomainStrict in cookielib example code  01/25/09
       http://bugs.python.org/issue5059  created  babo
gcc profile guided optimization  01/25/09
       http://bugs.python.org/issue5060  created  rpetrov
       patch
Inadequate documentation of the built-in function open  01/25/09
       http://bugs.python.org/issue5061  created  MLModel
Rlcompleter.Completer does not use __dir__ magic method  01/26/09
       http://bugs.python.org/issue5062  created  carlj
       patch
python-2.6.spec doesn't build properly  01/26/09
       http://bugs.python.org/issue5063  created  ptoal
       patch
compiler.parse raises SyntaxErrors without line number informati  01/26/09
       http://bugs.python.org/issue5064  created  exarkun
IDLE improve Subprocess Startup Error message  01/26/09
       http://bugs.python.org/issue5065  created  stevenjd
IDLE documentation for Unix obsolete/incorrect  01/26/09
       http://bugs.python.org/issue5066  created  stevenjd
Error msg from using wrong quotes in JSON is unhelpful  01/26/09
       http://bugs.python.org/issue5067  created  stevenjd
       patch
tarfile loops forever on broken input  01/26/09
       http://bugs.python.org/issue5068  created  fijal
Use sets instead of list in posixpath._resolve_link  01/26/09
CLOSED http://bugs.python.org/issue5069  created  tzot
       patch
Distutils should create install dir if needed  01/26/09
       http://bugs.python.org/issue5070  created  andybuckley
Distutils should not fail if install dir is not in PYTHONPATH  01/26/09
       http://bugs.python.org/issue5071  created  andybuckley
urllib.open sends full URL after GET command instead of local pa  01/26/09
       http://bugs.python.org/issue5072  created  olemis
bsddb/test/test_lock.py sometimes fails due to floating point er  01/26/09
CLOSED http://bugs.python.org/issue5073  created  ocean-city
       patch
python3 and ctypes, script causes crash  01/27/09
CLOSED http://bugs.python.org/issue5074  created  pooryorick
bdist_wininst should not depend on the vc runtime?  01/27/09
CLOSED http://bugs.python.org/issue5075  created  mhammond
       patch
bdist_wininst fails on py3k  01/27/09
CLOSED http://bugs.python.org/issue5076  created  mhammond
       patch, patch
2to3 fixer for the removal of operator functions  01/27/09
       http://bugs.python.org/issue5077  created  alexandre.vassalotti
Avoid redundant call to FormatError()  01/27/09
       http://bugs.python.org/issue5078  created  eckhardt
       patch
time.ctime docs refer to "time tuple" for default  01/27/09
       http://bugs.python.org/issue5079  created  tlynn
PyArg_Parse* should raise TypeError for float parsed with intege  01/27/09
       http://bugs.python.org/issue5080  created  marketdickinson
Unable to print Unicode characters in Python 3 on Windows  01/27/09
CLOSED http://bugs.python.org/issue5081  created  giampaolo.rodola
Let frameworks to register attributes as builtins  01/27/09
CLOSED http://bugs.python.org/issue5082  created  andrea-bs
New resource ('gui') for regrtest  01/27/09
CLOSED http://bugs.python.org/issue5083  created  gpolo
       patch
unpickling does not intern attribute names  01/27/09
       http://bugs.python.org/issue5084  created  jakemcguire
       patch
distutils/test_sdist failure on windows  01/27/09
CLOSED http://bugs.python.org/issue5085  created  ocean-city
       patch
set_daemon does not exist in Thread  01/28/09
CLOSED http://bugs.python.org/issue5086  created  mnewman
set_daemon does not exist in Thread  01/28/09
CLOSED http://bugs.python.org/issue5087  created  mnewman
optparse: inconsistent default value for append actions  01/28/09
       http://bugs.python.org/issue5088  created  pycurry
Error in atexit._run_exitfuncs [...] Exception expected for valu  01/28/09
       http://bugs.python.org/issue5089  created  marketdickinson
import tkinter library Visual C++ Concepts:C Run-Time Error R603  01/28/09
       http://bugs.python.org/issue5090  created  guxianminer
Segfault in PyObject_Malloc(), address out of bounds  01/28/09
       http://bugs.python.org/issue5091  created  christian.heimes
weird memory usage in multiprocessing module  01/29/09
CLOSED http://bugs.python.org/issue5092  created  Orlowski
2to3 with a pipe on non-ASCII script  01/29/09
       http://bugs.python.org/issue5093  created  haypo
       patch
datetime lacks concrete tzinfo impl. for UTC  01/29/09
       http://bugs.python.org/issue5094  created  brett.cannon
msi missing from "bdist --help-formats"  01/29/09
       http://bugs.python.org/issue5095  created  bethard
strange thing after call PyObject_CallMethod  01/29/09
       http://bugs.python.org/issue5096  created  exe
asyncore.dispatcher_with_send undocumented  01/29/09
       http://bugs.python.org/issue5097  created  exe
Environ doesn't escape spaces properly  01/29/09
CLOSED http://bugs.python.org/issue5098  created  stuaxo
subprocess.POpen.__del__() AttributeError (os module == None!)  01/29/09
       http://bugs.python.org/issue5099  created  marystern
ElementTree.iterparse and Element.tail confusion  01/29/09
       http://bugs.python.org/issue5100  created  jeroen.dirks
test_funcattrs truncated during unittest conversion  01/29/09
       http://bugs.python.org/issue5101  created  marketdickinson
urllib2.py timeouts do not propagate across redirects for 2.6.1  01/29/09
       http://bugs.python.org/issue5102  created  jacques
ssl.SSLSocket timeout not working correctly when remote end is h  01/29/09
       http://bugs.python.org/issue5103  created  jacques
getsockaddrarg() casts port number from int to short without any  01/29/09
       http://bugs.python.org/issue5104  created  roman.zeyde
sqlite3.Row class, handling duplicate column names resulting fro  01/29/09
       http://bugs.python.org/issue5105  created  sockonafish
Update Naming & Binding statement for 3.0  01/30/09
       http://bugs.python.org/issue5106  created  tjreedy
built-in open(..., encoding=vague_default)  01/30/09
       http://bugs.python.org/issue5107  created  sjmachin
Invalid UTF-8 ("%s") length in PyUnicode_FromFormatV()  01/30/09
       http://bugs.python.org/issue5108  created  haypo
       patch
array.array constructor very slow when passed an array object.  01/30/09
       http://bugs.python.org/issue5109  created  malcolmp
Printing Unicode chars from the interpreter in a non-UTF8 termin  01/30/09
       http://bugs.python.org/issue5110  created  ezio.melotti
       patch
httplib: wrong Host header when connecting to IPv6 loopback  01/30/09
       http://bugs.python.org/issue5111  created  gdesmott

Issues Now Closed (47)
______________________

[distutils] - error when processing the "--formats=tar" option  30 days
       http://bugs.python.org/issue1885  tarek
       patch
shutil.destinsrc returns wrong result when source path matches b  357 days
       http://bugs.python.org/issue2047  pitrou
       patch, easy
msi installs to the incorrect location (C drive)  320 days
       http://bugs.python.org/issue2271  loewis
Ttk support for Tkinter  246 days
       http://bugs.python.org/issue2983  gpolo
Python 2.5.2 Windows Source Distribution missing Visual Studio 2  225 days
       http://bugs.python.org/issue3105  loewis
multiprocessing adds built-in types to the global copyreg.dispat  196 days
       http://bugs.python.org/issue3350  jnoller
Fix gdbinit for Python 3.0  164 days
       http://bugs.python.org/issue3610  skip.montanaro
       patch
importing from UNC roots doesn't work  152 days
       http://bugs.python.org/issue3677  ocean-city
       patch
_tkinter._flatten() doesn't check PySequence_Size() error code  136 days
       http://bugs.python.org/issue3880  benjamin.peterson
       patch
IDLE won't start in custom directory.
130 days http://bugs.python.org/issue3881 loewis patch zipfile and winzip 11 days http://bugs.python.org/issue3997 amaury.forgeotdarc patch, needs review open(0, closefd=False) prints 3 warnings 92 days http://bugs.python.org/issue4233 haypo patch Portability fixes in longobject.c 62 days http://bugs.python.org/issue4393 marketdickinson patch 2.6.1 breaks many applications that embed Python on Windows 52 days http://bugs.python.org/issue4566 mhammond patch, needs review range objects becomes hashable after attribute access 41 days http://bugs.python.org/issue4701 jcea patch python3.0 -u: unbuffered stdout 7 days http://bugs.python.org/issue4705 pitrou patch round(25, 1) should return an integer, not a float 39 days http://bugs.python.org/issue4707 marketdickinson patch [PATCH] zipfile.ZipFile does not extract directories properly 34 days http://bugs.python.org/issue4710 loewis patch, needs review deprecate/delete distutils.mwerkscompiler... 19 days http://bugs.python.org/issue4863 tarek patch Inconsistent usage of next/__next__ in ABC collections; collecti 17 days http://bugs.python.org/issue4920 rhettinger patch Make heapq work with all mutable sequences 14 days http://bugs.python.org/issue4948 rhettinger doctest.testfile should set __name__, can't use namedtuple 6 days http://bugs.python.org/issue5021 rhettinger test_kqueue failure on OS X 3 days http://bugs.python.org/issue5025 marketdickinson patch itertools.fixlen 4 days http://bugs.python.org/issue5034 rhettinger patch Memory leak in imp.find_module 6 days http://bugs.python.org/issue5041 ocean-city patch, easy native win32 and wine mingw+msys build of python2.7 0 days http://bugs.python.org/issue5046 loewis patch Extending itertools.combinations 3 days http://bugs.python.org/issue5048 rhettinger unicode(C) invokes C.__unicode__ when __unicode__ is defined 0 days http://bugs.python.org/issue5050 loewis Distutils-SIG page needs to be updated 0 days http://bugs.python.org/issue5055 loewis PyAPI assumes OS can 
access shared data in loadable modules (win 4 days http://bugs.python.org/issue5056 eckhardt patch stop pgen.exe from generating CRLF-ended files and causing mayhe 3 days http://bugs.python.org/issue5058 amaury.forgeotdarc Use sets instead of list in posixpath._resolve_link 1 days http://bugs.python.org/issue5069 benjamin.peterson patch bsddb/test/test_lock.py sometimes fails due to floating point er 0 days http://bugs.python.org/issue5073 marketdickinson patch python3 and ctypes, script causes crash 0 days http://bugs.python.org/issue5074 loewis bdist_wininst should not depend on the vc runtime? 2 days http://bugs.python.org/issue5075 mhammond patch bdist_wininst fails on py3k 3 days http://bugs.python.org/issue5076 loewis patch, patch Unable to print Unicode characters in Python 3 on Windows 0 days http://bugs.python.org/issue5081 loewis Let frameworks to register attributes as builtins 0 days http://bugs.python.org/issue5082 loewis New resource ('gui') for regrtest 1 days http://bugs.python.org/issue5083 gpolo patch distutils/test_sdist failure on windows 2 days http://bugs.python.org/issue5085 tarek patch set_daemon does not exist in Thread 0 days http://bugs.python.org/issue5086 benjamin.peterson set_daemon does not exist in Thread 2 days http://bugs.python.org/issue5087 benjamin.peterson weird memory usage in multiprocessing module 2 days http://bugs.python.org/issue5092 LambertDW Environ doesn't escape spaces properly 0 days http://bugs.python.org/issue5098 loewis threading module can deadlock after fork 163 days http://bugs.python.org/issue874900 jcea patch, needs review Improve itertools.starmap 971 days http://bugs.python.org/issue1498370 rhettinger patch cPickle cannot unpickle subnormal floats on some machines 694 days http://bugs.python.org/issue1672332 marketdickinson patch Top Issues Most Discussed (10) ______________________________ 16 Get rid of more references to __cmp__ 106 days open http://bugs.python.org/issue1717 14 Extending 
itertools.combinations 3 days closed http://bugs.python.org/issue5048 14 round(25, 1) should return an integer, not a float 39 days closed http://bugs.python.org/issue4707 11 python3 closes + home keys 45 days open http://bugs.python.org/issue4676 8 weird memory usage in multiprocessing module 2 days closed http://bugs.python.org/issue5092 8 xml.parsers.expat make a dictionary which keys are broken if bu 8 days open http://bugs.python.org/issue5036 8 python3.0 -u: unbuffered stdout 7 days closed http://bugs.python.org/issue4705 8 Use a named tuple for sys.version_info 83 days open http://bugs.python.org/issue4285 7 bdist_wininst fails on py3k 3 days closed http://bugs.python.org/issue5076 7 Rlcompleter.Completer does not use __dir__ magic method 5 days open http://bugs.python.org/issue5062 From exarkun at divmod.com Fri Jan 30 18:15:54 2009 From: exarkun at divmod.com (Jean-Paul Calderone) Date: Fri, 30 Jan 2009 12:15:54 -0500 Subject: [Python-Dev] Summary of Python tracker Issues In-Reply-To: <20090130170648.2AB007859E@psf.upfronthosting.co.za> Message-ID: <20090130171554.12853.1364077741.divmod.quotient.534@henry.divmod.com> On Fri, 30 Jan 2009 18:06:48 +0100 (CET), Python tracker wrote: > [snip] > >Average duration of open issues: 697 days. >Median duration of open issues: 6 days. It seems there's a bug in the summary tool. I thought it odd a few weeks ago when I noticed the median duration of open issues was one day. I just went back and checked and the week before it was one day it was 2759 days. Perhaps there is some sort of overflow problem when computing this value? 
Jean-Paul

From Scott.Daniels at Acm.Org  Fri Jan 30 18:33:15 2009
From: Scott.Daniels at Acm.Org (Scott David Daniels)
Date: Fri, 30 Jan 2009 09:33:15 -0800
Subject: [Python-Dev] Partial function application 'from the right'
In-Reply-To: <31113.84.19.238.82.1233327424.VFkUQmFaS098Sh0W.squirrel@84.19.238.82>
References: <31113.84.19.238.82.1233327424.VFkUQmFaS098Sh0W.squirrel@84.19.238.82>
Message-ID: 

scav at blueyonder.co.uk wrote:
> Hi all,
>
>> On Thu, Jan 29, 2009 at 6:12 AM, Ben North wrote:
>>> I find 'functools.partial' useful, but occasionally I'm unable to use it
>>> because it lacks a 'from the right' version.
> -1
>
> For me, the main objection to a partial that places
> its stored positional arguments from the right is
> that you don't know which positions those arguments
> will actually occupy until the partial is called.

Certainly this interacts in a magical way with keyword args.  That
definitional problem is the reason there was no curry_right in the
original recipe that was the basis of the first partial.  If you have:

    def button(root, position, action=None, text='*', color=None):
        ...
    ...
    blue_button = partial(button, my_root, color=(0,0,1))

Should partial_right(blue_button, 'red') change the color or the text?
It is computationally hard to do that (may have to chase chains of
**kwarg-passing functions), but even harder to document.  So, I'd
avoid it.

--Scott David Daniels
Scott.Daniels at Acm.Org

From guido at python.org  Fri Jan 30 18:42:19 2009
From: guido at python.org (Guido van Rossum)
Date: Fri, 30 Jan 2009 09:42:19 -0800
Subject: [Python-Dev] Python 3.0.1
In-Reply-To: 
References: <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com>
	<20090129220951.GA17786@panix.com>
Message-ID: 

On Thu, Jan 29, 2009 at 8:25 PM, Raymond Hettinger wrote:
>
> [Guido van Rossum]
>>
>> Sorry, not convinced.
>
> No worries.  Py3.1 is not far off.
>
> Just so I'm clear.  Are you thinking that 3.0.x will never have
> fast shelves, or are you thinking 3.0.2 or 3.0.3 after some
> external deployment and battle-testing for the module?

I don't know about fast shelves, but I don't think your new module
should be added to 3.0.x for any x.  Who knows if there even will be a
3.0.2 -- it sounds like it's better to focus on 3.1 after 3.0.1.

--
--Guido van Rossum (home page: http://www.python.org/~guido/)

From Chris.Barker at noaa.gov  Fri Jan 30 18:43:05 2009
From: Chris.Barker at noaa.gov (Christopher Barker)
Date: Fri, 30 Jan 2009 09:43:05 -0800
Subject: [Python-Dev] Universal newlines, and the gzip module.
In-Reply-To: <18818.64219.137713.112225@montanaro.dyndns.org>
References: <49821403.1030603@noaa.gov>
	<18818.64219.137713.112225@montanaro.dyndns.org>
Message-ID: <49833C29.8000106@noaa.gov>

skip at pobox.com wrote:
>     Christopher> 1) It would be nice if the gzip module (and the zip lib
>     Christopher> module) supported Universal newlines -- you could read a
>     Christopher> compressed text file with "wrong" newlines, and have
>     Christopher> them handled properly.  However, that may be hard to do,
>     Christopher> so at least:
>
>     Christopher> 2) Passing a 'U' flag in to gzip.open shouldn't break it.
>
> I agree with Brett that 'U' is meaningless on the compressed file itself.

right -- I think the code that deals with the flags is not smart enough --
it adds the 'b' flag if it isn't already there, but that's all it does.
There are only a few flags that make sense for opening a gzip file -- it
should only use those, and either ignore others or raise an exception if
there are others that don't make sense.

> You want it applied to the contents of the compressed file though, is that
> right?

That would be great.

> That makes sense to me.  It probably belongs in a separate argument
> though.
I could go either way on that -- if we simply extracted the 'U' from the
passed-in mode, we wouldn't have to change the API at all, and it wouldn't
break any code that wasn't broken already.

As for having 'U' applied to the uncompressed data -- I have no idea how
much work that would be -- it depends on how it is currently handling text
files (does that work -- i.e. \r\n converted to \n on Windows?), and how
the Universal newline code is written.

In any case, the 'U' flag should NEVER get passed through to the file
opening code, and that's easy to fix.

I tried to post this to the bug tracker, but my attempt to create an
account failed -- do I need to be pre-approved or something?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

Chris.Barker at noaa.gov

From ben at redfrontdoor.org  Fri Jan 30 19:35:15 2009
From: ben at redfrontdoor.org (Ben North)
Date: Fri, 30 Jan 2009 18:35:15 +0000
Subject: [Python-Dev] Partial function application 'from the right'
Message-ID: <5169ff10901301035l3010e678je69a390533179f75@mail.gmail.com>

Hi,

> [ Potential new "functools.partial_right", e.g.,
>
>       split_comma = partial_right(str.split, '.')
> ]

Thanks for the feedback.  Apologies if (as was suggested) this should
have gone to python-ideas; I thought as a fairly small extension to
existing functionality it would be OK here.

I'll try to summarise the responses.  There was some very luke-warm
support.  Terry Reedy suggested it would be worth posting a patch to the
tracker, for the record, even if it turns out to be rejected.

Nick Coghlan made more clear than I did the main reason a
'partial_right' would be useful:

> [...] some functions and methods written in C (such as string methods)
> *don't* [support keyword args], so partial's keyword support doesn't
> help.
>
> A functools.rpartial would go some way towards addressing that.

On the other hand, Collin Winter asked for more evidence that real
benefit (beyond mere 'completeness' of the functools module) would
result.  I don't really have to hand anything more than the three cases
mentioned in my original email (str.split, math.log, itertools.islice),
but since the change is so small, I thought the feature worth raising.

Leif Walsh pointed out that you could achieve the same effect by
defining your own function.  This is true, but functools.partial exists
because it's sometimes useful to create such functions either more
concisely, or anonymously.  A 'partial_right' would allow more such
functions to be so created.

Peter Harris was negative on the idea, pointing out that after

    g = partial_right(f, 7)

you don't know which argument of 'f' the '7' is going to end up as,
because it depends on how many are supplied in the eventual call to
'g'.  This is true, and would require some care in partial_right's use.
Peter also wondered

> There's probably a reason why Haskell doesn't do this...

I have only written about five lines of Haskell in my life, so take
this with a hefty pinch of salt, but: Haskell does have a 'flip'
function which reverses the order of a function's arguments, so it
looks like you can very easily build a 'partial_right' in Haskell,
especially since standard functions are in curried form.

There was some discussion (started by Antoine Pitrou) of an idea to
generalise 'partial' further, potentially using the Ellipsis object, to
allow arbitrarily-placed 'holes' in the argument list.  E.g.,

    split_comma = partial(str.split, ..., ',')

In some ways I quite like the even-more-completeness of this idea, but
think that it might be the wrong side of the complexity/benefit
trade-off.  Scott David Daniels pointed out that using Ellipsis would
have the downside of

> [...] preventing any use of partial when an argument could be the
> Ellipsis instance.

This could be fixed by making the general form be something with the
meaning

    partial_explicit(f, hole_sentinel, *args, **kwargs)

where appearances of the exact object 'hole_sentinel' in 'args' would
indicate a hole, to be filled in at the time of the future call.  A
user wanting to have '...' passed in as a true argument could then do

    g = partial_explicit(f, None, 3, ..., 4, axis = 2)

or

    hole = object()
    g = partial_explicit(f, hole, 3, ..., hole, 4, axis = 2)

if they wanted a true '...' argument and a hole.  (I might have the
syntax for this wrong, having not played with Python 3.0, but I hope
the idea is clear.)

There was some concern expressed (by Daniel Stutzbach, Alexander
Belopolsky) that the meaning of '...' would be confusing --- 'one hole'
or 'arbitrarily many holes'?

I think the extra complexity vs extra functionality trade-off is worth
considering for 'partial_right', but my personal opinion is that a
'partial_explicit' has that trade-off the wrong way.

I'll try to find time to create the patch in the tracker in the next
few days, by which time perhaps it'll have become clearer whether the
idea is a good one or not.

Thanks,

Ben.

From brett at python.org  Fri Jan 30 19:56:46 2009
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jan 2009 10:56:46 -0800
Subject: [Python-Dev] 3.0.1/3.1.0 summary
In-Reply-To: 
References: <498295F5.2050607@v.loewis.de>
Message-ID: 

On Fri, Jan 30, 2009 at 08:03, Barry Warsaw wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On Jan 30, 2009, at 12:53 AM, Martin v. Löwis wrote:
>
>>> 1. Barry, who is the release manager for 3.0.1, does not like the idea
>>> of the cruft that is being proposed removed from 3.0.1.
>>
>> I don't think he actually said that (in fact, I think he said the
>> opposite).  It would be good if he clarified, though.
>
> To clarify: cruft that should have been removed in 3.0 is fine to remove
> for 3.0.1, for some definition of "should have been".

Great!
Then should we start planning for 3.0.1 in terms of release dates and
what to have in the release so we can get this out the door quickly?

-Brett

From benjamin at python.org  Fri Jan 30 21:07:29 2009
From: benjamin at python.org (Benjamin Peterson)
Date: Fri, 30 Jan 2009 15:07:29 -0500
Subject: [Python-Dev] 3.0.1/3.1.0 summary
In-Reply-To: 
References: <498295F5.2050607@v.loewis.de>
Message-ID: <1afaf6160901301207y1b8e9c3clee9d9b4b071f365b@mail.gmail.com>

On Fri, Jan 30, 2009 at 1:56 PM, Brett Cannon wrote:
> Great!  Then should we start planning for 3.0.1 in terms of release
> dates and what to have in the release so we can get this out the door
> quickly?

I think considering there's only two release blockers we should plan
for about a week or two from now.

I'm not sure if we want to do a release candidate; we didn't for
2.6.1, but maybe it would be good to see if the community can find any
other horrible problems.

--
Regards,
Benjamin

From brett at python.org  Fri Jan 30 21:14:02 2009
From: brett at python.org (Brett Cannon)
Date: Fri, 30 Jan 2009 12:14:02 -0800
Subject: [Python-Dev] 3.0.1/3.1.0 summary
In-Reply-To: <1afaf6160901301207y1b8e9c3clee9d9b4b071f365b@mail.gmail.com>
References: <498295F5.2050607@v.loewis.de>
	<1afaf6160901301207y1b8e9c3clee9d9b4b071f365b@mail.gmail.com>
Message-ID: 

On Fri, Jan 30, 2009 at 12:07, Benjamin Peterson wrote:
> On Fri, Jan 30, 2009 at 1:56 PM, Brett Cannon wrote:
>> Great!  Then should we start planning for 3.0.1 in terms of release
>> dates and what to have in the release so we can get this out the door
>> quickly?
>
> I think considering there's only two release blockers we should plan
> for about a week or two from now.
>
> I'm not sure if we want to do a release candidate; we didn't for
> 2.6.1, but maybe it would be good to see if the community can find any
> other horrible problems.

I say it's Barry's call.  If he has the time and wants to, then great;
they don't hurt.  But I know I won't object if we don't have one.

-Brett

From mike.klaas at gmail.com  Fri Jan 30 22:20:31 2009
From: mike.klaas at gmail.com (Mike Klaas)
Date: Fri, 30 Jan 2009 13:20:31 -0800
Subject: [Python-Dev] Partial function application 'from the right'
In-Reply-To: 
References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com>
	<5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com>
Message-ID: 

On 29-Jan-09, at 3:38 PM, Daniel Stutzbach wrote:
> On Thu, Jan 29, 2009 at 5:24 PM, Mike Klaas wrote:
>> And yet, python isn't confined to mathematical notation.  *, ** are
>> both overloaded for use in argument lists to no-one's peril, AFAICT.
>
> Certainly, but there is no danger of confusing them for
> multiplication in context, whereas:
>
>     split_comma = partial(str.split, ..., ',')
>
> to me looks like "make ',' the last argument" rather than "make ','
> the second argument".

Yes, I agree.  I mistakenly thought that that was the proposal under
discussion (that partial(f, ..., 2) == right_curry(f, 2))

-Mike

From tjreedy at udel.edu  Fri Jan 30 23:27:25 2009
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 30 Jan 2009 17:27:25 -0500
Subject: [Python-Dev] Universal newlines, and the gzip module.
In-Reply-To: <49833C29.8000106@noaa.gov>
References: <49821403.1030603@noaa.gov>
	<18818.64219.137713.112225@montanaro.dyndns.org>
	<49833C29.8000106@noaa.gov>
Message-ID: 

Christopher Barker wrote:
> I tried to post this to the bug tracker, but my attempt to create an
> account failed -- do I need to be pre-approved or something?

No.  If you do not get a response from the above, and a retry does not
work, you could email webmaster at python.org with details on what you
did and how it failed.
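For concreteness, the 'from the right' behaviour debated in the partial thread above can be sketched in a few lines of Python.  This `rpartial` helper is hypothetical -- it is not part of functools, and it deliberately exhibits the position-shifting ambiguity the thread complains about: the stored arguments' final positions depend on how many arguments the eventual call supplies.

```python
import math


def rpartial(func, *stored):
    """Hypothetical 'partial from the right': stored positional
    arguments are appended AFTER whatever the eventual call supplies."""
    def wrapper(*args, **kwargs):
        # args first, stored last -- so stored args sit at the right end.
        return func(*(args + stored), **kwargs)
    return wrapper


# The motivating examples from Ben North's original post:
split_comma = rpartial(str.split, ',')
log2 = rpartial(math.log, 2)

print(split_comma('a,b,c'))  # ['a', 'b', 'c']
print(log2(8))               # 3.0, up to floating point
```

Note how `split_comma('a,b,c')` ends up as `str.split('a,b,c', ',')` only because exactly one call-time argument was given; with keyword arguments in the mix the intent becomes much harder to pin down, which is the core objection raised in the thread.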
From tjreedy at udel.edu  Sat Jan 31 00:07:28 2009
From: tjreedy at udel.edu (Terry Reedy)
Date: Fri, 30 Jan 2009 18:07:28 -0500
Subject: [Python-Dev] Python 3.0.1
In-Reply-To: <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com>
References: <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com>
	<20090129220951.GA17786@panix.com>
	<79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com>
Message-ID: 

Paul Moore wrote:
> Serious question: does anybody know how to get better communication
> from the user base?

One of the nice things about Python is that the downloads are truly
free -- no required 'registration'.  On the other hand, there is no
option to give feedback either.  If PSF/devs wanted to add something to
the site, and someone else volunteered to do the implementation, I
would volunteer to help with both design and analysis.

That said, I think a main determinant of general 3.0 use will be the
availability of 3rd-party libraries, including Windows binaries.  So
perhaps we should aim survey efforts at their authors.  I have the
impression that the C-API porting guide needs improvement for such an
effort.  On the other hand, perhaps they wonder whether ports will be
used.  In that case, we need more reports like the post of Nick Efford:
"
> We'd love to switch to 3.0 as soon as possible (i.e., Oct 2009),
> as it is a significantly cleaner language for our purposes.
> [university CS courses]

> However, we make extensive use of third-party libraries and
> frameworks such as Pygame, wxPython, etc, to increase the
> motivation levels of students.  The 3.0-readiness of these
> libraries and frameworks is inevitably going to be a factor in
> the decision we make this summer.
"

Terry Jan Reedy

From ironfroggy at gmail.com  Sat Jan 31 01:38:23 2009
From: ironfroggy at gmail.com (Calvin Spealman)
Date: Fri, 30 Jan 2009 19:38:23 -0500
Subject: [Python-Dev] Partial function application 'from the right'
In-Reply-To: 
References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com>
	<5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com>
Message-ID: <76fd5acf0901301638x6d4da376w34fe78d327af7c87@mail.gmail.com>

I am just replying to the end of this thread to throw in a reminder
about my partial.skip patch, which allows the following usage:

    split_one = partial(str.split, partial.skip, 1)

Not looking to say "mine is better", but if the idea is being given
merit, I like the skipping-arguments method better than just the
"right partial", which I think is confusing combined with keyword and
optional arguments.  And, this patch already exists.  Could it be
re-evaluated?

On Fri, Jan 30, 2009 at 4:20 PM, Mike Klaas wrote:
> On 29-Jan-09, at 3:38 PM, Daniel Stutzbach wrote:
>
>> On Thu, Jan 29, 2009 at 5:24 PM, Mike Klaas wrote:
>>>
>>> And yet, python isn't confined to mathematical notation.  *, ** are both
>>> overloaded for use in argument lists to no-one's peril, AFAICT.
>>
>> Certainly, but there is no danger of confusing them for multiplication in
>> context, whereas:
>>
>>     split_comma = partial(str.split, ..., ',')
>>
>> to me looks like "make ',' the last argument" rather than "make ',' the
>> second argument".
>
> Yes, I agree.  I mistakenly thought that that was the proposal under
> discussion (that partial(f, ..., 2) == right_curry(f, 2))
>
> -Mike
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
> http://mail.python.org/mailman/options/python-dev/ironfroggy%40gmail.com
>

--
Read my blog! I depend on your acceptance of my opinion! I am interesting!
http://techblog.ironfroggy.com/
Follow me if you're into that sort of thing: http://www.twitter.com/ironfroggy

From martin at v.loewis.de  Sat Jan 31 01:38:22 2009
From: martin at v.loewis.de (Martin v. Löwis)
Date: Sat, 31 Jan 2009 01:38:22 +0100
Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch
In-Reply-To: <4982F0AE.20308@gmail.com>
References: <20090129043706.5614B1E4002@bag.python.org>
	<1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com>
	<498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de>
	<49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de>
	<4982F0AE.20308@gmail.com>
Message-ID: <49839D7E.7070902@v.loewis.de>

>> Ah.  In the 3.0 branch, always do "svn revert ." after svnmerge.
>> It's ok (Nick says it isn't exactly ok, but I don't understand why)
>
> Doing "svn revert ." before making the commit will lose the metadata
> changes that svnmerge uses for its bookkeeping (i.e. if this practice is
> used regularly, the tool will completely lose track of which revisions
> have already been merged).

How so?  The metadata are getting tracked just fine, no loss whatsoever.

> That won't bother those of us that are only
> backporting cherry-picked revisions, but is rather inconvenient for
> anyone checking for revisions that haven't been backported yet, but
> haven't been explicitly blocked either.

Take a look at r68901, which I merged using the procedure I described.

    svn diff -r68900:68901 --depth empty .

gives

Modified: svnmerge-integrated
- /python/branches/py3k:1-67498,67522-67529,67531-67533,67535-67544,67546-67549,67551-67584,67586-67602,67604-67606,67608-67609,67611-67619,67621-67635,67638,67650,67653-67701,67703-67712,67714,67716-67746,67748,67750-67762,67764-67797,67799-67809,67811-67822,67825-67838,67840-67850,67852-67857,67859-67885,67888-67902,67904-67909,67911-67931,67933,67937-67938,67940,67950-67955,67957-67958,67960-67963,67965-67973,67975-67980,67982,67984-68014,68016-68058,68060-68089,68091-68093,68101,68103,68132,68137,68139-68152,68169-68170,68175,68178,68184,68193,68200,68205-68206,68212,68216,68223-68224,68226-68229,68237,68242,68245,68247,68249,68309,68321,68342,68363,68375,68401,68427,68440,68443,68451,68454,68463,68474-68475,68477,68508,68511,68525,68529,68553,68581,68587,68615,68619,68630,68638,68650-68653,68662,68669,68675,68677,68700,68709,68730,68732,68746,68767-68770,68782,68814-68815,68836,68855,68857,68887,68895
+ /python/branches/py3k:1-67498,67522-67529,67531-67533,67535-67544,67546-67549,67551-67584,67586-67602,67604-67606,67608-67609,67611-67619,67621-67635,67638,67650,67653-67701,67703-67712,67714,67716-67746,67748,67750-67762,67764-67797,67799-67809,67811-67822,67825-67838,67840-67850,67852-67857,67859-67885,67888-67902,67904-67909,67911-67931,67933,67937-67938,67940,67950-67955,67957-67958,67960-67963,67965-67973,67975-67980,67982,67984-68014,68016-68058,68060-68089,68091-68093,68101,68103,68132,68137,68139-68152,68169-68170,68175,68178,68184,68193,68200,68205-68206,68212,68216,68223-68224,68226-68229,68237,68242,68245,68247,68249,68309,68321,68342,68363,68375,68401,68427,68440,68443,68451,68454,68463,68474-68475,68477,68508,68511,68525,68529,68553,68581,68587,68615,68619,68630,68638,68650-68653,68662,68669,68675,68677,68700,68709,68730,68732,68746,68767-68770,68782,68814-68815,68836,68855,68857,68887,68895,68898

As you can see, 68898 has been added to svnmerge-integrated, and this
is indeed the revision that I merged.

> Doing "svn resolved ." assumes that you did everything else correctly,
> and even then I don't see how svnmerge could both backport the py3k
> changes to the metadata and make its own changes and still get the
> metadata to a sane state.

The *only* interesting metadata in the svnmerge-integrated property are
the ones that svnmerge has written, and svnmerge writes them correctly.

> The consequence of getting this approach wrong
> is that the merge state of the 3.0 maintenance branch can be clobbered
> completely (losing track both of which revisions have been backported
> and which have been blocked).

Not with the procedure I described.

> Doing both "svn revert ." and "svnmerge merge -M -F " clears
> out the conflicted metadata and then correctly updates the metadata for
> the revisions that have been backported.  It will always update the
> svnmerge metadata correctly, regardless of the relative order of the
> svnmerge and svn update operations.

I don't understand why you bring up this "regardless of the relative
order"?  Who ever proposed a different order?  If you do things in the
order I suggest, everything will be fine, right?

> Given the choice of a method which will always do the right thing, over
> one which always does the wrong thing and another one which only does
> the right thing if I did two other things in the right order and will
> completely trash the bookkeeping if I get it wrong

That's open for debate.  What *specific* wrong order are you talking
about?  If you do things in the right order, will it still get the
bookkeeping wrong?

> If there's something wrong with my understanding of either svn
> properties or the operation of svnmerge that means the quicker
> approaches aren't as broken as I think they are, then I'd be happy to
> adopt one of them (since they *are* faster than my current approach).
> But until someone pokes a hole in my logic, I'll stick with the
> slower-but-always-correct methodology (and continue advocating that
> approach to everyone else doing updates that affect all four branches).

See above.  You claim that doing things the way I recommend will lose
metadata; I believe this claim is false.

Regards,
Martin

From solipsis at pitrou.net  Sat Jan 31 01:42:40 2009
From: solipsis at pitrou.net (Antoine Pitrou)
Date: Sat, 31 Jan 2009 00:42:40 +0000 (UTC)
Subject: [Python-Dev] Partial function application 'from the right'
References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com>
	<5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com>
	<76fd5acf0901301638x6d4da376w34fe78d327af7c87@mail.gmail.com>
Message-ID: 

Calvin Spealman <ironfroggy at gmail.com> writes:
>
> I am just replying to the end of this thread to throw in a reminder
> about my partial.skip patch, which allows the following usage:
>
>     split_one = partial(str.split, partial.skip, 1)
>
> Not looking to say "mine is better", but if the idea is being given
> merit, I like the skipping arguments method better than just the
> "right partial", which I think is confusing combined with keyword and
> optional arguments.  And, this patch already exists.  Could it be
> re-evaluated?

Sorry, where is the patch?

If one writes X = partial.skip, it looks quite nice:

    split_one = partial(str.split, X, 1)

Regards

Antoine.
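The placeholder behaviour Calvin describes can be approximated in pure Python.  The names below (`skip_partial`, `SKIP`) are illustrative stand-ins, not the API of the actual patch: each `SKIP` in the stored argument list marks a hole that is filled, left to right, from the eventual call's positional arguments.

```python
SKIP = object()  # sentinel marking a "hole" in the stored arguments


def skip_partial(func, *stored, **stored_kw):
    """Sketch of the partial.skip idea (hypothetical helper): holes in
    'stored' are filled from the call's positional arguments, and any
    leftover call arguments are appended at the end."""
    def wrapper(*args, **kwargs):
        fill = iter(args)
        merged = [next(fill) if a is SKIP else a for a in stored]
        merged.extend(fill)  # leftover call-time arguments go on the end
        return func(*merged, **{**stored_kw, **kwargs})
    return wrapper


# Split on whitespace at most once, with the target string filling the hole:
split_one = skip_partial(str.split, SKIP, None, 1)
print(split_one('a b c'))  # ['a', 'b c']
```

One detail worth noting: the thread's `partial(str.split, partial.skip, 1)` would actually pass 1 as the separator; the sketch above stores an explicit `None` separator so that the 1 lands in the maxsplit slot.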
From martin at v.loewis.de Sat Jan 31 01:43:36 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 01:43:36 +0100 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <4982F4B5.7090001@gmail.com> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <4982F4B5.7090001@gmail.com> Message-ID: <49839EB8.2060506@v.loewis.de> > (I believe that svnmerge actually does get that case right, but I > haven't checked it extensively - since if it does get it right, I don't > understand why it leaves the conflict in place instead of automatically > marking it as resolved). I think this is a plain bug. It invokes "svn merge", which creates a conflict, then removes the conflicted property (regardless of whether there was a conflict), then writes the property fresh. It doesn't consider the case that there might have been a conflict, just because such conflict didn't occur in their testing. > Regardless, the consequences of forgetting that you did the svn up after > the merge instead of before (e.g. if it took some time to get the > backported version working, or if something interrupted you between the > initial backport/update and the final test and commit step) are fairly > hard to clean up, so I prefer the safe approach (despite the extra > minute or two it takes for svnmerge to recalculate the metadata changes). If I find that it conflicts on commit, I rather restart all over. 
Regards, Martin From martin at v.loewis.de Sat Jan 31 02:17:45 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 02:17:45 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> References: <497F6E55.6090608@v.loewis.de> <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> Message-ID: <4983A6B9.40304@v.loewis.de> > Serious question: does anybody know how to get better communication > from the user base? My impression is that it's pretty hard to find out > who is actually using 3.0, and get any feedback from them. I think the bug tracker is a way in which users communicate with developers. There have been 296 issues since Dec 3rd that got tagged with version 3.0. The absolute majority of these were documentation problems (documentation was incorrect). Then, I would say we have installation problems, and then problems with IDLE. There is also a significant number of 2to3 problems. > I'm using Windows, and although I *can* build a lot of stuff myself, I > really don't want to be bothered, so I rely on bdist_wininst > installers being available, which is an additional constraint. Notice that bdist_wininst doesn't really work in 3.0. So you likely won't see many packages until 3.0.1 is released. Regards, Martin From alexander.belopolsky at gmail.com Sat Jan 31 04:02:21 2009 From: alexander.belopolsky at gmail.com (Alexander Belopolsky) Date: Fri, 30 Jan 2009 22:02:21 -0500 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> <76fd5acf0901301638x6d4da376w34fe78d327af7c87@mail.gmail.com> Message-ID: On Fri, Jan 30, 2009 at 7:42 PM, Antoine Pitrou wrote: .. 
> If one writes X = partial.skip, it looks quite nice: > > split_one = partial(str.split, X, 1) Or even _ = partial.skip split_one = partial(str.split, _, 1) From aahz at pythoncraft.com Sat Jan 31 05:51:27 2009 From: aahz at pythoncraft.com (Aahz) Date: Fri, 30 Jan 2009 20:51:27 -0800 Subject: [Python-Dev] FINAL REMINDER: OSCON 2009: Call For Participation Message-ID: <20090131045127.GA6423@panix.com> The O'Reilly Open Source Convention has opened up the Call For Participation -- deadline for proposals is Tuesday Feb 3. OSCON will be held July 20-24 in San Jose, California. For more information, see http://conferences.oreilly.com/oscon http://en.oreilly.com/oscon2009/public/cfp/57 -- Aahz (aahz at pythoncraft.com) <*> http://www.pythoncraft.com/ Weinberg's Second Law: If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization. From ncoghlan at gmail.com Sat Jan 31 06:00:49 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 31 Jan 2009 15:00:49 +1000 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <49839D7E.7070902@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> <4982F0AE.20308@gmail.com> <49839D7E.7070902@v.loewis.de> Message-ID: <4983DB01.70503@gmail.com> Martin v. Löwis wrote: > See above. You claim that doing things the way I recommend will lose > metadata; I believe this claim is false. I can see how "svn resolved ." gets it right (now that I understand how the conflict is being produced and then fixed automatically by svnmerge, but not actually marked as resolved). I still don't understand how "svn revert ." can avoid losing the metadata changes unless svnmerge is told to modify the properties again after they have been reverted.
Or am I misunderstanding SVN, and the revert command doesn't actually revert property changes? Cheers, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From martin at v.loewis.de Sat Jan 31 08:18:44 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 31 Jan 2009 08:18:44 +0100 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <4983DB01.70503@gmail.com> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> <4982F0AE.20308@gmail.com> <49839D7E.7070902@v.loewis.de> <4983DB01.70503@gmail.com> Message-ID: <4983FB54.4050802@v.loewis.de> > I can see how "svn resolved ." gets it right (now that I understand how > the conflict is being produced and then fixed automatically by svnmerge, > but not actually marked as resolved). > > I still don't understand how "svn revert ." can avoid losing the > metadata changes unless svnmerge is told to modify the properties again > after they have been reverted. Or am I misunderstanding SVN, and the > revert command doesn't actually revert property changes? Oops, I meant "svn resolved ." all the time. When I wrote "svn revert .", it was by mistake. Regards, Martin From regebro at gmail.com Sat Jan 31 09:33:19 2009 From: regebro at gmail.com (Lennart Regebro) Date: Sat, 31 Jan 2009 09:33:19 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <4983A6B9.40304@v.loewis.de> References: <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> <4983A6B9.40304@v.loewis.de> Message-ID: <319e029f0901310033i404139d3n107f25e87def27b9@mail.gmail.com> Just my 2 eurocents: I think version numbers communicate a couple of things. 
One thing they communicate is that if you go from x.y.0 to x.y.1 (or from x.y.34 to x.y.35 for that matter) you signify that this is a bug fix release, and that the risk of any of your stuff breaking is close to zero, unless you somehow were relying on what essentially was broken behavior. It's also correct that a .0 anywhere indicates that you should wait, and that a .1 indicates that this should be safer. Of course, you can end up where these two things clash: where you need to make a major change that breaks something, but you at the same time don't want to flag "Yes, this will be as bugfree as you normally would expect from a .1 release." My opinion is that in that case, the first rule should win out. Don't make potentially incompatible changes in a minor version increase. So it seems to me here that a 3.0.1 bugfix release, and then a 3.1 with the API changes and C IO is at least the type of numbering I would expect. From p.f.moore at gmail.com Sat Jan 31 11:39:44 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 31 Jan 2009 10:39:44 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <4983A6B9.40304@v.loewis.de> References: <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> <4983A6B9.40304@v.loewis.de> Message-ID: <79990c6b0901310239y7fe73255o988c4b5dbc7b3ba4@mail.gmail.com> 2009/1/31 "Martin v. Löwis" : > Notice that bdist_wininst doesn't really work in 3.0. So you likely > won't see many packages until 3.0.1 is released. Ah, that might be an issue :-) Can you point me at specifics (bug reports or test cases)? I could see if I can help in fixing things. Paul.
From ncoghlan at gmail.com Sat Jan 31 11:45:04 2009 From: ncoghlan at gmail.com (Nick Coghlan) Date: Sat, 31 Jan 2009 20:45:04 +1000 Subject: [Python-Dev] [Python-checkins] Merging to the 3.0 maintenance branch In-Reply-To: <4983FB54.4050802@v.loewis.de> References: <20090129043706.5614B1E4002@bag.python.org> <1afaf6160901290804t3c08134egadfd14493dcb09ae@mail.gmail.com> <498229BE.4060408@gmail.com> <49826577.4030808@v.loewis.de> <49826E1A.6080809@v.loewis.de> <49826F5D.7000009@v.loewis.de> <4982F0AE.20308@gmail.com> <49839D7E.7070902@v.loewis.de> <4983DB01.70503@gmail.com> <4983FB54.4050802@v.loewis.de> Message-ID: <49842BB0.40901@gmail.com> Martin v. Löwis wrote: >> I can see how "svn resolved ." gets it right (now that I understand how >> the conflict is being produced and then fixed automatically by svnmerge, >> but not actually marked as resolved). >> >> I still don't understand how "svn revert ." can avoid losing the >> metadata changes unless svnmerge is told to modify the properties again >> after they have been reverted. Or am I misunderstanding SVN, and the >> revert command doesn't actually revert property changes? > > Oops, I meant "svn resolved ." all the time. When I wrote > "svn revert .", it was by mistake. Ah, in that case we now agree on the right way to do things :) With the explanation as to where the (spurious) conflict is coming from on the initial merge to the maintenance branch, I'm now happy that the only time the revert + regenerate metadata should ever be needed is if someone else checks in a backport between the time when I start a backport and when I go to check it in (which is pretty unlikely in practice). Cheers, Nick.
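As background for the metadata being argued about: the `svnmerge-integrated` property is essentially a comma-separated string of merged revision ranges, so "regenerating the metadata" amounts to re-deriving and unioning such ranges. A rough sketch (not svnmerge's actual code) of what reconciling two conflicting copies of the property involves:

```python
def parse_revs(spec):
    """Parse a revision-range string like '1-3,7,10-12' into a set of ints."""
    revs = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            revs.update(range(lo, hi + 1))
        else:
            revs.add(int(part))
    return revs

def format_revs(revs):
    """Render a set of revisions back into the compact range form."""
    out, revs = [], sorted(revs)
    i = 0
    while i < len(revs):
        j = i
        while j + 1 < len(revs) and revs[j + 1] == revs[j] + 1:
            j += 1  # extend the run of consecutive revisions
        out.append(str(revs[i]) if i == j else "%d-%d" % (revs[i], revs[j]))
        i = j + 1
    return ",".join(out)

# Reconciling two conflicting property values is just a set union:
mine, theirs = "1-100,105", "1-102"
print(format_revs(parse_revs(mine) | parse_revs(theirs)))  # 1-102,105
```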
-- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia --------------------------------------------------------------- From martin at v.loewis.de Sat Jan 31 11:58:59 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 11:58:59 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <79990c6b0901310239y7fe73255o988c4b5dbc7b3ba4@mail.gmail.com> References: <49809B0C.4020905@egenix.com> <20090129220951.GA17786@panix.com> <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> <4983A6B9.40304@v.loewis.de> <79990c6b0901310239y7fe73255o988c4b5dbc7b3ba4@mail.gmail.com> Message-ID: <49842EF3.4010003@v.loewis.de> > Can you point me at specifics (bug reports or test cases)? I could see > if I can help in fixing things. See r69098. Regards, Martin From barry at python.org Sat Jan 31 14:44:12 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 31 Jan 2009 08:44:12 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: <5c6f2a5d0901300850q7b261e47k347811f7b183718b@mail.gmail.com> References: <498295F5.2050607@v.loewis.de> <5c6f2a5d0901300850q7b261e47k347811f7b183718b@mail.gmail.com> Message-ID: <95F1D7BB-5729-4127-9167-8B1FAE8EEB66@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 30, 2009, at 11:50 AM, Mark Dickinson wrote: > On Fri, Jan 30, 2009 at 4:03 PM, Barry Warsaw > wrote: >> To clarify: cruft that should have been removed 3.0 is fine to >> remove for >> 3.0.1, for some definition of "should have been". > > Just to double check, can I take this as a green light to continue > with the cmp removal (http://bugs.python.org/issue1717) for 3.0.1? Yep, go ahead. 
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYRVrHEjvBPtnXfVAQK9SQQAiJct3mWt/+ZIOkI7DDRoBdz8yFvrmbLX 6AnbW+owvnnlzB9QX5PyDfTaTJa5pLJuoiWYRb7vCzxH1daW9KuFvF9qnaYXUhiO TLkyaO/R40aarB79NkE6J8wyRjYRyMoZgz10/GzxWkQgvTg38ESeKh3b6YRyph0N uo18odqAGEs= =QDP8 -----END PGP SIGNATURE----- From barry at python.org Sat Jan 31 14:46:44 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 31 Jan 2009 08:46:44 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: References: <498295F5.2050607@v.loewis.de> Message-ID: <592904F8-89F5-4AC7-B3BF-0C51D3B474A6@python.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 30, 2009, at 1:56 PM, Brett Cannon wrote: > On Fri, Jan 30, 2009 at 08:03, Barry Warsaw wrote: >> -----BEGIN PGP SIGNED MESSAGE----- >> Hash: SHA1 >> >> On Jan 30, 2009, at 12:53 AM, Martin v. Löwis wrote: >> >>>> 1. Barry, who is the release manager for 3.0.1, does not like the >>>> idea >>>> of the cruft that is being proposed removed from 3.0.1. >>> >>> I don't think he actually said that (in fact, I think he said the >>> opposite). It would be good if he clarified, though. >> >> To clarify: cruft that should have been removed 3.0 is fine to >> remove for >> 3.0.1, for some definition of "should have been". > > Great! Then should we start planning for 3.0.1 in terms of release > dates and what to have in the release so we can get this out the door > quickly? How about Friday February 13? If that works for everybody, I'll tag the release on my evening of the 12th so that Martin and other east-of-mes will be able to do their thing by my morning of the 13th. I've added this to the Python release calendar.
Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYRWRXEjvBPtnXfVAQI/+wQAm95gTGojwZXSU8qfBtNXgD/lALMi1ncK ctEOhueAwnRBCnFg9UyqgX8dcmogWL7M+pikpOjVeH/TUiArXDIlcY+glkVzgMo4 7DizBu5b6SpJq8h1iTvniqsT7SDZeE1S1FhPBIi5cIja78fD2F5Ny5OGV2K377TP GhjZxX8gepw= =OPBI -----END PGP SIGNATURE----- From barry at python.org Sat Jan 31 14:47:23 2009 From: barry at python.org (Barry Warsaw) Date: Sat, 31 Jan 2009 08:47:23 -0500 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: <1afaf6160901301207y1b8e9c3clee9d9b4b071f365b@mail.gmail.com> References: <498295F5.2050607@v.loewis.de> <1afaf6160901301207y1b8e9c3clee9d9b4b071f365b@mail.gmail.com> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On Jan 30, 2009, at 3:07 PM, Benjamin Peterson wrote: > On Fri, Jan 30, 2009 at 1:56 PM, Brett Cannon > wrote: >> Great! Then should we start planning for 3.0.1 in terms of release >> dates and what to have in the release so we can get this out the door >> quickly? > > I think considering there's only two release blockers we should plan > for about a week or two from now. > > I'm not sure if we want to do a release candidate; we didn't for > 2.6.1, but maybe it would be good to see if the community can find any > other horrible problems. Let's JFDI. No release candidate. Barry -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.9 (Darwin) iQCVAwUBSYRWa3EjvBPtnXfVAQIVpgQAo1tb/RJ81WFBJHH1GhdhtKagrB5p9MSl U+GfnLx9mEtqBqQ9rnXaQQaPpJjvNmXc10K+8oDdwCJHSX3k66JbK4U4BOBqWgc3 0PTrdIn5/4PqfexT3HWNmH/mZCZXb36HDcE6fxW5CWxuxHbNLypBY7P52XgVJIBW hqMBQVVNxgw= =Zq3w -----END PGP SIGNATURE----- From martin at v.loewis.de Sat Jan 31 16:47:30 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 16:47:30 +0100 Subject: [Python-Dev] Subversion upgraded to 1.5 Message-ID: <49847292.5080401@v.loewis.de> I have now upgraded subversion to 1.5.1 on svn.python.org. Please let me know if you encounter problems. 
Regards, Martin From ludvig at lericson.se Sat Jan 31 16:50:55 2009 From: ludvig at lericson.se (Ludvig Ericson) Date: Sat, 31 Jan 2009 16:50:55 +0100 Subject: [Python-Dev] Fwd: Partial function application 'from the right' References: <64D950B1-C422-45CE-BBF0-AAA3764BE31B@lericson.se> Message-ID: Begin forwarded message: > From: Ludvig Ericson > Date: January 31, 2009 16:43:50 GMT+01:00 > To: Alexander Belopolsky > Subject: Re: [Python-Dev] Partial function application 'from the > right' > > On Jan 31, 2009, at 04:02, Alexander Belopolsky wrote: > >> On Fri, Jan 30, 2009 at 7:42 PM, Antoine Pitrou >> wrote: >> .. >>> If one writes X = partial.skip, it looks quite nice: >>> >>> split_one = partial(str.split, X, 1) >> >> Or even >> >> _ = partial.skip >> split_one = partial(str.split, _, 1) > > Or even > > ? = partial.skip > split_one = partial(str.split, ?, 1) From p.f.moore at gmail.com Sat Jan 31 17:01:09 2009 From: p.f.moore at gmail.com (Paul Moore) Date: Sat, 31 Jan 2009 16:01:09 +0000 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: <49842EF3.4010003@v.loewis.de> References: <79990c6b0901300528h160c0c81t74e44497fbcf709e@mail.gmail.com> <4983A6B9.40304@v.loewis.de> <79990c6b0901310239y7fe73255o988c4b5dbc7b3ba4@mail.gmail.com> <49842EF3.4010003@v.loewis.de> Message-ID: <79990c6b0901310801m33286a0bmee745ff71269f09a@mail.gmail.com> 2009/1/31 "Martin v. Löwis" : >> Can you point me at specifics (bug reports or test cases)? I could see >> if I can help in fixing things. > > See r69098. Thanks. So 3.0.1 and later will be fine - my apologies, I hadn't quite understood what you said. Paul.
From digitalxero at gmail.com Sat Jan 31 18:45:12 2009 From: digitalxero at gmail.com (Dj Gilcrease) Date: Sat, 31 Jan 2009 10:45:12 -0700 Subject: [Python-Dev] PEP 374 (DVCS) now in reST In-Reply-To: References: <497BA590.7060406@v.loewis.de> <497BB5E7.4080606@gmail.com> <8E8084F3-0FAF-468E-8820-1ADA3C49380F@python.org> <497CB185.3010601@v.loewis.de> <573EFD3B-2215-4747-B4B0-45C35A9F9F86@gmail.com> Message-ID: a Mercurial "super client" http://blog.red-bean.com/sussman/?p=116 Figured I would link to this for the people doing the HG investigation From g.brandl at gmx.net Sat Jan 31 19:25:45 2009 From: g.brandl at gmx.net (Georg Brandl) Date: Sat, 31 Jan 2009 19:25:45 +0100 Subject: [Python-Dev] Python 3.0.1 In-Reply-To: References: <49809B0C.4020905@egenix.com> <49809C1B.3090805@voidspace.org.uk> <4980CD7A.6000506@v.loewis.de> <87d4e7dj0p.fsf@xemacs.org> <20090129113130.GA2490@amk.local> <7F2AD110-96A2-4367-821E-6DB8E2E3DC86@python.org> Message-ID: Guido van Rossum schrieb: > Frankly, I don't really believe the users for whom those rules were > created are using 3.0 yet. Instead, I expect there to be two types of > users: people in the educational business who don't have a lot of > bridges to burn and are eager to use the new features; and developers > of serious Python software (e.g. Twisted) who are trying to figure out > how to port their code to 3.0. The first group isn't affected by the > changes we're considering here (e.g. removing cmp or some obscure > functions from the operator module). The latter group *may* be > affected, simply because they may have some pre-3.0 code using old > features that (by accident) still works under 3.0. > > On the one hand I understand that those folks want a stable target. On > the other hand I think they would prefer to find out sooner rather > than later they're using stuff they shouldn't be using any more. 
It's > a delicate balance for sure, and I certainly don't want to open the > floodgates here, or rebrand 3.1 as 3.0.1 or anything like that. But I > really don't believe that the strictest interpretation of "no new > features" will benefit us for 3.0.1. Perhaps we should decide when to > go back to a more strict interpretation of the rules based on the > uptake of Python 3 compared to Python 2. +1. Georg -- Thus spake the Lord: Thou shalt indent with four spaces. No more, no less. Four shall be the number of spaces thou shalt indent, and the number of thy indenting shall be four. Eight shalt thou not indent, nor either indent thou two, excepting that thou then proceed to four. Tabs are right out. From leif.walsh at gmail.com Sat Jan 31 20:40:12 2009 From: leif.walsh at gmail.com (Leif Walsh) Date: Sat, 31 Jan 2009 14:40:12 -0500 Subject: [Python-Dev] Partial function application 'from the right' In-Reply-To: <76fd5acf0901301638x6d4da376w34fe78d327af7c87@mail.gmail.com> References: <5169ff10901290612y745bdeefhb84ff03bfc1e63bf@mail.gmail.com> <5E031BDD-32EC-47CC-8CE4-D4D3888A49F2@gmail.com> <76fd5acf0901301638x6d4da376w34fe78d327af7c87@mail.gmail.com> Message-ID: On Fri, Jan 30, 2009 at 7:38 PM, Calvin Spealman wrote: > I am just replying to the end of this thread to throw in a reminder > about my partial.skip patch, which allows the following usage: > > split_one = partial(str.split, partial.skip, 1) > > Not looking to say "mine is better", but if the idea is being given > merit, I like the skipping arguments method better than just the > "right partial", which I think is confusing combined with keyword and > optional arguments. And, this patch already exists. Could it be > re-evaluated? +1 but I don't know where the patch is. 
-- Cheers, Leif From martin at v.loewis.de Sat Jan 31 20:43:06 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 20:43:06 +0100 Subject: [Python-Dev] 3.0.1/3.1.0 summary In-Reply-To: <592904F8-89F5-4AC7-B3BF-0C51D3B474A6@python.org> References: <498295F5.2050607@v.loewis.de> <592904F8-89F5-4AC7-B3BF-0C51D3B474A6@python.org> Message-ID: <4984A9CA.1010307@v.loewis.de> > How about Friday February 13? Fine with me (although next Friday (Feb 6) would work slightly better) Martin From dickinsm at gmail.com Sat Jan 31 22:07:37 2009 From: dickinsm at gmail.com (Mark Dickinson) Date: Sat, 31 Jan 2009 21:07:37 +0000 Subject: [Python-Dev] Removing tp_compare? Message-ID: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> Here's a question (actually, three questions) for python-dev that came up in the issue 1717 (removing cmp) discussion. Once the cmp removal is complete, the type object's tp_compare slot will no longer be used. The current plan is to rename it to tp_reserved, change its type to (void *), and raise TypeError when initializing any type that attempts to put something nonzero into that slot. But another possibility would be to remove it entirely. So... Questions: (1) Is it desirable to remove tp_compare entirely, instead of just renaming it? (2) If so, for which Python version should that removal take place? 3.0.1? 3.1.0? 4.0? and the all-important bikeshed question: (3) In the meantime, what should the renamed slot be called? tp_reserved? In the issue 1717 discussion, Raymond suggested tp_deprecated_compare. Any thoughts? My own opinion is that it really doesn't matter that much if the slot is left in; it's just a little annoying to have such backwards-compatibility baggage already present in the shiny new 3.0 series. A little like finding a big scratch on your brand-new bright yellow Hummer H3. Or not. N.B. 
The same questions apply to nb_reserved (which used to be nb_long) in the PyNumberMethods structure. Mark From benjamin at python.org Sat Jan 31 22:18:13 2009 From: benjamin at python.org (Benjamin Peterson) Date: Sat, 31 Jan 2009 15:18:13 -0600 Subject: [Python-Dev] Removing tp_compare? In-Reply-To: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> Message-ID: <1afaf6160901311318y74b30a2bt8d9e9804525bb130@mail.gmail.com> On Sat, Jan 31, 2009 at 3:07 PM, Mark Dickinson wrote: > Once the cmp removal is complete, the type object's tp_compare > slot will no longer be used. The current plan is to rename it to > tp_reserved, change its type to (void *), and raise TypeError when > initializing any type that attempts to put something nonzero into > that slot. But another possibility would be to remove it entirely. > So... I think we should keep as tp_reserved in 3.0.1. In 3.1, as I mentioned in the issue, I'd like to reuse it as a slot for __bytes__. Confusion could be avoided still raising a TypeError for a non-null tp_reserved slot unless the type has Py_TPFLAGS_HAVE_BYTES flag set. After a while, we could just make it default. > > N.B. The same questions apply to nb_reserved (which used > to be nb_long) in the PyNumberMethods structure. IMO, it's fine to keep them around, just in case. -- Regards, Benjamin From martin at v.loewis.de Sat Jan 31 22:28:22 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 22:28:22 +0100 Subject: [Python-Dev] Removing tp_compare? In-Reply-To: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> Message-ID: <4984C276.70100@v.loewis.de> > (1) Is it desirable to remove tp_compare entirely, instead of > just renaming it? No. > (2) If so, for which Python version should that removal take place? > 3.0.1? 3.1.0? 4.0? 
If it is removed, it definitely shouldn't be removed in 3.0.1; that would be a binary-incompatible change. > (3) In the meantime, what should the renamed slot be called? > tp_reserved? In the issue 1717 discussion, Raymond suggested > tp_deprecated_compare. tp_reserved sounds fine. In 3.0.1, filling it with a function pointer should give no error, since that would be a binary-incompatible change. > Any thoughts? My own opinion is that it really doesn't matter > that much if the slot is left in; it's just a little annoying to have > such backwards-compatibility baggage already present in > the shiny new 3.0 series. A little like finding a big scratch > on your brand-new bright yellow Hummer H3. Or not. Well, there is also PY_SSIZE_T_CLEAN. I asked before 3.0, and was told that it was too late to remove it. Regards, Martin From greg at krypto.org Sat Jan 31 22:35:23 2009 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 31 Jan 2009 13:35:23 -0800 Subject: [Python-Dev] Subversion upgraded to 1.5 In-Reply-To: <49847292.5080401@v.loewis.de> References: <49847292.5080401@v.loewis.de> Message-ID: <52dc1c820901311335h17ab0ed3h220b0bcbc75f274@mail.gmail.com> I'm seeing the following when trying to svn commit: Transmitting file data ...Read from remote host svn.python.org: Operation timed out svn: Commit failed (details follow): svn: Connection closed unexpectedly ... That was with subversion 1.4.4; copying my changes to a different host with subversion 1.5.1 has the same result. svn update works fine on both hosts in the same sandbox i'm trying to commit from. fwiw, they are both connecting to svn.python.org using IPv6 but that should be irrelevant to svn+ssh as the tcp6 ssh connection works fine. any ideas? -Greg On Sat, Jan 31, 2009 at 7:47 AM, "Martin v. Löwis" wrote: > I have now upgraded subversion to 1.5.1 on svn.python.org. > > Please let me know if you encounter problems.
> > Regards, > Martin > _______________________________________________ > Python-Dev mailing list > Python-Dev at python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/greg%40krypto.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From solipsis at pitrou.net Sat Jan 31 22:37:05 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Jan 2009 21:37:05 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Removing_tp=5Fcompare=3F?= References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> <4984C276.70100@v.loewis.de> Message-ID: Martin v. Löwis v.loewis.de> writes: > > > Any thoughts? My own opinion is that it really doesn't matter > > that much if the slot is left in; it's just a little annoying to have > > such backwards-compatibility baggage already present in > > the shiny new 3.0 series. A little like finding a big scratch > > on your brand-new bright yellow Hummer H3. Or not. > > Well, there is also PY_SSIZE_T_CLEAN. I asked before 3.0, and was told > that it was too late to remove it. Are all modules PY_SSIZE_T_CLEAN? Last I looked, _ssl.c still used int or long in various places instead of Py_ssize_t. Regards Antoine. From martin at v.loewis.de Sat Jan 31 22:48:44 2009 From: martin at v.loewis.de (=?UTF-8?B?Ik1hcnRpbiB2LiBMw7Z3aXMi?=) Date: Sat, 31 Jan 2009 22:48:44 +0100 Subject: [Python-Dev] Removing tp_compare? In-Reply-To: References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> <4984C276.70100@v.loewis.de> Message-ID: <4984C73C.3000804@v.loewis.de> > Are all modules PY_SSIZE_T_CLEAN? Last I looked, _ssl.c still used int or long > in various places instead of Py_ssize_t. That's probably still the case.
Regards, Martin From martin at v.loewis.de Sat Jan 31 23:04:24 2009 From: martin at v.loewis.de (=?ISO-8859-1?Q?=22Martin_v=2E_L=F6wis=22?=) Date: Sat, 31 Jan 2009 23:04:24 +0100 Subject: [Python-Dev] Subversion upgraded to 1.5 In-Reply-To: <52dc1c820901311335h17ab0ed3h220b0bcbc75f274@mail.gmail.com> References: <49847292.5080401@v.loewis.de> <52dc1c820901311335h17ab0ed3h220b0bcbc75f274@mail.gmail.com> Message-ID: <4984CAE8.1050509@v.loewis.de> > any ideas? Assuming you reported this right after it happened - sorry, no. I can't find anything relevant in the log files (although a precise time of failure would have helped). Does a plain "ssh pythondev at svn.python.org" still work? What path did you try to check into? Regards, Martin From janssen at parc.com Sat Jan 31 23:06:48 2009 From: janssen at parc.com (Bill Janssen) Date: Sat, 31 Jan 2009 14:06:48 PST Subject: [Python-Dev] =?utf-8?q?Removing_tp=5Fcompare=3F?= In-Reply-To: References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> <4984C276.70100@v.loewis.de> Message-ID: <57543.1233439608@parc.com> Antoine Pitrou wrote: > Martin v. Löwis v.loewis.de> writes: > > > > > Any thoughts? My own opinion is that it really doesn't matter > > > that much if the slot is left in; it's just a little annoying to have > > > such backwards-compatibility baggage already present in > > > the shiny new 3.0 series. A little like finding a big scratch > > > on your brand-new bright yellow Hummer H3. Or not. > > > > Well, there is also PY_SSIZE_T_CLEAN. I asked before 3.0, and was told > > that it was too late to remove it. > > Are all modules PY_SSIZE_T_CLEAN? Last I looked, _ssl.c still used int or long > in various places instead of Py_ssize_t. _ssl.c does indeed use int or long in various places. I'm not sure how far it can go with Py_ssize_t -- is OpenSSL 64-bit clean?
Bill From solipsis at pitrou.net Sat Jan 31 23:09:54 2009 From: solipsis at pitrou.net (Antoine Pitrou) Date: Sat, 31 Jan 2009 22:09:54 +0000 (UTC) Subject: [Python-Dev] =?utf-8?q?Removing_tp=5Fcompare=3F?= References: <5c6f2a5d0901311307t42029621s3d05a0f31dd26d9d@mail.gmail.com> <4984C276.70100@v.loewis.de> <57543.1233439608@parc.com> Message-ID: Bill Janssen parc.com> writes: > > is OpenSSL 64-bit clean? I'm afraid I'm completely incompetent on this subject...! Regards Antoine. From greg at krypto.org Sat Jan 31 23:49:17 2009 From: greg at krypto.org (Gregory P. Smith) Date: Sat, 31 Jan 2009 14:49:17 -0800 Subject: [Python-Dev] Subversion upgraded to 1.5 In-Reply-To: <4984CAE8.1050509@v.loewis.de> References: <49847292.5080401@v.loewis.de> <52dc1c820901311335h17ab0ed3h220b0bcbc75f274@mail.gmail.com> <4984CAE8.1050509@v.loewis.de> Message-ID: <52dc1c820901311449w6b487195t55f249ca88d163f2@mail.gmail.com> I'm just trying to commit the following to trunk: Sending Lib/test/test_socket.py Sending Misc/NEWS Sending Modules/socketmodule.c Transmitting file data ... I have another svn commit attempt which appears to be hanging and destined to time out running right now. ssh -v pythondev at svn.python.org works fine. -gps On Sat, Jan 31, 2009 at 2:04 PM, "Martin v. Löwis" wrote: > > any ideas? > > Assuming you reported this right after it happened - sorry, no. > I can't find anything relevant in the log files (although a > precise time of failure would have helped). > > Does a plain "ssh pythondev at svn.python.org" still work? > > What path did you try to check into? > > Regards, > Martin > -------------- next part -------------- An HTML attachment was scrubbed... URL: